Sunsetting Social Media and the Dawn of Group Chats

We live in a time where, apparently, Elon Musk and Mark Zuckerberg are running a tight race over who can kill their own social network faster than Rupert Murdoch killed MySpace, and who can lose more money in the process than Yahoo! did with Tumblr. Musk and Zuck are being supported in their quest by the unlikeliest of partners: Discord, Telegram, WhatsApp and various governments. So what is happening?

Social media is connecting us

Social media – and stop me if you’ve heard this one before – brings us one main benefit: staying connected with friends, family and classmates, and finding new friends. This benefit very clearly serves a need within all of us, as becomes evident when you attempt to leave a social network: staying connected becomes much harder and takes a lot of effort in 1:1 conversations to keep each connection alive.

Yet, at the same time, social media has become a battleground of attention. It’s a free-for-all between advertisers, influencers and your friends, and your friends are hopelessly outmatched. Facebook is a marketplace competing with eBay, YouTube competes with TV stations for ad money, and in between it all are influencers, opinion leaders and content creators, trying to get their voices heard and their art seen.

And it’s tiring.

Social media is dividing us

This battleground of attention is particularly nasty when it comes to politics. There is no discussion, there never has been. It’s a strawman building contest, the “other side” needs to be vilified, and the most outrageous believable claims are typically the most viral. Social media sites intentionally or unintentionally support this behavior: Enragement = Engagement. And once the villain is constructed, it can be justifiably attacked, either with hate online or with hammers, guns, explosives or incendiaries offline. Throwing soup, cakes and paint suddenly is one of the least worrying outcomes in context.

But remember, the battleground of attention isn’t actually why we are on social media sites! It’s a byproduct of the technology’s mere availability. It’s a fluke, caused by a product manager many years ago figuring out that by adding a “share publicly” option, social media could also take over the blogosphere.

Social media is dying

During the rise of social media sites, there was nothing which could connect us in just the same way. IRC and email were cumbersome and ill-suited to picture sharing. SMS and MMS got expensive real fast, and the very thought of using mobile data when it wasn’t strictly necessary burnt a hole in our pockets.

This situation has changed. Group chats and communities exist – on Discord, Signal, Telegram, WhatsApp and more – and they help us stay connected with friends, family and classmates, and find new friends. Social media is obsolete.

Group chats are the future

Group chats don’t need to participate in the battleground of attention. Should an influencer invade a space and start promoting products, an opinion leader appear and exhaustively talk about the same issues over and over, or a new parent share baby pics excessively, you can just make a new group chat with everyone you care about – and without the annoying person. The original group chat may grow quieter or die altogether, but at no point did anyone need to interfere, kick someone, or hurt anyone’s feelings.

Group chats also facilitate the one good thing about Google+: social circles. It’s not just possible but naturally occurring that you share stuff with only the people you know will care about it – you ask your question on how to draw perspective in your art group chat, share the news that you just broke up in your close family group chat first, and geek out about model trains in a model train group chat.

To me, the group chat apps of today – especially Discord and Signal – have completely replaced social media as a way to stay connected. Twitter now assumes the role of YouTube for most intents and purposes: it’s just media. Media which I passively consume and sometimes create – the battleground of attention.

A battleground that no longer is a source of social connections.

Mastodon is not the solution, but yet another problem

We all know that big tech has a problem, from unfair policies to monopolistic bullying. Around New Year 2018, I finally had enough and made an account on Mastodon – a federated, not-for-profit-but-for-good kind of Twitter alternative. I’d be a trendsetter; I invited all my friends, some of whom joined as well – but very quickly I was back on Twitter anyway. What happened?

I didn’t know back then, but I think I do now. Part of the problem was the network effect, with just more interesting people being on Twitter than Mastodon, but the other part is what I want to talk about here:

Mastodon has some inherent structural problems.

  • Instances are fragile & exploitable
  • Trust & safety is awful
  • The UX gets sacrificed for band-aid fixes

Let’s tackle these one by one.

Fragile Instances

Mastodon runs on the idea that there is no central server, but instead a federated bunch of servers owned and operated by random people on the internet (including yourself, if you want). This “fediverse” is somewhat interoperable, so you can follow and talk to people from other instances. There are some caveats to this, which we’ll come to later.

“Random people on the internet” doesn’t sound trust-inspiring, and that’s because it isn’t. A quick scroll through the instance lists shows that some 73% of the instances listed have since been shut down. Why? Probably for the same reasons most personal website projects die: lost interest, too expensive, too time-consuming to maintain. If you host your own instance, this is fine – it’s yet another personal project, after all – but if you have users, this is a problem: as a user, I cannot count on my social media profile still existing tomorrow. This is in stark contrast to any of the more established social media websites and, honestly, even the most chaotic of startups, where you can generally count on being told that they’re closing shop a few weeks before the lights go out.

No trust, no safety

Trust & Safety (TnS) is a catch-all term for the teams at websites that write and enforce community guidelines, combat fake news and spam bots, handle moderation, and so on. Since Mastodon is federated, this team often consists of one person: the instance owner.

I have had my fair share of volunteer TnS work over the years, and I can tell you: this stuff is very time- and soul-consuming. The community you run has certain expectations of what content is and isn’t shown on the server, and will yell at you if your rules and enforcement are too strict, and also if someone broke the rules while you were asleep and the content stayed up for a few hours. For more subject-focused communities it’s often somewhat more forgiving – very few people will show up on a model train subreddit or Discord server with the intent to post anything but model trains. But Mastodon generally doesn’t work like this. Rather, you, as a user, choose your home instance (possibly the same one your friend uses), and once you have it, you post whatever. And “whatever” ranges from porn to gore to CSAM – child sexual abuse material.

At which point you, as the instance owner, are already in hot water: hosting CSAM will get the cops to your door sooner rather than later. For companies like Facebook and Twitter, this is part of their calculations. They can hire content reviewers and – in theory, anyway – take steps to ensure that these people don’t break from constantly watching the worst of humanity. For a Mastodon instance, the best you can do is get volunteer moderators – untrained, unaware of how bad it can become – and hope for the best. Or shut down the instance once it becomes unbearable.

Blanket banning

There is a small ray of light for the TnS matter, though: like-minded people tend to be on the same instances. By simply blocking any interactions from an entire instance, an instance owner can immediately get rid of a large chunk of potentially problematic users… or the entire country of Japan. The owner of one instance I’m on felt compelled to block essentially all Mastodon instances ending in .jp, to not have to look at lolicon content – sexual drawings of young girls, legal in Japan and some other countries, but deeply illegal in many others.

This kind of blanket banning has some degrees of severity – maybe images from these instances won’t be served, maybe posts from these instances won’t be shown unless you follow someone, maybe all interactions are banned. Whatever setting the instance owner chooses, it directly affects the experience of the users, from “I have to leave my timeline to look at this image” to “I actually need to have a second account on another instance to interact with a friend”.

User experiencen’t

Mastodon, being an open source project, of course comes in many forms and colors. Some instances try to emulate Twitter’s (now: old) design, some try to emulate TweetDeck, some Instagram; some are non-browser-based standalone apps, and so on. But as far as I can tell, they all have one thing in common: they leak abstractions – especially the language about “instances” when moving accounts, or usernames being @username@instance.tld. Mastodon also doesn’t feature a real search function (if you want random people to find your content, use a hashtag) or a quote-retweet equivalent (because it supposedly encourages people yelling at each other). You can’t even just join Mastodon: you first have to jump through the hoops of understanding what instances are, and then do even more research to find the one that suits you.

Given all of the above, I can understand some of these decisions, sort of. Alas, I don’t think they’re particularly good decisions. There are tools which could be used for TnS in a federated system – shared blocklists, a CS:GO-Overwatch-like review system, and more – which would do a better job than the current ones. Putting UX last is ultimately what made me stop using Mastodon:

  • It’s hard to sign up and get friends to sign up.
  • It’s hard to find interesting things.
  • It’s hard to share interesting things you find and add commentary.
  • And all in all: It’s hard to have fun.

Sports and Violence

A thought that has been bouncing around my head for years is the following position of the German Olympic Sports Confederation (DOSB) on the topic of eSports:

Another basis for the decision was the content of the games and the way it is depicted on screen. In many games, the destruction and killing of the opponent is the goal of the game. In particular, the clearly visible and explicit depiction of killing virtual opponents is incompatible with the ethical values we represent in sport.

Without unpacking this too deeply, the thinking here is understandable: when the terrorists in CS:GO plant bombs and hand out headshots, that is a substantially different matter from Bayern München shooting a ball of air and rubber into a net. From a youth-protection standpoint alone it’s a bad idea – and even CS:GO players surely don’t want to see youth teams full of 8-to-13-year-olds in their pubs.

But if you follow this thought a little further, an ethically complex topic quickly opens up: the depiction of violence in Olympic sports. A whole range of sports is based, to a large extent, on practices that are problematic and in some cases banned today. For example:

  • Fencing is a form of the duel. Until “recently” (the 19th century), duels were a way to restore one’s honor by fighting the offender (someone who had insulted you or similar) in a fair duel and, if need be, injuring or killing them.
  • Various forms of shooting (archery, pistol shooting, biathlon, etc.) work as a form of soldier training; only the targets need to be swapped for enemy heads. Biathlon in particular is based on the sport (?) of “military patrol”, in which four athletes covered 30 km on skis and, halfway through, fired 18 rounds each at targets.
  • The modern pentathlon is based on Swedish soldier training. The disciplines – fencing, pistol shooting, swimming, riding unfamiliar horses and running – represent quite well what to expect once you’ve fought your way behind enemy lines, run out of ammunition and want to get back on stolen horses.

Of course, athletes in these “PvTarget” sports never have the intention, or the illusion, of killing anyone. And in the PvP department, too, knockouts are desired and brain injuries and long-term neuropsychiatric illnesses tolerated, but nobody is supposed to get killed. At the same time, eSports players are equally aware that a headshot in CS:GO is not comparable to killing a real person.

I could go on for quite a while here, with further counterarguments against the DOSB eSports decision, or a further analysis of the DOSB code of ethics (which, surprise, mentions nothing about depictions of violence or killing), but all of that would be political discussion.

What interests me much more is the whole historical and ethical complex of sport and violence. Why are so many games and sports based on violence? Because of tribalism? If so, why aren’t these sports questioned further? Don’t we, as a society, want to move away from tribalism? Do sports need to be questioned at all? And so on.

I don’t have answers to these questions yet. Maybe there will be a part 2 on this at some point; maybe it inspires readers to do further research. In any case, I’d be happy to receive more information on the topic.

The Work of Art in the Age of NFTs

Every time I see NFTs in the context of “making digital art unique” or “owning art”, I have to think of Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” from 1935. The TL;DR is: you just need to replace “mechanical” with “digital” and you’re done.

The context for Benjamin’s essay is the rise of photography. Photography had existed for a long time before, but in the early 20th century, it had started to become prevalent everywhere. Photography certainly is an art form on its own, but it’s got one problem:

There isn’t really an “original” photograph you can look at in a museum. If you want to look at it, you first need to make a print. But the process of making one print or a hundred is the same. Are they all originals? All copies without an original?

For traditional art, it’s much easier: the original, authentic artwork exists in the “here and now”, in one location and only once. And it has a history (who prayed to it, who owned it, how it was used, …). Benjamin calls this “Aura”.

This Aura is what makes you want to go to the Louvre to see the Mona Lisa from very far away. Benjamin compares this Aura to real life: imagine sitting outside, mountains in the distance, leaves throwing shadows on your face, and suddenly a squirrel rushes past you.

Now imagine the same thing in a movie or video game. The mountains are polygons, the squirrel no longer is chance, it’s scripted. Even the most perfect reproduction won’t have an Aura anymore. It may be immersive, but it won’t be authentic.

A manual copy of the Mona Lisa would simply be a fake; it has a very different history. But a mechanical/digital copy is different: it’s somewhat independent (you can crop or zoom to highlight parts), and it can reach new places (i.e. your home).

When the original degrades (e.g. because I cut across it with a knife), it loses its authenticity and authority to the copy, e.g. a photograph: suddenly you start looking at the copy and say, “this is how this sadly destroyed artwork originally looked”.

(This is where Benjamin has a very interesting detour to cult value vs exhibition value and how that’s shifted, I’ll skip it here.)

Anyway, NFTs. The big question is, can a proof of ownership restore the Aura of authenticity for digital art? The answer is a resounding “no”.

Just like a photograph, there never has been an “original”. Even if you’re the artist who saved the PSD half a second ago, you now have like 5 copies of it already: In your RAM, on your hard drive, in the CPU/GPU cache, on your screen, and if you do automatic backups, in a cloud.

The NFT’d artwork you buy won’t be in the “here and now”. It’s not with you; it’s somewhere on the internet, either behind a classical URL or on IPFS. Copies, each just as valid as yours, are sent to everyone who wants to see it.

Even if you somehow end up with no copies viewable to anyone else: Attach a second monitor to your computer and duplicate the display. Now you have two equally valid copies of it you can look at.

Owning a digital-art-NFT is very different to owning physical art. If anything, it’s as meaningful as getting copyright licenses, but even then, the TOS of NFT trading places give you rather crappy licenses.

FoundationApp for example forces creators to give up a “non-exclusive, world-wide, assignable, sublicensable, perpetual, and royalty-free license“

And if you buy it, you get a non-commercial „limited, worldwide, non-assignable, non-sublicensable, royalty-free license to display“

That’s right: even if you buy an NFT from FoundationApp, you can’t do with the thing as you please. They go on to allow you to share it to say “this is mine”, but you can’t use it in a monetized YouTube video or Twitch stream. That’s how useless buying NFTs is.

Pro tip: If you want a digital artwork exclusive for you, commission an artist. With that, you get to be part of the creation of something truly new and support an artist both monetarily and in improving their skills, and you generally can use your commission however you want.

The Digital CD

Mom wants to give her friend a CD as a present. Problem: it’s only available digitally. Nothing easier than that, I think – we’ll just download it and put it on a USB stick.

*Edward A. Murphy smiles wearily*

For the first time in my life, the problem is not digital in nature. The download works and spits out a ZIP. The files are plain MP3s. Fantastic.

So, a quick trip to the Kvickly to quickly catch a small USB stick. Kvickly is big – they must have 300 m² for clothing alone. I stroll through the aisles, past the vegetables, past the pans, past the Lego Ma– LEGO MARIO?!?!


… and on to the electronics shelf. One half is occupied by light bulbs. A quarter is full of printer cartridges. But in between, there they are: USB mice, keyboards, cables, power banks, car adapters, phone cases and… that’s it. Fine, maybe there’s something at the ends of the aisle.

Strolled to one end: batteries in all shapes and sizes. Well, it must be at the other end, then.

Nose hair trimmers and earwax removers.

Meanwhile, Mom has finished her shopping. Okay, apparently not here. We head to the checkout, and there! More USB stuff! AND A SPHINX WITH TITS!

But the USB stuff (wireless chargers, power banks for cyclists, no end of cables) once again includes no USB stick. Oh well. Mom, never shy of social interaction, asks the saleswoman. She says, “of course we have USB sticks!”, and marches straight to the shelf with all the cables.

“You don’t mean these, do you?” – “No, they’re supposed to hold data.” – “Mobile data?” – “Storage! 8 GB or so.” – “Aah, no, we don’t have anything like that.”


Well then. We’re in the Borgen, after all, the big shopping center in Sønderborg. They have everything here!

A stationery shop and a phone seller later, I have to amend that hope:

They have everything here! Except USB sticks!

On the way out, it slowly dawns on me that I might as well have asked for blank cassette tapes. Are USB sticks already obsolete technology?

On the way home, we pass a computer repair shop. And hurray, it has USB sticks! 128 GB for 500 DKK/€67?!! You can get these things on Amazon for under €20! And I only need enough for one CD, so 800 MB at most.

Wait a minute. CD?

I ask Dad. He vaguely remembers that we still have blanks. Hurray! And indeed, Dad pulls out a spindle. I take one, and… oh, right. My laptop no longer has a CD drive.

Fine, I’ll just take my parents’… except I may already have upgraded theirs to modern machines last year, too. Dad’s work PC? The ministry upgraded that one as well. None of them have a CD drive anymore.

After a long search, the old laptop finally turns up. Fresh with Windows 8 and a CD drive. Hurray! I insert the blank and stare at the screen. How did burning CDs work again? Wasn’t there something about audio vs. data CDs?

It actually works via Windows Explorer. Just drag the files over as if it were a USB stick, then finalize, choose audio CD and WHY DID WINDOWS MEDIA PLAYER JUST OPEN? So, press Burn again and… that’s it?

The thing hums along for a bit and spits the CD back out. Let’s check right away whether it works. In Windows Media Player it does, at any rate.

And in a proper CD player? I try to switch on the stereo…

…and the switch breaks off.

I think we’ll just send the CD as-is and say, “if it doesn’t work, we’ll email you the ZIP”.

ZIP is not a good measure of lyrical complexity

The following paper recently came to my attention:

Varnum MEW, Krems JA, Morris C, Wormley A, Grossmann I (2021) Why are song lyrics becoming simpler? a time series analysis of lyrical complexity in six decades of American popular music. PLoS ONE 16(1): e0244576. doi:10.1371/journal.pone.0244576

It attempts to analyze the lyrical complexity of Top 100 songs and correlate it with their success, socio-economic factors, and so on. I am not really qualified to talk about most of the work they are doing (they are all from psychology departments and talk about what are probably psychology things), but as an ex-computer-science student, current multimedia production student and hobbyist writer, I do feel qualified to talk about this line in their methodology specifically:

Compressibility indexes the degree to which song’s lyrics have more repetitive and less information dense, and thus simpler, content. We used a variant of the established LZ77 compression algorithm.

LZ77 is an ancient compression algorithm from 1977 (hence the name). It’s the granddad of the modern deflate algorithm used to compress webpages, PNGs, ZIPs, PDFs, ODTs, DOCXs, and so on. The authors correctly identify:

We used the LZ77 compression algorithm because of its intimate connection to textual repetition. Most of the byte savings when compressing song lyrics arise from large, multi-line sections (most importantly the chorus, and chorus-like hooks).

The phrase “byte savings” already hints at what the problem might be. Because, yes, if your lyrics repeat the same thing over and over again (and, to be fair, pop songs often do), and you ZIP them up, they will take up less space on your disk, and yes, in information-theoretic terms, the song would be less complex.

But we as listeners aren’t really interested in information theory and degrees of compression. If anything, we might be interested in whether the lyrics go for a very simple rhyme, a combo that’s been heard hundreds of times before (house → mouse, fire → desire, heart → apart, etc. – RhymeZone is very useful for finding common pairings), or one you don’t see coming (e.g. Madvillain’s Meatgrinder: “trouble with the script → subtle lisp midget”). The ZIP algorithm can’t tell the complexity of rhymes apart; it can only judge whether or not words or phrases literally repeat.

And even that isn’t necessarily a good metric to judge complexity. Take the lyrics of Rammstein’s Du hast for example:

Du hast
Du hast mich
Du hast mich
Du hast mich gefragt
Du hast mich gefragt
Du hast mich gefragt und ich hab’ nichts gesagt

These are some ZIP-tastic lyrics, and proof that they are simple – except they aren’t. This is wordplay on “du hast” (you have) and “du hasst” (you hate). If you hear these lyrics, you’re constantly trying to decipher which of the two meanings of hast/hasst they’re singing about, and the four (!) “Du / du hast / du hast mich” repetitions before the song even gets to the verse quoted above make it a very cognitively engaging and, dare I say, complex song up to that point – just by repeating an ambiguous phrase.
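You can see the mismatch in a few lines of Python – a sketch, not the authors’ actual pipeline – using the standard library’s zlib, whose deflate format descends from LZ77:

```python
import zlib

# The six quoted lines -- highly repetitive on the byte level
lyrics = (
    "Du hast\n"
    "Du hast mich\n"
    "Du hast mich\n"
    "Du hast mich gefragt\n"
    "Du hast mich gefragt\n"
    "Du hast mich gefragt und ich hab' nichts gesagt\n"
).encode("utf-8")

compressed = zlib.compress(lyrics, level=9)
ratio = len(compressed) / len(lyrics)

# The compressor sees only literal repetition; the hast/hasst
# wordplay that makes the lyrics engaging is invisible to it.
print(f"{len(lyrics)} bytes -> {len(compressed)} bytes (ratio {ratio:.2f})")
```

The deflate stream comes out much smaller than the input, so by the paper’s measure these lyrics count as very “simple” – no matter how much cognitive work the ambiguity is doing.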

Now that we’ve established that any conclusions drawn from ZIP-ping up song lyrics are shaky at best, I have another question:

Why, why, why a ZIP algorithm?

It is beyond me why the first thing you’d reach for when tasked with “measure whether new songs are simpler” is LZ77, or any kind of compression algorithm. Compression algorithms look at substrings, so h[ouse] and m[ouse] compress better as a pair than ho[use] and ca[use], because the repeated substring is longer. But house, mouse and cause are all just five-letter words which (vaguely) rhyme, so there’s no reason to count one pairing as more or less complex than another.

And it’s not like there aren’t metrics which are designed to look at this problem in particular: Lexical Diversity Indices exist, here’s a paper describing all their differences, doi:10.3758/BRM.42.2.381. And even that paper admits:

In sum, all textual analyses are fraught with difficulty and disagreement, and LD is no exception. There is no agreement in the field as to the form of processing (sequential or nonsequential) or the composition of lexical terms (e.g., words, lemmas, bigrams, etc.) […] In this study, we do not attempt to remedy these issues. Instead, we argue that the field is sufficiently young to be still in need of exploring its potential to inform substantially.

So even an algorithm designed to measure lexical diversity would run into trouble, especially when run in the “full auto” mode necessary to classify tens of thousands of texts.
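To make the contrast concrete, here is a toy sketch of the MTLD idea (one-directional only; the real measure described by McCarthy & Jarvis averages a forward and a backward pass, so treat the numbers as illustrative): walk through the tokens, and every time the running type–token ratio drops to the conventional 0.72 threshold, count a “factor” and reset. MTLD is the token count divided by the number of factors – higher means more lexically diverse.

```python
def mtld(tokens, threshold=0.72):
    """Toy, one-directional MTLD: mean length of token runs ("factors")
    that keep the running type-token ratio above the threshold."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:  # diversity exhausted -> one factor
            factors += 1
            types, count = set(), 0
    if count:  # credit the unfinished tail as a partial factor
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))

# A chorus-like repetitive text vs. an equally long, varied one
chorus = "du hast mich du hast mich du hast mich gefragt".split() * 5
verse = ("the quick brown fox jumps over a lazy dog while nobody "
         "watches and everything slowly fades into evening light").split() * 5
print(mtld(chorus), mtld(verse))  # the repetitive text scores far lower
```

Unlike a compressor, this at least operates on words rather than substrings – though, as the quote above admits, which tokenization and which index to use is itself unsettled.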

The research already has been done

Varnum et al. fail to acknowledge the research of Isaac Piraino, published at least a year before theirs. Piraino took 450k song lyrics (as opposed to Varnum et al.’s 15k), filtered them to only include lyrics of more than 100 words (because short lyrics are necessarily more diverse; you first need to write a word before you can repeat it), and measured them with MTLD – a metric actually designed to measure lexical diversity.

Piraino’s findings: MTLD peaks in the 2000s.
Varnum et al.’s findings: steadily rising compressibility.

Piraino hypothesizes:

My theory is that the gradual decrease in the popularity of rock music and increase in popularity of hip-hop explains the upward trend to the end of the 90s. Rock music, although complex in different ways, usually has a more simple vocuabulary than its lyrically dense hip-hop counterpart. My theory for why it went back down after the 90s is that hip-hop has slowly been transforming into pop music in combination with the rise in popularity of EDM. […] EDM typically has a handful of catchphrases that are repeated over and over again.


Varnum et al. acknowledge that “Songs might be complex or simple in other ways as well, in terms of rhythm, melody, number of instruments played, and so on.” But since their methodology is so shaky, and their results seem to contradict other research, I’d be very, very careful about trying to draw any conclusions from this – or really, from most things which algorithm away at huge datasets and then try to explain the most intricate and interconnected thing humanity has to offer, culture. Overall, it reminds me of the “timbre paper” floating around, which tries to measure musical quality by how much timbre it has (and got torn apart over it).

The EDE model: Exploring, Developing and Established Creators


A while back, I posted a thing about “Why Grinding is bad for you” on r/youtubegaming, where I encouraged gaming creators to try different formats, instead of going for the first thing which comes to mind, which quite often is just a Let’s Play. To aid this discussion, I developed the EDE-model, which I wanted to expand on here.

The basic gist of the EDE model is that creators who are just starting out have much more freedom in what they can do than big channels.

1. The Exploration Stage

At this first stage, a creator has just made a channel with the intent to upload something, starting from 0 subscribers, 0 views and 0 videos, or something very close to that. This crucially means the following:

  • Nobody has any expectations of what this channel is going to upload. Because of this, the creator has the tough fate of complete creative freedom, where they can do anything.
  • Typical channel recommendations (“upload on a schedule! stick to formats! consistency is king!”) aren’t really applicable yet, because they’re strategies which optimize for existing subscribers and thus require some degree of following to be effective.

My advice for creators at this stage would be to try anything that’s vaguely interesting to them. To not get started doing regular formats and series just yet, but just try everything they always wanted to try. To create as if view counts and subscriber counts don’t exist.

This freedom is not something which you really get later on in the process, at least not without alienating vast portions of your audience.

2. The Development Stage

At this stage, the creator probably has made a few dozen videos (depending on the type of content and effort which went into each individual video), and figured out which kinds of content they want to do more of, as well as which kinds of content they don’t like doing. With the experience they’ve gathered in the Exploration stage, they probably also have considerably better video making skills and equipment than in the very beginning, and possibly already have gotten feedback from friends and family on which videos were nice to watch and which ones didn’t work out as intended.

Based on this, the creator now can start transitioning towards doing what established channels do, namely:

  • Find a niche to be in
  • Develop formats and serial content which can be uploaded on a regular schedule
  • Start putting more care into marketing, i.e. SEO and good thumbnails/titles

If a developing creator finds their initial niche to be a dead end for whatever reason – too much effort per video, copyright trouble, getting bored of it – it’s completely fine to go back to exploring other options. This is where having had the exploration stage beforehand comes in handy: they already know what else they’d want to do, and can come up with a somewhat thought-out plan for transitioning between niches.

But, if you’ve found your idea to be sustainable and fun, you can continue on your path and eventually reach…

3. The Established Stage

At this stage, the creator has probably made hundreds of videos, and is decently well known in their niche. This also is the stage where fans start to become a significant force, be it for promotion, merch sales or patreon stuff. Micro-optimizations can become surprisingly powerful here.

Since the channel probably generates a decent amount of money one way or another, the creator can invest much more into it, be it by buying better equipment, dedicating time to the channel that they would otherwise spend working a “real” job, or getting opportunities which smaller YouTubers just don’t get. Note, though, that the money doesn’t come on its own, but drags a whole tail of bureaucracy behind it.

The niche they live in is pretty set in stone and difficult to escape from without losing a lot of attention from subscribers. That said, it sometimes can be very necessary to pivot even as an established creator, eg. if the niche they’re in is very small and/or shrinking, causing the channel to stagnate. Further, because the fans and subscribers have very strong expectations of the channel, it can become increasingly difficult to meet these expectations.

Which isn’t to say that an established creator has a worse fate than someone in one of the other stages; there’s a reason why all the bigger YouTubers can be found in this category. It just comes with a different set of challenges than a small channel does, so it’s not like all your trouble goes away the moment you become established.

Why this model can be useful

Often, creators who start out have a fairly concrete idea of what they want to do, so they skip the exploration stage and go straight for the development stage. And while this may work, it often leads to a “small YouTuber mentality”, in which the creator “grinds” out videos day after day or week after week without getting anywhere, while the advice from peers amounts to “just keep at it, do these micro-optimizations and hope that the algorithm picks you up eventually”.

The problem I have with this mentality is that it reduces something which can be very fulfilling – video production and the creative process in general – into a 9-to-5 kind of job in which the modus operandi is “persevere against the odds”, and this job doesn’t even pay well.

My hope is that this model encourages people to pursue extreme levels of creativity at first, and once they know where their creative preferences lie, start making a channel geared towards success.

The SEE–NTS Model. A better model for Online Video Programming.


The Hero–Hub–Help model which YouTube developed in 2014 has been a helpful tool for video marketers to help them understand what they can do on YouTube. Namely:

  • Hero content is big events, which you can advertise in a big way. It gets huge attention on the day it’s happening, and then quickly becomes uninteresting again – think E3 presentations.
  • Hub content is regularly scheduled content, to keep subscribers (and viewers you’ve reached through the other content) interested in your channel. This content gets watched by your subscribers in the first couple days after upload, and then basically never again.
  • Help content (originally named: hygiene) is helpful content teaching users how to do stuff, ie tutorials. This content gets found at any time via search, but doesn’t add much value to subscribers to your channel.

Now, this model kinda makes sense if you have a product you’re making videos about. But it breaks down once you put it into the context of a normal YouTuber: It doesn’t make sense to make a big event which is only relevant for a week, so Hero content is out. Hub content is more in line with what YouTubers do, but YouTubers do so much more than make videos which are just consistent and appeal to their current subscribers.

So, out of this model, only a few bits are actually usable for YouTubers, and even those only with caveats. So I thought about it a bit and came up with a new model instead:

The SEE–NTS Model

SEE-NTS is short for the following aspects (the model can be thought of as spanning a 3-dimensional space – also, I like to pronounce it as “sea ants”):

  • Subscriber Content. Ie content made primarily for subscribers, featuring funny in-jokes, references to previous videos, stories that make the creator more relatable to their fans and such.
  • Evergreen Content. Ie content which will stay relevant to the world for the (foreseeable) future.
  • Event Content. Ie content which is tied to certain events.

— with their counterparts —

  • New Viewer Content. Ie content which is accessible and fully understandable to someone who has never seen any of your content before.
  • Timely Content. Ie content which is relevant during a specific window of time only, and then basically never again, eg news.
  • Serial Content. Ie content which you can be sure you’ll see more of next week anyway.

The individual aspects make predictions on whether the view distribution will be flat over time, or have a spike shortly after publication:

  • Subscriber content is watched by subscribers, so it’ll get most of its views within the first week after publication, while New Viewer content may get discovered by potential new subscribers at any time.
  • Timely content is only relevant shortly after publication, after which it’s old news. Evergreen content is ever relevant.
  • Event content is most watched during the event (→ Tentpoling), while Serial content is watched all year round.
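
These axis predictions can be sketched as a toy classifier. Everything below is a hypothetical illustration of the model, not an actual analytics tool – the axis values, names and the simple averaging are made up for demonstration:

```python
# Hypothetical sketch: each SEE-NTS axis as a value in [-1.0, 1.0],
# where -1.0 is the left-hand aspect (Subscriber / Evergreen / Event)
# and +1.0 its counterpart (New Viewer / Timely / Serial).
from dataclasses import dataclass

@dataclass
class SeeNtsProfile:
    subscriber_newviewer: float  # -1 = Subscriber, +1 = New Viewer
    evergreen_timely: float      # -1 = Evergreen,  +1 = Timely
    event_serial: float          # -1 = Event,      +1 = Serial

    def view_curve(self) -> str:
        """Predict whether views spike after publication or stay flat."""
        # Subscriber, Timely and Event content all push towards a spike;
        # New Viewer, Evergreen and Serial content push towards flatness.
        spikiness = (-self.subscriber_newviewer
                     + self.evergreen_timely
                     - self.event_serial) / 3
        return "spiky" if spikiness > 0 else "flat"

# "Hero" content: minmaxed for spikiness (Event + Timely)
hero = SeeNtsProfile(subscriber_newviewer=0.0,
                     evergreen_timely=1.0,
                     event_serial=-1.0)

# "Help" content: minmaxed for flatness (New Viewer + Evergreen + Serial)
help_content = SeeNtsProfile(subscriber_newviewer=1.0,
                             evergreen_timely=-1.0,
                             event_serial=1.0)

print(hero.view_curve())          # spiky
print(help_content.view_curve())  # flat
```

Treat the numbers as a thinking aid: the interesting part is placing a format somewhere on each axis, not the arithmetic.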

As such, the model explains why Hero–Hub–Help makes the predictions that it does: Hero content is minmaxed for spikiness (Event/Timely, with so much advertising thrown at it that talking about the Subscriber/New Viewer axis is kinda pointless), Hub content is Subscriber/Serial content (and doesn’t spike nearly as high), and Help content is minmaxed for flatness.

SEE-NTS also allows for other content to be categorized sensibly:

  • Mr Beast’s content is no doubt Serial (it’s not really a surprise what he’ll do next), but features some Event-like qualities (he basically makes his own event in each video by giving away a lot of money). His videos are accessible to New Viewers, yet appeal to Subscribers as well. And the stunts he pulls generally age well. So: His content sits pretty much in the middle and manages to more or less cover all bases.
  • A band doing a concert live stream is an Event for everyone who already knows the band (ie Subscriber-ish), but since music doesn’t really get outdated, it also is strongly Evergreen.
  • Videos like “how to decorate your house for Halloween” and similar seasonal content are Evergreen while the (yearly repeating) Event is going on. This kind of content could technically still work for Subscribers primarily, but realistically it’s probably gonna be optimized for New Viewers.

Using SEE–NTS for Content Programming

SEE-NTS can be used to assess a channel’s current standing to make decisions for future content programming.

Most obviously, if the vast majority of a channel’s views come from subscribers and all formats on the channel are made for subscribers, that channel may want to develop a format which is meant to appeal to non-subscribers and draw them in.

If a creator feels like they’re grinding away in a hamster wheel, but can’t afford to take a day off because all their subscribers will lose interest, maybe Evergreen Subscriber content would be able to bridge these gaps in the future.

If a musician can only realistically make one big Event/Evergreen-type video a year and struggles to re-activate subscribers between uploads, making Subscriber/Serial/Timely content in between to fill the gaps and keep people engaged throughout the year may be useful.

Of course, as always: It’s hard to recommend any specifics without knowing the actual channel. I hope, however, that it can help creators find out at a glance where they are with their current programming, and where they have potential left to explore.


SEE-NTS as a model doesn’t predict how successful content is going to be; it can only predict the rough shape of the view curve. The real world (and “The Algorithm”) can of course always throw a spanner in the works by having your viewers receive the video differently than what you designed it for.

Unlike Hero–Hub–Help, SEE-NTS doesn’t make content recommendations. For example, it’s not entirely clear to me what Subscriber/Event/Evergreen content would even look like, while for Help content, the hint is already in the name, and so are the strategies you should take (ie SEO on your customers’ troubles).

SEE-NTS is untested as a tool for content programming. The questions that need to be answered in the future are:

  • Is SEE-NTS useful to accurately describe different channel programming strategies?
  • Is SEE-NTS complete, or are there more factors which are essential for programming?
  • Do creators who use SEE-NTS understand their programming better than those who don’t?
  • Is SEE-NTS useful to find gaps in the content programming?

Overall thoughts

From what I can tell so far, the SEE-NTS model seems promising. Even if it fails as a “practical” tool that can tell creators “do this”, it may still be a worthwhile academic tool, as it categorizes content way better than Hero–Hub–Help.

Of course, I’d love even more for it to be useful as a practical tool. I guess time will tell how good this thing is.

Gnome 3. A review.


You may know Gnome as the “ah, something simple, which… — wait, where are my desktop icons and task bar?” desktop environment. Which, no doubt, it is; it’s what I liked about it when I first started using it in version 3.8 all those years ago. But recently, I discovered that it isn’t just that – it actively helps make things more seamless.

Let’s back up for a bit.

My theory on desktop environments is, in a nutshell, “if you notice them, they’re doing something wrong”, or in other words: “A good desktop environment lets you focus on your tasks without getting in your way”. This basically holds for programs in general, too: if a program lets you do the thing you want to do easily and in one flow, it probably is a good program.

This effectively explains why Windows 10 keeps greatly displeasing me every time I use it. Its design changes between the most recent Fluent Design and Windows 2000/XP-style depending on which program you use (even built-in settings programs), and things like the dark theme pretty much don’t work on anything at all. Even simple settings changes like adjusting the mic gain require you to either dive into almost-invisible text links in the settings app, or find the right pop-up window of the old system control center. And after each somewhat major update, Cortana and Edge greet you yet again. To add to that, there’s my personal clumsiness, which causes me to click on the wrong icon in the taskbar not quite daily, but often enough that I now have “padding apps” between apps which take very long to load, so that a misclick doesn’t cause years of waiting. All of these things take me out of “the zone” whenever I encounter them, and I encounter them very frequently.

Gnome’s quick access to settings

Gnome beats this any day of the week. Changing the mic gain can be done right next to where you know the volume slider is, whenever the mic is active. And since loads of apps are GTK-based anyway, the dark theme (or any theme, really) gets applied pretty much universally, with the notable exceptions of the major browsers and Blender – all of which have their own, very capable theming options anyway – and Qt-based apps.

Encountering a Qt-based app in Gnome is weird every time, but likewise, encountering a GTK-based app in KDE is weird as well. And besides the minor problem of them looking kinda out of place compared to the rest of the system, there is the slightly more major issue that Qt apps tend to use different things for everything. For example, if I want to open a file in a GTK-based program, it gives me effectively Nautilus (aka Gnome Files), whereas Qt-based programs give me Dolphin. But look closely at the difference in the folders on the left-hand side:

Top: Nautilus-based “open file” modal (here: for Discord), Bottom: Dolphin-based “open file” modal (here: for kdenlive).

Where Nautilus has shortcuts to the images/documents/music/videos folders, Dolphin instead has basically the same, just slightly different-looking icons for a completely different function: Clicking on them filters the current folder for the type of file you’re looking for. And don’t get me wrong, it’s a very useful option, and on KDE, this Dolphin modal does have the same shortcuts to drives and places. It’s just that this particular Qt-to-GTK port is kinda confusing, because it breaks the “there are shortcuts to your folders on the left” model that is established everywhere else by putting a search filter there instead.

But this is a small price to pay for what is my favourite part of Gnome: The Activity Overview.

The Activity Overview combines so many things into one place, it’s just awesome. Dead center, you have all your open windows. Not as window previews forced to the same size or just a bunch of icons, as you may know from alt+tabbing or taskbars, but as actual windows which do a very decent job at conveying which windows are big and which aren’t. If you do need a taskbar, you can find it here as well, and if you need something which resembles OSX’s Launchpad and Spotlight search, they are here too. In this view you can close windows you no longer need, or drag them to other screens, both real and virtual ones.

Opening the Activity Overview is as easy as pressing Super (the “Windows” key), or flinging your cursor into the top-left corner. It feels so good to use, and I use it so often, that it has become second nature: whenever I’m using a desktop environment which doesn’t have it, I actually start to struggle a bit, to the point where I put the taskbar at the top in Windows and KDE, so that flinging my cursor to the top left at least lands it in the general vicinity of the “Start” button.

Until recently, my review of Gnome would’ve stopped about there. The activity overview is awesome, and the rest is out of the way and (mostly) consistent, therefore, it’s a good desktop environment for me and I will continue using it whenever possible.

But, as I alluded to in the beginning, it’s taking steps towards making things more seamless.

Gnome’s Notification center

As a small example, the notification center shows notifications (duh) and your calendar, but also gives you player controls for the YouTube tab that is currently playing. So you can pause and skip videos playing in the background at any time, without having to find the right browser tab.

The bigger example is Gnome Online Accounts. Which isn’t actually that new, but I hadn’t bothered trying it before. Because what I associate with “connect your account” is that it just grabs your email and avatar for account creation purposes, and maybe starts posting FarmVille status updates to your timeline if you aren’t careful. But that isn’t what’s happening here. If all you have is a Google account and you put it into Gnome Online Accounts, it automatically…

  • sets up your Email account in Geary and Evolution,
  • syncs your Google calendar with Gnome Calendar and Evolution,
  • imports your contacts in Gnome Contacts and Evolution,
  • adds a remote server connection to Google Drive in Gnome Files,
  • adds Google Documents to view in Gnome Documents,
  • imports photos from Google Photos to Gnome Photos,
  • does possibly more! I haven’t discovered all of the integrations yet.

Now, this sounds exactly like what Android does with the Google account, OSX with the AppleID/iCloud and Microsoft with the Microsoft account, and to some degree, it is. The difference, however, is that it doesn’t try to pull you into its own ecosystem so it can extract money out of you for more storage space or whatever; rather, it lets you keep your existing accounts and allows you to work with them faster. For example, by letting you move stuff from and to your favourite cloud provider without having to open a browser, download it, find it in the downloads folder and then move it about.

Of course, we are still in FOSS-Land, so some of these integrations are kinda janky – I notice for example that the Gnome Files/Google Drive integration refuses to go much faster than 90 kiB/s despite me sitting on a 25 Mbit/s line – and some of the Gnome-specific apps aren’t quite as stable as the old guard – Geary sometimes refuses to connect to accounts until a system restart happens and sometimes insists that I’m working offline even though I’ve done nothing but watch YouTube videos for the past 3 hours.

And this shows the one gripe I do have with Gnome’s UX imperative to keep things simple: The Geary team won’t build in a way to manually reload. It instead shows you a banner saying “You are now working offline”, which you can dismiss, and that’s it. Which is immensely frustrating, because if you as a user encounter an error which isn’t your fault, are you really supposed to… just wait until the program eventually decides to fix itself? Or did it fix itself and I am online again, but the banner didn’t remove itself afterwards? There’s no indication of when the next refresh happens either, because the only setting in Geary for updates is “automatically check for new mail”, which is either on or off. So when I see the banner, do I just click X and wait around for… ten minutes? Is that even enough? That’s not what I do! Monkey no patience! Monkey do thing! MONKEY SMASH BUTTONS!

… I’m beginning to wonder if Windows’ automatic “error fixing” thing actually would be a good feature for Gnome, because even if it doesn’t do anything, it at least lets you play around with a thing until it fixes itself…

Mockup: An “you’re offline” banner that’s actually useful and lets me retry manually.

So yeah. Gnome. Very awesome almost always, but can be kinda frustrating when it doesn’t work. Highly recommended, 5/5 toes.

Observations of the VTuber scene

Moin. This post is mostly observations of the VTuber scene, a few weeks in. I end up making some content recommendations, so it might be useful to long-time VTubers as well, though it shouldn’t be understood as “this is how you should do it”, but rather as “this is how I see it being done currently”.

The obvious

Starting with the obvious: As a VTuber, your body can look however you want, but your movements and expressions are typically fairly restricted. Even if you are 3D and have room-scale tracking, you still can’t really interact with objects or other people in a convincing way. At least not now, and not in real time.

That said, even with these limitations, being a VTuber just gives you a lot of benefits that you wouldn’t get as a regular person:

  • Full privacy. Which you’d also get doing podcasts, radio or voiceover stuff in general – all of which, however, would lack…
  • Facial expressions. Just having head bops and wiggles, and a mouth that can change between an eternal smile and a 😀 when talking, is enough of a fixpoint for me that I can actually watch a just-talking stream of a VTuber without feeling the need to do something else. (For comparison, I cannot listen to podcasts on a couch, as my eyes start to wander off fairly quickly. Which then, more often than not, leads me to doing something else and abandoning the podcast altogether.) Now, you also get that just talking to a camera, but then you’d be giving up your privacy.
  • A more-interesting-than-average brand, without doing anything. Even as the most generic anime girl, you’re still way, way more recognizable than a generic gaming channel that has some 3D-dubstep intro as its only “branding” element.


Umbrella brands are surprisingly powerful. You can see this most clearly with Hololive and Nijisanji IMHO:

The Hololive brand is super strong. Every new member gets to start out with thousands or tens of thousands of subs, simply because it says “hololive” next to it. And that already sets expectations: It’s going to be a woman, the woman is going to be an idol, and in general there won’t be any unbearable technical issues.

Nijisanji, in contrast, doesn’t have these expectations as strongly, although their members also start with at least a few thousand subscribers. That is partially because there are just so many more members, and partially because new members could be anything: man or woman, quality ranging from good to “average new YouTuber”, technical ability ranging from good to permanently clipping audio. That said, Nijisanji is offering quite a valuable service (VTuber avatars and support) to a lot more people, and this non-exclusivity gives the company quite a bonus in my book.

Update: It has been pointed out to me by various people that I completely misunderstood Nijisanji and the impact they’ve had, and that Nijisanji ID’s technical troubles are more a problem of Indonesia not having that good an infrastructure. The problem is, these technical issues, though not their fault, translate into the image I’m seeing of them, and all the awesome stuff they did in the past is invisible to me unless I really start digging. To be clear, this is an issue of the brand, not of the individual creators among them. And even though the different regional branches are more or less independent from each other, the overall brand still is “Nijisanji Region” (apart from China), not some wildly different naming like you get with Mars, Twix and Snickers (which all belong to Mars).

These umbrella brands are fairly rare on YouTube these days; for me, only Machinima comes to mind. Even the EDUtube empires of the Green brothers or Brady Haran don’t have an umbrella brand. Instead, they have SciShow and CrashCourse with direct sister channels, but keep those brands fairly separate.


Formats really matter. Most VTubers are doing game streams and talk streams. Those who do game streams tend to get discovered better, while people who do talk streams tend to get loads more super chats. For example, Flare manages to out-rank Aqua in super chat revenue, despite having less than half her subscriber count.

Doing unique formats which are more than just the generic talk/game streams also seems to be an advantage:

  • Coco grew insanely fast with her Asacoco news show,
  • 3D shows (especially 3D debuts) perform super well,
  • non-standard game streams like speedruns/races work quite well, and
  • non-standard talk streams (interviews, fairy counselling, etc) work as well.

This is true across all of YouTube, btw: Having a unique format at least gives you a chance at standing out. And even though you run the risk of having a format which just doesn’t resonate with viewers, you at least are looking for doors with each format you try, instead of bashing your head against the wall with generic gameplay in the hopes of breaking through eventually.

Highlights and clips are super important, especially for the Japanese scene. I don’t think Fubuki would be where she is now without her viral meme videos, and I don’t think any of them would have anywhere near that large of an international audience if it wasn’t for the translators – and the translators only translate highlights, rather than whole streams.

I do think that VTubers (and streamers in general) should try hiring fans to make highlight videos and upload them to their own channel, so that their channels become more accessible for those living outside of the normal streaming timezones. Nijisanji in particular has been getting better at that recently, on their company channels at least.


Ultra-low-latency with DVR disabled is everywhere. I don’t think this benefits any channel that gets more than 100 concurrent viewers or so, because at those sizes, the chat starts being more delayed than the stream itself. This is because YouTube polls chat at set intervals for new messages instead of sending out each message on its own, and those poll intervals get longer the more messages are being sent.
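
To illustrate why a busy chat ends up lagging behind the stream, here’s a toy model of poll-based delivery. The interval numbers and thresholds below are invented for illustration – YouTube’s real polling behavior isn’t public:

```python
# Toy model of poll-based chat delivery. All numbers are made up;
# the point is only the shape: busier chat -> sparser polls -> more lag.
def poll_interval(messages_per_minute: float) -> float:
    """Assumed behavior: the busier the chat, the less often the client polls."""
    if messages_per_minute < 60:
        return 1.0   # seconds between polls
    if messages_per_minute < 600:
        return 4.0
    return 10.0

def average_chat_delay(messages_per_minute: float) -> float:
    # A message lands uniformly at random within a poll window,
    # so on average it waits half an interval before being fetched.
    return poll_interval(messages_per_minute) / 2

# With ultra-low latency, the stream itself runs only a few seconds
# behind reality -- in a busy chat, messages can easily lag behind
# the video they're reacting to.
print(average_chat_delay(30))    # 0.5  (quiet chat)
print(average_chat_delay(1200))  # 5.0  (busy chat)
```

In other words: past a certain chat volume, the fixed few-second stream delay is smaller than the chat’s polling delay, which is the inversion described above.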

Also, it makes it rather difficult to watch the stream on slightly subpar connections, or just if you’re half a planet away. This is because any rebuffer that sets the latency back to more than 5s causes the player to skip ahead, which triggers another rebuffer, resulting in large parts of the stream being spent buffering. Really, as soon as you’ve got more than a few viewers, Low Latency is the way to go, with Normal Latency being great for anything which doesn’t have any meaningful chat interaction built in (eg singing streams).

Sexuality is quite a thing. It probably is easier to be that sexual in public if your real face isn’t attached to it, and it’s quite surprising how far you can get with that on YouTube without even being demonetized. On top of that, it tends to generate quite entertaining content by default. That said, I think the process of sexualising others is more problematic in the VTuber scene than in other communities on YouTube, whether that is fans commenting on it at every occasion, bosses putting their talent into swimsuits, or character designs having tits so large that you’re running out of alphabet to describe them. I hope for the women involved that the disconnect between their character and the real person can help with this.

VTubers in general seem to do a disproportionate amount of live content, with the notable exceptions being Kizuna AI and Ami Yamato. I think there’s a lot of potential for non-live content which works strictly with motion capture (as opposed to hand animation). It doesn’t need to be the current livestreaming VTubers doing that either; in fact, most of the VOD content I see from the current live VTubers is somewhat similar to early, 2006-level YouTube nonsense. There really are a lot of different directions to explore here. Putting it out there right now: I want to see a VTuber with a degree in astronomy teach me about supernovae.

VTubers being mostly Japan-based obviously results in a lot of Japanese content. The search interest in VTubers in the USA is growing quickly though, so any VTuber who can do English content is at an advantage here. Also, assuming that VTubers become popular in the US, you can bet that they’ll spread to the rest of the world as well, so it might be worth starting to do VTuber content in your local language, so that by the time it gets big, you’ll already be ready and at the forefront.

A lot of VTubers have been doing daily streams. And while that definitely isn’t bad, please, do yourself a favor and take days off where you don’t spend a single thought on your channel. Daily content tends to be unsustainable, with even the largest YouTubers burning out on it after just a few years. More well-being advice can be found in the Creator Academy.

Overall, …

… I’ve been very impressed by how compelling the content various VTubers make has been to me. I don’t think I’ve ever watched more than 5 episodes of any anime (including Pokémon, or the Simpsons for that matter), but the charm of a dog girl doing cute things while playing Doom, or a chubby devil trying to convince an art student that eyes don’t grow back, just gets me. More recently, I’ve been hanging out with the Indonesian crowd, as their content is 75% English anyway, so I actually have a chance of getting the jokes.

In that sense, otsuu, I’m strapped in and ready for a wild ride.