The Work of Art in the Age of NFTs

Every time I see NFTs in the context of "making digital art unique" or "owning digital art", I have to think of Walter Benjamin's "The Work of Art in the Age of Mechanical Reproduction" from 1935. TL;DR: replace "mechanical" with "digital" and you're done.

The context for Benjamin's essay is the rise of photography. Photography had existed for a long time before, but in the early 20th century it started to become prevalent everywhere. Photography certainly is an art form in its own right, but it has one problem:

There isn't really an "original" photograph you can look at in a museum. If you want to look at it, you first need to make a print. But the process of making just one print is the same as making 100. Are they all originals? All copies without an original?

For traditional art, it's much easier: the original, authentic artwork exists in the "here and now", in one location and only once. And it has a history (who prayed to it, who owned it, how it was used, …). Benjamin calls this "Aura".

This Aura is what makes you want to go to the Louvre to see the Mona Lisa from very far away. Benjamin compares this Aura to real life: imagine sitting outside, mountains in the distance, leaves throwing shadows on your face, and suddenly a squirrel darts past you.

Now imagine the same thing in a movie or video game. The mountains are polygons; the squirrel is no longer chance, it's scripted. Even the most perfect reproduction won't have an Aura anymore. It may be immersive, but it won't be authentic.

A manual copy of the Mona Lisa would simply be a fake; it would have a very different history. But a mechanical/digital copy is different: it's somewhat independent (you can crop/zoom to highlight parts), and it can access new places (i.e. your home).

When the original degrades (e.g. because I cut across it with a knife), it loses its authenticity and authority to the copy, e.g. a photograph: suddenly you start looking at the copy and say "this is how this sadly destroyed artwork originally looked".

(This is where Benjamin has a very interesting detour to cult value vs exhibition value and how that’s shifted, I’ll skip it here.)

Anyway, NFTs. The big question is, can a proof of ownership restore the Aura of authenticity for digital art? The answer is a resounding “no”.

Just like a photograph, there never has been an “original”. Even if you’re the artist who saved the PSD half a second ago, you now have like 5 copies of it already: In your RAM, on your hard drive, in the CPU/GPU cache, on your screen, and if you do automatic backups, in a cloud.

The NFT'd artwork you buy won't be in the "here and now". It's not with you; it's somewhere on the internet, either behind a classical URL or on IPFS. Copies, each as valid as yours, are sent to everyone who wants to see it.

Even if you somehow end up with no copies viewable to anyone else: Attach a second monitor to your computer and duplicate the display. Now you have two equally valid copies of it you can look at.
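
This indistinguishability isn't just philosophical; it's checkable. Any bit-for-bit copy of a file hashes to exactly the same value, so there is no technical property that could single one copy out as "the original". A quick sketch in Python (the file name and contents are made up):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file's bytes; identical bytes yield identical digests."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

workdir = Path(tempfile.mkdtemp())

# A stand-in for the "original" artwork file.
original = workdir / "artwork.png"
original.write_bytes(b"pretend this is pixel data")

# Copying it (or duplicating a display) produces bit-identical bytes.
copy = workdir / "copy.png"
shutil.copy(original, copy)

print(sha256(original) == sha256(copy))  # True: nothing tells them apart
```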

Owning a digital-art NFT is very different from owning physical art. If anything, it's as meaningful as getting a copyright license, but even then, the ToS of NFT marketplaces give you rather crappy licenses.

FoundationApp, for example, forces creators to give up a "non-exclusive, world-wide, assignable, sublicensable, perpetual, and royalty-free license".

And if you buy it, you get a non-commercial "limited, worldwide, non-assignable, non-sublicensable, royalty-free license to display".

That's right: if you buy an NFT from them, you can't even do with the thing as you please. They go on to allow you to share it to say "this is mine", but you can't use it in a monetized YouTube video or Twitch stream. That's how useless buying NFTs is.

Pro tip: If you want a digital artwork exclusive for you, commission an artist. With that, you get to be part of the creation of something truly new and support an artist both monetarily and in improving their skills, and you generally can use your commission however you want.

The Digital CD

Mom wants to give a CD to a friend of hers. The problem: it's only available digitally these days. Nothing easier than that, I think; we'll just download it and put it on a USB stick.

*Edward A. Murphy smiles wearily*

For the first time in my life, the problem is not digital in nature. The download works and spits out a ZIP. The files are plain MP3s. Fantastic.

So, a quick trip to Kvickly to quickly grab a small USB stick. Kvickly is big; they must have 300 m² for clothing alone. I stroll through the aisles, past the vegetables, past the pans, past the Lego Ma– LEGO MARIO?!?!

… and on to the electronics shelf. One half is occupied by light bulbs. A quarter is full of printer cartridges. But in between, there they are: USB mice, keyboards, cables, power banks, car adapters, phone cases and… that's it. Fine, maybe there's something at the ends of the shelf.

A stroll to one end: batteries in every size and shape. Well, then it must be at the other one.

Nose hair trimmers and ear wax removers.

Meanwhile, Mom has finished her shopping. Okay, apparently not here then. We head for the checkout, and there! More USB stuff! AND A SPHINX WITH TITS!

But among the USB stuff (wireless chargers, power banks for cyclists, cables without end) there is, once again, no USB stick. Oh well. Mom, never shy of any social interaction, asks the saleswoman anyway. She says, "of course we have USB sticks!", and heads straight for the shelf with all the cables.

"Don't you mean these?" – "No, they're supposed to hold data." – "Mobile data?" – "Storage! 8 GB or so." – "Aah, no, we don't have anything like that."

Well then. We are, after all, in Borgen, the big shopping mall in Sønderborg. They've got everything here!

One stationery store and one phone vendor later, I have to amend that hope:

They've got everything here! Except USB sticks!

On the way out, it slowly dawns on me that I might just as well have asked for blank cassette tapes. Are USB sticks obsolete technology already?

On the way home, we pass a computer repair shop. And hurray, he has USB sticks! 128 GB for 500 DKK/€67?!! You can get those on Amazon for less than 20! And I only need enough space for one CD, 800 MB at most.

Wait a minute. CD?

I ask Dad. He thinks he dimly remembers that we still have some blanks. Hurray! And indeed, Dad pulls out a spindle. I take one, and… oh, right. My laptop doesn't have a CD drive anymore.

Fine, then I'll just take my parents'… except I may have upgraded theirs to modern machines last year as well. Dad's work PC? The ministry upgraded that one, too. Not a single one of them has a CD drive anymore.

After a long search, the old laptop finally turns up. Fresh with Windows 8 and a CD drive. Hurray! I insert the blank disc and stare at the screen. How did burning CDs work again? Wasn't there something about audio vs. data CDs?

Turns out it works right from Windows Explorer. Just drag the files over as if it were a USB stick, then finalize, choose audio CD and WHY DID WINDOWS MEDIA PLAYER JUST OPEN? So I hit Burn once more and… that's it?

The thing hums along for a bit and spits the CD back out. Let's check right away whether it works. In Windows Media Player it does, at least.

And in a proper CD player? I try to switch on the stereo…

…and the power switch breaks off.

I think we'll just send the CD as-is and say "if it doesn't work, we'll email you the ZIP".

ZIP is not a good measure of lyrical complexity

The following paper recently came to my attention:

Varnum MEW, Krems JA, Morris C, Wormley A, Grossmann I (2021) Why are song lyrics becoming simpler? a time series analysis of lyrical complexity in six decades of American popular music. PLoS ONE 16(1): e0244576. doi:10.1371/journal.pone.0244576

It attempts to analyze the lyrical complexity of top 100 songs and correlate it with their success, socio-economic factors, and so on. I'm not really qualified to talk about most of the work they're doing (they're all from psychology departments and talk about what are probably psychology things), but as an ex-computer-science student, current multimedia production student and hobbyist writer, I do feel qualified to talk about this line in their methodology specifically:

Compressibility indexes the degree to which a song's lyrics have more repetitive and less information-dense, and thus simpler, content. We used a variant of the established LZ77 compression algorithm.

LZ77 is an ancient compression algorithm from 1977 (hence the name). It’s the granddad of the modern deflate algorithm used to compress webpages, PNGs, ZIPs, PDFs, ODTs, DOCXs, and so on. The authors correctly identify:

We used the LZ77 compression algorithm because of its intimate connection to textual repetition. Most of the byte savings when compressing song lyrics arise from large, multi-line sections (most importantly the chorus, and chorus-like hooks).

The words "byte savings" already hint at what the problem here might be. Because, yes, if your lyrics repeat the same thing over and over again (and to be fair, pop songs often do), then, ZIPped up, they will take up less space on your disk, and yes, in information theory terms, the song is less complex.

But we as listeners aren't really interested in information theory and degrees of compression. If anything, we might be interested in whether the lyrics go for a very simple rhyme, a combo that's been heard hundreds of times before (house → mouse, fire → desire, heart → apart, etc. – RhymeZone is very useful for finding common pairings), or one you don't see coming (e.g. Madvillain's Meatgrinder: "trouble with the script → subtle lisp midget"). The ZIP algorithm can't tell the complexity of the rhymes apart; it can only judge whether or not words or phrases literally repeat.

And even that isn’t necessarily a good metric to judge complexity. Take the lyrics of Rammstein’s Du hast for example:

Du hast
Du hast mich
Du hast mich
Du hast mich gefragt
Du hast mich gefragt
Du hast mich gefragt und ich hab’ nichts gesagt

These are some ZIP-tastic lyrics and proof that the song is simple – except it isn't. This is a wordplay on "du hast" (you have) and "du hasst" (you hate). When you hear these lyrics, you're constantly trying to decipher which of the two meanings this hast/hasst is about, and the four (!) "Du / du hast / du hast mich" repetitions before the song even gets to the verse quoted above make it a very cognitively engaging and, dare I say, complex song up to that point, just by repeating an ambiguous phrase.
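
The metric is easy to reproduce with a few lines of Python. zlib's DEFLATE builds on LZ77, just like the paper's measure; the comparison text below is my own made-up non-repetitive filler, trimmed to the same length (exact byte counts vary by zlib version, so treat the output as illustrative):

```python
import zlib

du_hast = "\n".join([
    "Du hast",
    "Du hast mich",
    "Du hast mich",
    "Du hast mich gefragt",
    "Du hast mich gefragt",
    "Du hast mich gefragt und ich hab' nichts gesagt",
])

# A non-repetitive text trimmed to the same length for a fair comparison.
varied = ("The quick brown fox jumps over a lazy dog while seventy wizards "
          "boxed jugs of very exotic quince brandy nearby just once more")[:len(du_hast)]

def ratio(text: str) -> float:
    """Compressed size over raw size; lower = 'simpler' by the paper's logic."""
    raw = text.encode()
    return len(zlib.compress(raw, 9)) / len(raw)

print(ratio(du_hast) < ratio(varied))  # True: repetition compresses well
```

The repetitive lyric compresses far better than the varied filler, and that difference is all the metric can see; the hast/hasst wordplay is invisible to it.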

So, now that we have established that any conclusions drawn from ZIPping up song lyrics are shaky at best, I have another question:

Why, why, why a ZIP algorithm?

It is beyond me why the first thing you'd reach for when tasked with "measure whether new songs are simpler" is LZ77, or any kind of compression algorithm. Compression algorithms look at substrings, so h[ouse] and m[ouse] compress better as a pair than ho[use] and ca[use], because the repeated substring is longer. But house, mouse and cause are all just 5-letter words which (vaguely) rhyme, so there's no reason to count one pairing as more or less complex than the other.
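
To make that concrete, here is a crude sketch of what a dictionary coder actually rewards. The helper function is mine, not part of LZ77; it just counts how many trailing characters two words share, which is a fact about spelling, not about rhyme quality:

```python
def shared_suffix_len(a: str, b: str) -> int:
    """Length of the longest common suffix; a proxy for what an
    LZ77-style matcher can reuse between two rhyming words."""
    n = 0
    while n < min(len(a), len(b)) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

# Both pairs are equally trivial rhymes, but a compressor "prefers" the first:
print(shared_suffix_len("house", "mouse"))  # 4 ("ouse")
print(shared_suffix_len("house", "cause"))  # 3 ("use")
```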

And it's not like there aren't metrics designed to look at exactly this problem: Lexical Diversity indices exist; here's a paper describing all their differences: doi:10.3758/BRM.42.2.381. And even that paper admits:

In sum, all textual analyses are fraught with difficulty and disagreement, and LD is no exception. There is no agreement in the field as to the form of processing (sequential or nonsequential) or the composition of lexical terms (e.g., words, lemmas, bigrams, etc.) […] In this study, we do not attempt to remedy these issues. Instead, we argue that the field is sufficiently young to be still in need of exploring its potential to inform substantially.

So even an algorithm designed to measure lexical diversity would run into trouble, especially when run in the "full auto" mode that is necessary to classify tens of thousands of texts.

The research already has been done

Varnum et al. fail to acknowledge the research of Isaac Piraino, published at least a year prior to theirs. Piraino took 450k song lyrics (as opposed to Varnum et al.’s 15k), filtered to only include lyrics above 100 words (because short lyrics necessarily are more diverse; you first need to write a word before you can repeat it), and measured them with MTLD (a metric actually designed to measure lexical diversity).
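
For reference, MTLD is simple enough to sketch. This is a simplified single-pass variant (the published metric also averages a forward and a backward pass, and there are details around punctuation that I'm glossing over):

```python
def mtld(words: list[str], threshold: float = 0.72) -> float:
    """Measure of Textual Lexical Diversity (McCarthy & Jarvis, 2010).

    Counts 'factors': stretches of text over which the type-token ratio
    (TTR) stays above the threshold. Diverse text sustains a high TTR
    for longer, producing fewer factors and thus a higher MTLD score.
    """
    factors = 0.0
    types: set[str] = set()
    tokens = 0
    for word in words:
        types.add(word.lower())
        tokens += 1
        if len(types) / tokens <= threshold:
            factors += 1.0        # a factor is complete; start a new one
            types, tokens = set(), 0
    if tokens:                    # credit the leftover partial factor
        ttr = len(types) / tokens
        factors += (1.0 - ttr) / (1.0 - threshold)
    return len(words) / factors if factors else float("inf")

repetitive = ("du hast mich " * 20).split()  # 60 tokens, only 3 types
diverse = [f"word{i}" for i in range(60)]    # 60 tokens, 60 types
print(mtld(repetitive) < mtld(diverse))      # True
```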

Piraino's findings: MTLD peaks in the 2000s.
Varnum et al.'s findings: steadily rising compressibility.

Piraino hypothesizes:

My theory is that the gradual decrease in the popularity of rock music and increase in popularity of hip-hop explains the upward trend to the end of the 90s. Rock music, although complex in different ways, usually has a more simple vocabulary than its lyrically dense hip-hop counterpart. My theory for why it went back down after the 90s is that hip-hop has slowly been transforming into pop music in combination with the rise in popularity of EDM. […] EDM typically has a handful of catchphrases that are repeated over and over again.


Varnum et al. acknowledge that "Songs might be complex or simple in other ways as well, in terms of rhythm, melody, number of instruments played, and so on." But since their methodology is so shaky, and their results seem to contradict other research, I'd be very, very careful about trying to draw any conclusions from this. Or really, from most things which try to algorithm away at huge datasets and then use them to explain the most intricate and interconnected thing humanity has to offer: culture. Overall, it reminds me of the "timbre paper" floating around, which tries to measure musical quality by how much timbre it has (and got torn apart over it).

The EDE model: Exploring, Developing and Established Creators


A while back, I posted a piece called "Why Grinding is bad for you" on r/youtubegaming, where I encouraged gaming creators to try different formats instead of going for the first thing which comes to mind, which quite often is just a Let's Play. To aid this discussion, I developed the EDE model, which I want to expand on here.

The basic gist of the EDE model is that creators who are just starting out have much more freedom in what they can do than big channels.

1. The Exploration Stage

At this first stage, a creator has just made a channel with the intent to upload something, starting from 0 subscribers, 0 views and 0 videos, or something very close to that. This crucially means the following:

  • Nobody has any expectations on what this channel is going to upload. Because of this, the creator has the tough fate of complete creative freedom where they can do anything.
  • Typical channel recommendations (“upload on a schedule! stick to formats! consistency is king!”) aren’t really applicable yet, because they’re strategies which optimize for existing subscribers and thus require some degree of following to be effective.

My advice for creators at this stage would be to try anything that’s vaguely interesting to them. To not get started doing regular formats and series just yet, but just try everything they always wanted to try. To create as if view counts and subscriber counts don’t exist.

This freedom is not something which you really get later on in the process, at least not without alienating vast portions of your audience.

2. The Development Stage

At this stage, the creator probably has made a few dozen videos (depending on the type of content and effort which went into each individual video), and figured out which kinds of content they want to do more of, as well as which kinds of content they don’t like doing. With the experience they’ve gathered in the Exploration stage, they probably also have considerably better video making skills and equipment than in the very beginning, and possibly already have gotten feedback from friends and family on which videos were nice to watch and which ones didn’t work out as intended.

Based on this, the creator now can start transitioning towards doing what established channels do, namely:

  • Find a niche to be in
  • Develop formats and serial content which can be uploaded on a regular schedule
  • Start putting more care into marketing, ie SEO and good thumbnails/titles

If a developing creator finds their initial niche to be a dead end for whatever reason – too much effort per video, copyright trouble, getting bored of it – it's completely fine to go back to exploring other options. This is where having had the Exploration stage beforehand comes in handy: they already know what else they'd want to do, and can come up with a somewhat thought-out plan for transitioning between niches.

But, if you’ve found your idea to be sustainable and fun, you can continue on your path and eventually reach…

3. The Established Stage

At this stage, the creator has probably made hundreds of videos, and is decently well known in their niche. This also is the stage where fans start to become a significant force, be it for promotion, merch sales or patreon stuff. Micro-optimizations can become surprisingly powerful here.

Since the channel probably generates a decent amount of money one way or the other, the creator can invest much more into it, be it by buying better equipment, dedicating time to the channel that they would otherwise spend working a "real" job, or getting opportunities which smaller YouTubers just don't get. Note though that the money doesn't come on its own, but drags a whole tail of bureaucracy behind it.

The niche they live in is pretty set in stone and difficult to escape from without losing a lot of attention from subscribers. That said, it sometimes can be very necessary to pivot even as an established creator, eg. if the niche they’re in is very small and/or shrinking, causing the channel to stagnate. Further, because the fans and subscribers have very strong expectations of the channel, it can become increasingly difficult to meet these expectations.

Which isn't to say that an established creator has a worse fate than someone in one of the other stages; there's a reason why all the bigger YouTubers can be found in this category. It just comes with a different set of challenges than a small channel does, so it's not like all your trouble goes away the moment you become established.

Why this model can be useful

Often, creators who start out have a fairly concrete idea of what they want to do, so they skip the Exploration stage and go straight for the Development stage. And while this may work, it often leads to the "small YouTuber mentality", in which the creator "grinds" out videos day after day or week after week without getting anywhere, with the advice from peers being "just keep at it, do these micro-optimizations and hope that the algorithm picks you up eventually".

The problem I have with this mentality is that it reduces something which can be very fulfilling – video production and the creative process in general – to a 9-to-5 kind of job whose modus operandi is "persevere against the odds", and this job doesn't even pay well.

My hope is that this model encourages people to pursue extreme levels of creativity at first, and once they know where their creative preferences lie, start making a channel geared towards success.

The SEE–NTS Model. A better model for Online Video Programming.


The Hero–Hub–Help model, which YouTube developed in 2014, has been a helpful tool for video marketers to understand what they can do on YouTube. Namely:

  • Hero content is big events, which you can advertise in a big way. It gets huge attention on the day it’s happening, and then quickly becomes uninteresting again, such as the E3 presentations.
  • Hub content is regularly scheduled content, to keep subscribers (and viewers you’ve reached through the other content) interested in your channel. This content gets watched by your subscribers in the first couple days after upload, and then basically never again.
  • Help content (originally named: hygiene) is helpful content teaching users how to do stuff, i.e. tutorials. This content gets found at any time via search, but doesn't add much value for subscribers of your channel.

Now, this model kinda makes sense if you have a product you're making videos about. But it breaks down once you put it into the context of a normal YouTuber: it doesn't make sense to produce a big event which is only relevant for a week, so Hero content is out. Hub content is more in line with what YouTubers do, but YouTubers do so much more than make videos which are merely consistent and appeal to their current subscribers.

So, out of this model, only a few bits actually are usable for YouTubers, and even these only are so with caveats. So I thought about it a bit and came up with a new model instead:

The SEE–NTS Model

SEE-NTS is short for the following aspects (the model can be thought of as a 3-dimensional space; also, I like to pronounce it as "sea ants"):

  • Subscriber Content. I.e. content made primarily for subscribers, featuring funny in-jokes, references to previous videos, stories that make the creator more relatable to their fans, and such.
  • Evergreen Content. I.e. content which will stay relevant to the world for the foreseeable future.
  • Event Content. I.e. content which is tied to certain events.

— with their counterparts —

  • New Viewer Content. I.e. content which is accessible and fully understandable to someone who has never seen any of your content before.
  • Timely Content. I.e. content which is relevant during a specific window of time only, and then basically never again, e.g. news.
  • Serial Content. I.e. content which you can be sure you'll see more of next week anyway.

The individual aspects make predictions on whether the view distribution will be flat over time, or have a spike shortly after publication:

  • Subscriber content is watched by subscribers, so it'll get most of its views within the first week of publication, while New Viewer content may get discovered by potential new subscribers at any time.
  • Timely content is only relevant shortly after publication, after which it’s old news. Evergreen content is ever relevant.
  • Event content is most watched during the event (→ Tentpoling), while Serial content is watched all year round.

As such, the model explains why Hero–Hub–Help makes the predictions that it does: Hero content is minmaxed for spikiness (Event/Timely, with so much advertising thrown at it that talking about the Subscriber/New Viewer axis is kinda pointless), Hub content is Subscriber/Serial content (and doesn't spike nearly as high), and Help content is minmaxed for flatness.
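
Since the model is a 3-dimensional space, it can be written down as three numeric axes. The scoring rule below is my own toy construction for illustration, not part of the model itself:

```python
from dataclasses import dataclass

@dataclass
class Content:
    """A piece of content as a point on the three SEE-NTS axes, each -1..+1.

    subscriber: -1 = New Viewer  … +1 = Subscriber
    evergreen:  -1 = Timely      … +1 = Evergreen
    event:      -1 = Serial      … +1 = Event
    """
    subscriber: float
    evergreen: float
    event: float

    def view_curve(self) -> str:
        """Spiky vs. flat: Subscriber, Timely and Event all pull towards
        a spike shortly after publication (a made-up heuristic)."""
        spikiness = (self.subscriber - self.evergreen + self.event) / 3
        return "spike" if spikiness > 0 else "flat"

# The Hero-Hub-Help archetypes, placed roughly as described in the text:
hero = Content(subscriber=0.0, evergreen=-1.0, event=1.0)
hub = Content(subscriber=1.0, evergreen=-0.5, event=-1.0)   # slightly Timely
help_ = Content(subscriber=-1.0, evergreen=1.0, event=-1.0)  # minmaxed flat
print(hero.view_curve(), hub.view_curve(), help_.view_curve())
# spike spike flat (hub's spikiness score is much smaller than hero's)
```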

SEE-NTS also allows for other content to be categorized sensibly:

  • Mr Beast's content is no doubt Serial (it's not really a surprise what he'll do next), but features some Event-like qualities (he basically makes his own events in each video by giving away a lot of money). His videos are accessible to New Viewers, yet appeal to Subscribers as well. And the stunts he pulls generally age well, so: his content sits pretty much in the middle and manages to more or less cover all bases.
  • A band doing a concert live stream is an Event for everyone who already knows the band (ie Subscriber-ish), but since music doesn’t really get outdated, it also is strongly Evergreen.
  • Videos like "how to decorate your house for Halloween" and similar seasonal content are Evergreen while the (yearly repeating) Event is going on. This kind of content could technically still work primarily for Subscribers, but realistically it's probably gonna be optimized for New Viewers.

Using SEE–NTS for Content Programming

SEE-NTS can be used to assess a channel’s current standing to make decisions for future content programming.

Most obviously, if the vast majority of a channel's views come from subscribers and all formats on the channel are made for subscribers, that channel may want to develop a format meant to appeal to non-subscribers and draw them in.

If a creator feels like they’re grinding away in a hamster wheel, but can’t afford to take a day off because all their subscribers will lose interest, maybe Evergreen Subscriber content would be able to bridge these gaps in the future.

If a musician can only realistically make one big Event/Evergreen-type video a year and struggles to re-activate subscribers between uploads, making Subscriber/Serial/Timely content in between to fill the gaps and keep people engaged throughout the year may be useful.

Of course, as always: It’s hard to recommend any specifics without knowing the actual channel. I hope however it can help creators, at a glance, find out where they are with their current programming, and where they have potential left to explore.


SEE-NTS as a model doesn't predict how successful content is going to be; it can only predict the rough shape of the view curve. The real world (and "The Algorithm") can of course always throw a spanner in the works by having your viewers receive the video differently than what you designed it for.

Unlike Hero–Hub–Help, SEE-NTS doesn't make content recommendations. For example, it's not entirely clear to me what Subscriber/Event/Evergreen content would even look like, while for Help content, the hint is already in the name, and so are the strategies you should take (i.e. SEO on your customers' troubles).

SEE-NTS is untested as a tool for content programming. The questions that need to be answered in the future are:

  • Is SEE-NTS useful to accurately describe different channel programming strategies?
  • Is SEE-NTS complete, or are there more factors which are essential for programming?
  • Do creators who use SEE-NTS understand their programming better than those who don’t?
  • Is SEE-NTS useful to find gaps in the content programming?

Overall thoughts

From what I can tell so far, the SEE-NTS model seems promising. Even if it fails as a "practical" tool that can tell creators "do this", it may still be a worthwhile academic tool, as it categorizes content way better than Hero–Hub–Help.

Of course, I’d love even more for it to be useful as a practical tool. I guess time will tell how good this thing is.

Gnome 3. A review.


You may know Gnome as the "ah, something simple, which… — wait, where are my desktop icons and task bar?" desktop environment. Which, no doubt, it is; that's what I liked about it when I first started using it in version 3.8 all those years ago. But recently, I discovered that it isn't just that: it actively helps make things more seamless.

Let’s back up for a bit.

My theory on Desktop Environments is, in a nutshell, “if you notice them, they do something wrong”, or in other words: “A good desktop environment lets you focus on your tasks without getting in your way”. This basically also is true for programs in general, if it lets you do the thing you want to do easily and in one flow, it probably is a good program.

This effectively explains why Windows 10 keeps greatly displeasing me every time I use it. Its design changes between the most recent Fluent Design and Windows 2000/XP-style, depending on which program you use (even built-in settings programs), and things like the dark theme pretty much don't work on anything at all. Even simple settings changes, like adjusting the mic gain, require you to either dive into almost-invisible text links in the Settings app or find the right pop-up window of the old Control Panel. And after each somewhat major update, Cortana and Edge greet you yet again. On top of that, there's my personal clumsiness, which causes me to click on the wrong icon in the taskbar not quite daily, but often enough that I now have "padding apps" between apps which take very long to load, so that a misclick doesn't cause years of waiting. All of these things take me out of "the zone" whenever I encounter them, and I encounter them very frequently.

Gnome’s quick access to settings

Gnome beats this any day of the week. Changing the mic gain can be done right next to where you know the volume slider is, whenever the mic is active. And since loads of apps are GTK-based anyway, the dark theme (or any theme, really) gets applied pretty much universally, with the notable exceptions of the major browsers and Blender – all of which have their own, very capable theming options anyway – and Qt-based apps.

Encountering a Qt-based app in Gnome is weird every time, but likewise, encountering a GTK-based app in KDE is weird as well. And besides the minor problem of them looking kinda out of place compared to the rest of the system, there is the slightly more major issue that Qt apps tend to use different things for everything. For example, if I want to open a file in a GTK-based program, it effectively gives me Nautilus (aka Gnome Files), whereas Qt-based programs give me Dolphin. But look closely at the difference in the folders on the left-hand side:

Top: Nautilus-based “open file” modal (here: for Discord), Bottom: Dolphin-based “open file” modal (here: for kdenlive).

Where Nautilus has shortcuts for the images/documents/music/videos folders, Dolphin instead has basically the same, just slightly different-looking icons for a completely different function: clicking on them filters the current folder for the type of file you're looking for. And don't get me wrong, it's a very useful option, and on KDE, this Dolphin modal does have the same shortcuts to drives and places. It's just that this particular Qt-to-GTK port is kinda confusing, because it breaks the "there are shortcuts to your folders on the left" model that is established everywhere else by putting a search filter there instead.

But this is a small price to pay for what is my favourite part of Gnome: The Activity Overview.

The Activity Overview combines so many things into one place, it’s just awesome. Dead center, you have all your open windows. Not as window previews forced to the same size or just a bunch of icons as you may know from alt+tabbing or taskbars, but as actual windows which do a very decent job at conveying which windows are big and which aren’t. If you do need a taskbar, you can find it here as well, and if you need something which resembles OSX’ Launchpad and Spotlight search, they are here as well. In this view you can close windows you no longer need, or drag them to other screens, both real ones and virtual ones.

Opening the Activity Overview is as easy as pressing Super (the "Windows" key) or flinging your cursor into the top-left corner. It feels so good to use, and I use it so often, that it's become second nature: whenever I'm using a desktop environment which doesn't have it, I actually start to struggle a bit, to the point where I put the taskbar at the top in Windows and KDE, so that flinging my cursor to the top left at least brings it into the vicinity of the "Start" button.

Until recently, my review of Gnome would’ve stopped about there. The activity overview is awesome, and the rest is out of the way and (mostly) consistent, therefore, it’s a good desktop environment for me and I will continue using it whenever possible.

But, as I alluded to in the beginning, it’s taking steps towards making things more seamless.

Gnome's Notification center

As a small example, the notification center shows notifications (duh) and your calendar, but also gives you player controls for the YouTube tab that is currently playing. So you can pause and skip videos playing in the background at any time, without having to find the right browser tab.

The bigger example is Gnome Online Accounts. It isn’t actually that new, but I hadn’t bothered trying it before, because what I associate with “connect your account” is that it just grabs your email and avatar for account-creation purposes, and maybe starts posting Farmville status updates to your timeline if you aren’t careful. But that isn’t what’s happening here. If all you have is a Google account and you put it into Gnome Online Accounts, it automatically…

  • sets up your Email account in Geary and Evolution,
  • syncs your Google calendar with Gnome Calendar and Evolution,
  • imports your contacts in Gnome Contacts and Evolution,
  • adds a remote server connection to Google Drive in Gnome Files,
  • adds Google Documents to view in Gnome Documents,
  • imports photos from Google Photos to Gnome Photos,
  • possibly does more! I haven’t discovered all of the integrations yet.

Now, this sounds exactly like what Android does with the Google account, OSX with the AppleID/iCloud and Microsoft with the Microsoft Account, and to some degree, it is. The difference, however, is that it doesn’t try to pull you into its ecosystem so it can extract money from you for more storage space or whatever; rather, it lets you keep your existing accounts and allows you to work with them faster. For example, by letting you move stuff from and to your favourite cloud provider without having to open a browser, download the file, find it in the downloads folder and then move it about.

Of course, we are still in FOSS-land, so some of these integrations are kinda janky – I notice, for example, that the Gnome Files/Google Drive integration refuses to go much faster than 90 kiB/s despite me sitting on a 25 Mbit/s line – and some of the Gnome-specific apps aren’t quite as stable as the old guard – Geary sometimes refuses to connect to accounts until the system restarts, and sometimes insists that I’m working offline even though I’ve done nothing but watch YouTube videos for the past 3 hours.

And this exposes the one gripe I do have with Gnome’s UX imperative to keep things simple: the Geary team won’t build in a way to manually reload. Instead, it shows you a banner saying “You are now working offline”, which you can dismiss, and that’s it. Which is immensely frustrating, because if you as a user are encountering an error that isn’t your fault, are you really supposed to… just wait until the program eventually decides to fix itself? Or did it fix itself and I am online again, but the banner didn’t remove itself afterwards? There’s no indication of when the next refresh happens either, because the only update-related setting in Geary is “automatically check for new mail”, which is either on or off. So when I see the banner, do I just click X and wait around for… ten minutes? Is that even enough? That’s not what I do! Monkey no patience! Monkey do thing! MONKEY SMASH BUTTONS!

… I’m beginning to wonder if Windows’ automatic “error fixing” thing actually would be a good feature for Gnome, because even if it doesn’t do anything, it at least lets you play around with a thing until it fixes itself…

Mockup: An “you’re offline” banner that’s actually useful and lets me retry manually.

So yeah. Gnome. Very awesome almost always, but can be kinda frustrating when it doesn’t work. Highly recommended, 5/5 toes. Get it on

Observations of the VTuber scene

Moin. This thing is mostly observations of the VTuber scene a few weeks in. I end up making some content recommendations in it, so it might be useful to long-time VTubers as well, though it shouldn’t be understood as “this is how you should do something”, but rather a “this is how I see it being done currently”.

The obvious

Starting with the obvious: as a VTuber, your body can look however you want, but your movements and expressions are typically fairly restricted. Even if you are 3D and have roomscale tracking, you still can’t really interact with objects or other people in a convincing way. At least not now, and not in real time.

That said, even with these limitations, being a VTuber just gives you a lot of benefits that you wouldn’t get as a regular person:

  • Full privacy. Which you’d also get doing Podcasts, radio or voiceover-stuff in general, but all of which would lack…
  • Facial expressions. Just having head bops and wiggles and a mouth that can change between an eternal smile and a 😀 when talking is enough of a visual anchor for me that I can actually watch a just-talking stream of a VTuber without feeling the need to do something else. (For comparison, I cannot listen to podcasts on a couch, as my eyes start to wander off fairly quickly, which then leads me to doing something else and abandoning the podcast altogether, more often than not.) Now, you also get that by just talking to a camera, but then you’d be giving up your privacy.
  • A more-interesting-than-average brand, without doing anything. Even as the most generic anime girl, you’re still way, way more recognizable than a generic gaming channel that has some 3D-dubstep intro as its only “branding” element.


Umbrella brands are surprisingly powerful. You can see this most clearly with Hololive and Nijisanji IMHO:

The Hololive brand is super strong. Every new member gets to start out with thousands or tens of thousands of subs, simply because it says “hololive” next to it. And that already sets expectations: It’s going to be a woman, the woman is going to be an idol, and there in general won’t be any unbearable technical issues.

Nijisanji, in contrast, doesn’t carry these expectations as strongly, although their members also start with at least a few thousand subscribers. That is partially because there are just so many more members, and partially because new members could be anything: man or woman, quality ranging from good to “average new YouTuber”, technical ability ranging from good to permanently clipping audio. That said, Nijisanji offers quite a valuable service (VTuber avatars and support) to quite a lot more people. And this non-exclusivity gives the company quite a bonus in my book.

Update: It has been pointed out to me by various people that I completely misunderstood Nijisanji and the impact they’ve had, and that Nijisanji ID’s technical troubles are more a problem of Indonesia not having that good of an infrastructure. The problem is, these technical issues, though not their fault, translate into the image I see of them, and all the awesome stuff they did in the past is invisible to me unless I really start digging. To be clear, this is an issue with the brand, not an issue with the individual creators in it. And even though the different regional branches are more or less independent from each other, the overall brand is still Nijisanji Region (apart from China), not some wildly different naming like you get with Mars, Twix and Snickers (which all belong to Mars).

These umbrella brands are fairly rare on YouTube these days; for me, only Machinima comes to mind. Even the EDUtube empires of the Green brothers or Brady Haran don’t have an umbrella brand. Instead, they have SciShow and CrashCourse with direct sister channels, but keep those brands fairly separate.


Formats really matter. Most VTubers do game streams and talk streams. Those who do game streams tend to get discovered more easily, while those who do talk streams tend to get loads more super chats. For example, Flare manages to outrank Aqua in super chat revenue, despite having less than half her subscriber count.

Doing unique formats which are more than just the generic talk/game streams also seems to be an advantage:

  • Coco grew insanely fast with her Asacoco news show,
  • 3D shows (especially 3D debuts) perform super well,
  • non-standard game streams like speedruns/races work quite well, and
  • non-standard talk streams (interviews, fairy counselling, etc) work as well.

This is true across all of YouTube, btw: having a unique format at least gives you a chance at standing out. And even though you run the risk of picking a format that just doesn’t resonate with viewers, you at least are looking for doors with each format you try, instead of bashing your head against the wall with generic gameplay in the hope of breaking through eventually.

Highlights and clips are super important, especially for the Japanese scene. I don’t think Fubuki would be where she is now without her viral meme videos, and I don’t think any of them would have anywhere near that large of an international audience if it weren’t for the translators, who only translate highlights rather than whole streams.

I do think that VTubers (and streamers in general) should try hiring fans to make highlight videos and upload them to their own channel, so that their channels become more accessible for those living outside of the normal streaming timezones. Nijisanji in particular has been getting better at that recently, on their company channels at least.


Ultra-low-latency with DVR disabled is everywhere. I don’t think this benefits any channel that gets more than 100 concurrent viewers or so, because at those sizes, the chat starts being more delayed than the stream itself. This is because YouTube polls chat for new messages at set intervals instead of sending out each message on its own, and those poll intervals get longer as more messages are sent.
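A rough sketch of why busy chats end up more delayed than the stream (an illustrative model only; the thresholds and intervals are invented, not YouTube’s actual values):

```python
def chat_poll_interval(messages_per_minute: float) -> float:
    """Illustrative model: the busier the chat, the longer the gap
    between polls, so messages arrive in bigger, later batches.
    The thresholds and intervals below are made up for illustration."""
    if messages_per_minute < 60:
        return 1.0  # quiet chat: poll roughly every second
    if messages_per_minute < 300:
        return 3.0  # medium chat: small batches every few seconds
    return 6.0      # busy chat: a message can sit unseen for ~6 seconds

# A freshly sent message waits up to one full poll interval before any
# viewer sees it, which can exceed the ~2s ultra-low-latency stream delay.
for rate in (30, 120, 600):
    print(rate, "msgs/min ->", chat_poll_interval(rate), "s worst-case delay")
```

So past a certain chat volume, the viewer reads chat reacting to something they saw seconds ago, which defeats the point of ultra-low latency.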

Also, it makes it rather difficult to watch the stream on slightly subpar connections, or just if you’re half a planet away. This is because any rebuffer that sets the latency back to >5s will trigger a skip ahead, which in turn causes another rebuffer, resulting in large parts of the stream being spent buffering. Really, as soon as you’ve got more than a few viewers, Low Latency is the way to go, with Normal Latency being great for anything that doesn’t have meaningful chat interaction built in (eg singing streams).
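That rebuffer spiral can be sketched like this (a hypothetical model of an ultra-low-latency player, not YouTube’s actual player code; only the >5s threshold comes from the paragraph above):

```python
def next_action(behind_live_s: float, buffered_s: float,
                max_latency_s: float = 5.0) -> str:
    """Hypothetical ultra-low-latency player logic: once a stall has
    pushed playback more than max_latency_s behind the live edge, the
    player jumps forward, dropping the buffer it just built up and
    setting itself up for the next stall on a slow connection."""
    if buffered_s <= 0.0:
        return "rebuffer"    # nothing left to play: stall, fall further behind
    if behind_live_s > max_latency_s:
        return "skip_ahead"  # jump to the live edge, discarding the buffer
    return "play"

# On a subpar connection the player cycles:
# rebuffer -> fall behind -> skip ahead -> empty buffer -> rebuffer ...
print(next_action(behind_live_s=8.0, buffered_s=2.0))  # skip_ahead
```

With DVR enabled or a normal-latency setting, the player could simply keep playing from behind the live edge instead of forcing that jump.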

Sexuality is quite a thing. It probably is easier to be that sexual in public if your real face isn’t attached to it, and it’s quite surprising how far you can get with that on YouTube without even being demonetized. On top of that, it tends to generate quite entertaining content by default. That said, I think the process of sexualising others is more problematic in the VTuber scene than in other communities on YouTube, whether that is fans commenting on it on every occasion, bosses putting their talent into swimsuits, or character designs having tits so large that you’re running out of alphabet to describe them. I hope for the women involved that the disconnect between their character and the real person can help with this.

VTubers in general seem to produce a disproportionate amount of live content, with the notable exceptions being Kizuna AI and Ami Yamato. I think there’s a lot of potential for non-live content which strictly works with motion capture (as opposed to hand animation). It doesn’t need to be the current livestreaming VTubers doing that, either; in fact, most of the VOD content I see from the current live VTubers is somewhat similar to the early 2006-level YouTube nonsense. There really are a lot of different directions to explore here. Putting it out there right now: I want to see a VTuber with a degree in astronomy teach me about supernovae.

VTubers being mostly Japan-based obviously results in a lot of Japanese content. Search interest in VTubers in the USA is growing quickly though, so any VTuber who can do English content has an advantage here. Also, assuming that VTubers become popular in the US, you can bet that they’ll spread to the rest of the world as well, so it might be worth starting to do VTuber content in your local language, so that by the time it gets big, you’ll already be ready and at the forefront.

A lot of VTubers have been doing daily streams. And while that definitely isn’t bad, please, do yourself a favor and take days off, where you don’t spend a single thought on your channel. Daily content tends to be unsustainable, with even the largest YouTubers burning out with that after just a few years. More well-being advice can be found in the Creator Academy.

Overall, …

… I’ve been very impressed by how compelling the content various VTubers make has been to me. I’ve never watched more than 5 episodes of any anime, I think (including Pokemon or the Simpsons), but the charm of a dog girl doing cute things while playing Doom, or a chubby devil trying to convince an art student that eyes don’t grow back, just gets me. More recently, I’ve been hanging out with the Indonesian crowd, as their content is 75% English anyway, so I actually have a chance of getting the jokes.

In that sense, otsuu, I’m strapped in and ready for a wild ride.

11. A Reflection on Beyond the Edge of the World


I hope you had fun reading this short story. However, I’m interested in improving, so I’ve collected some criticism both from me and from others.

Alice says:

1. The protagonist is rather forgettable

I think I agree with this. I originally wanted to make them genderless (you know, “I” can be anyone! Even you!), but by doing so I also made them somewhat characterless. And with the plot going on to let the protagonist find a girlfriend, it’s very probable we’re dealing with a male protagonist, so I completely undermined the first idea anyway.

2. My world building is rather weak

I think this is partially because English still is a foreign language to me, so I don’t know the best words. Maybe I should ask Trump if I can buy some of his.
Further, I spent too little time on it. For example:

I had forged my fair share of custom tools in the factory, from the smallest springs to the biggest wrench, but I always had access to the never-ending power of steam. Ralph had his right arm. Well, he also had his left arm, a hammer and an anvil, but all in all, if he wanted a piece of metal to be flat, he couldn’t just plop it into a steam hammer and wait for a couple seconds or minutes, he had to work it flat, by hand, and re-heat it often. (Chapter 5)

This part is meant to convey to the reader that in Valand, industrialization is going on, while in Greenland, it’s still all muscle power. (I do think the joke in it worked. I wrote it so long ago that it completely caught me off guard this morning while I was reading through it again.)

To make things worse, I did say in the beginning that industrialization is restricted in Valand, so we don’t even know whether blacksmiths up there usually use hand power. Overall, I think my approach this time (write first, think later) hindered me from worldbuilding properly. On the other hand, it did allow me to write the story remarkably quickly. I could’ve fixed this in post, writing beautiful and consistent descriptions after the story was done, but I kinda just wanted to get it out.

3. The logo looks like Smash Bros, and way too clean.

Firstly, it looks like my brand identity, thank you very much, and secondly, I think both Smash Bros Ultimate and I are trying the same thing here: showing sunrise from the ISS. In my case, it symbolizes the edge of the world; in SSBU, it symbolizes the world as a whole.

Sunrise as seen from the ISS. Image: NASA/ESA

That said, my Logo thingy was thrown together in 15 minutes. Had I wanted to execute my other idea, a view from Valand over the lower lands, I’d have to spend quite some time in Blender making it work. Time which I didn’t have this time around, because there was a deadline. So, have the logo thingy one last time:

Logo: Beyond the Edge of the World

4. I’m jumping around somewhat and not bringing ideas to their end.

In particular, in chapter 5 there’s a bit where the protagonist is fixing the machine and the smith is pleased with the progress. Alice says it’s confusing that the smith would say that when they had only just started.

In that particular case, I’d agree. I did have a note there this morning saying [[MORE]], but left it at that, so that part was plain laziness. I don’t know if this is a problem on a larger scale, because I do intentionally jump to skip boring bits, especially between chapters. I kinda write them like L-cuts, with a brief summary of what we as readers missed at the beginning.

The question is whether this is as annoying as a jump cut, or not too noticeable like an L-cut.

Algorithms say:

I’m using too few transition words.

Yoast in particular likes to yell at me for this.

I think it flows pretty well and is easy to read, but again, I don’t necessarily have the right feeling for the language. It also yelled at me for not using subheadlines everywhere, and for using the same sentence beginnings when I used repetition as stylistic device.

I say:

I did not follow Vogler’s Hero’s Journey.

If we take the city episode as the approach to the inmost cave and the ordeal as per Vogler, it’s a bit weird that the protagonist gets no reward immediately; instead, Lily is both reward and road back much later on. If we instead (as I originally intended) take the path to the volcano as the approach to the inmost cave, it’s lacking a fight between good and evil (ie the ordeal) altogether.

Either way, I break with this scheme further by having the resurrection before the road back.

I did not follow Swain’s Scene and Sequel method

… in which characters have a goal, followed by a conflict, followed by disaster, all of which is the scene, ie the part where the plot develops, followed by reaction, dilemma and decision, ie the part where the character and story develops. I think my character could’ve been deeper if I had used it more.

I dislike my dialogues.

I just default to one character asking all the time and the other answering the questions. I tried to get away from this as often as possible, but the dialogues still feel kinda meh.

I keep shifting into indirect speech and speech summaries.

This probably is because I dislike my dialogues, so rather than improving them, I try to avoid writing them. This of course abstracts the dialogue to the point where you’re not really in the story, and instead reading the summary of the story.

The end comes by too quickly.

This of course is due to limited time and my long training in flash fiction, which I have to bring to an end super fast when I run out of time (usually <90 mins). I had an entire segment planned around the volcano, with the protagonist meeting people there who help him sail to great heights using the volcano updrafts, followed by an air battle, followed by him meeting the captain and the crew again — but all that kinda got binned because I really had to finish this, as with some of my other recent texts, because it’s part of the Creative Writing course of the IDW.

All in all, I think this story went surprisingly well, given the limited time and it being the first time I’ve attempted something of this length. I may do more things like this in the future.

But for now, I’ll probably go back to Blender and video effects for a bit. And I need to redesign my website, too. See you in a few weeks, or a few months!

Oh, before I forget: comments are open on this post. I typically keep them closed because I’ve gotten nothing but bots so far, but, you know, maybe some of you would like to share your praise and criticism here. Go ahead, but note that I will keep your data if you do comment.

The Hypocrisy of the Rights Exploiters

A group of rights exploiters, newspaper publishers and magazine publishers has started a PR campaign called “Gerechtes Netz” (“fair net”) against Google, Amazon and Facebook (GAF for short). In this campaign, they advocate for more data protection, for fewer tax loopholes, against monopolies and for the protection of minors. According to an internal memo, the goal of this campaign is to make politicians, civil servants and judges more hostile towards GAF, presumably so that they implement Articles 15 and 17 of the copyright reform (which has to be transposed into national law by June 2021) in the spirit of the exploiters rather than that of GAF. Or maybe not.

So far, so uninteresting. It is hardly surprising that the exploiters’ lobbying picks up where it left off with the copyright reform. What does bother me, though, is the hypocrisy of the whole thing, because almost everything they criticize GAF for happens constantly with the campaign’s initiators themselves.

Data protection and the fundamental right to informational self-determination

It is entirely true that the amount of data accumulating at GAF is problematic. But the publishers contribute their own share towards even more data accumulating: collects data through over 60 different service providers, including Google. Some of them are incredibly opaque: Sourcepoint places a tracking pixel under the name “”. As the name suggests, this is an adblock blocker. Criteo, another data collector, links online tracking with offline tracking, which allows ads to be personalized based on how long you spent in a store.

The strategy of is also quite funny: anyone who doesn’t want to be tracked can take out a “PUR” subscription for €6/month. In return, only your email, password, (optionally) phone number, payment information and a “unique device identifier” (so, a browser fingerprint?) are stored for at least 10 years.

If you widen data protection to privacy in general, another aspect comes into play: How often do you see the note “redaction by us” on BILDblog because some media professionals decided that a random person, usually caught in a rather unflattering situation, now belongs in the public eye and deserves to be recognized by millions? How often do injunctions have to be issued because media professionals intruded into people’s private lives?

Making money with other people’s content

The reasoning behind the ancillary copyright for press publishers: GAF make money with newspapers’ content. I won’t repeat the arguments against the ancillary copyright here, we’ve had that often enough this decade, but here, too, the hypocrisy shows: online newspapers and TV stations often enough make money themselves with so-called “freebooting”. A “viral” or “crazy” video is taken from somewhere, sometimes given captions or a voiceover, and then uploaded to their own video player, which of course plays an ad before the clip.

If Viralhog et al. offer the video for licensing, it may get licensed; if not, the creators are usually simply not informed of the use. Should the creators catch wind of it after all, the video is removed on request, but by then the “freebooter” has already collected the bulk of the revenue the video will ever make.

Monopolies and diversity of opinion

The publishers’ associations argue that Google’s monopoly position violates the publishers’ fundamental economic rights, which endangers diversity of opinion, which in turn can be saved by the ancillary copyright. A bit abstract for my taste, but fine.

What is very problematic for diversity of opinion, however, is the increasing concentration of newspapers.

Whether you read the Kieler Nachrichten, the Hamburger Morgenpost or the Berliner Kurier: the national section comes from the RedaktionsNetzwerk Deutschland.
In Nordwestmecklenburg, the Lübecker Nachrichten and the Ostseezeitung compete with each other, but the local section for both is written by a shared editorial team.
I grew up with the Flensburger Tageblatt; my grandparents had the Schleibote. Both come from sh:z, look identical and carried, apart from the local section, exactly the same texts. There are no other newspapers for these regions. It took me a long time to realize that different newspapers are normally supposed to print different texts, and that diversity of opinion really should exist in regional media, too.
See also: Der bunte Kiosk der Presselandschaft – Die Anstalt from May 22, 2018.

Now, you could say it is Google’s fault that editorial teams have to be cut down like this. But you could also almost believe that, in the face of digitalization, new business ideas are needed.


The protection of minors is probably more of an “Innocence in Danger” point, useful to the exploiters only as a tool for emotional manipulation, and of course the hypocrisy is at work here, too: just go to , then to “news”, then to “BILD-Girl”, and you immediately see breasts without any age check whatsoever. Or go to “Unterhaltung”, then “Erotik” and scroll down a bit, and you land at “Visit-X Girls”, a site whose description reads “amateurs on the sex cam, uncensored HD porn films & live TV for adults”. Or go to “Video” and then once again “BILD-Girl”, or alternatively “sexy clips”.

Innocence clearly is in danger here.

But with news, too, it happens often enough that disturbing images and videos are shown in the coverage. Sometimes there’s a note along the lines of “contains disturbing images”, sometimes there isn’t, and I have never seen any kind of age verification at an online newspaper, be it “enter your age” or “you need an account [in which your age is 18+, but we won’t tell you that part here] to watch this video”.

This PR campaign (deliberately) makes the mistake of talking only about GAF, even though the entire internet, initiators included, is part of the problem. If GAF suddenly started taking data protection seriously, little would change, because the whole industry has become a hidden-object picture in which it is hard to even spot the big players. And if Google completely de-indexed the Mindgeek porn sites, children searching for “sexy clips” would still find .

But maybe now is a good time to expand the campaign on our own and convince politicians of the merits of a strong ePrivacy Regulation. Maybe then VG Media’s “wish for a broader discussion for all citizens” would feel better fulfilled.

Why make it simple when it can be complicated? The Schleswig-Holstein semester ticket


Schleswig-Holstein now has a state-wide semester ticket, and in principle that is a good thing. It’s just that the execution gets an unnecessary amount wrong.

It starts with the ordering process. The ticket is based on the solidarity principle, meaning everyone pays, whether they use it or not. But just because you’ve paid for it doesn’t mean you actually get the ticket, no. You first have to order it via . Every semester, anew.

Next up, delivery: you get the ticket either as an app or as a paper ticket. The paper ticket arrives by mail, and if you lose it, you can have it re-sent once per semester for €35. Once you’ve decided on a ticket variant, you can’t switch for the rest of the semester.

Maybe, in a vacuum, all of this sounds like a halfway sensible solution that isn’t worth getting upset about. But compared to the solutions used until now, it is awfully stilted and bureaucratized.

I present: the student ID card.

The student ID is a veritable Swiss Army knife. Naturally, it has the university, your name and your photo on it, but also your matriculation number (for exams), an RFID chip (for chip-locked doors and the cafeteria), a barcode (for the library) and a rewritable thermal print field for the semester ticket.

Yes, you read that right. The student ID already has a semester ticket on it, which so far was valid (and remains valid in Flensburg and Kiel, but not in Lübeck) for public transit within the city’s transport association, one semester at a time. At the start of each semester, you can take the ID to a validation machine, which takes the card, hums a little, and then pushes the “valid until” date forward by half a year.

Which leaves me with just one question: Why?

Why not simply keep using the student ID?
Why the strict separation between paper ticket and phone ticket?
Why is there no option for a PDF download with a QR code, like you get with online tickets?
Why complicated, when simple works too?