Over the last few years especially, I've noticed that folks online have become increasingly superstitious. Some internet subcultures have developed rituals in an attempt to "appease" "the algorithm" – which they perceive as a sort of malevolent magical force.

I think it's interesting and a bit sad how little people seem to understand about the basic functionality of the platforms they use. Especially people who tie their careers to those platforms – YouTubers, digital artists, et cetera.

Recommendation algorithms are very complicated pieces of code, but what they do isn't that difficult to understand. Content that gets more watch time and engagement (clicks, likes, comments, shares) is recommended to more people. The system also weighs a variety of other factors: what type of videos you watch, what type of videos people in a similar demographic watch, how long a video is (and thus how many ads it can serve you), and about a bazillion other things.
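To make that concrete, here's a toy sketch in Python of what "rank by predicted engagement" looks like. Everything here – the weights, the field names, the affinity heuristic – is invented for illustration; no platform's real ranker is anywhere near this simple.

```python
from dataclasses import dataclass

@dataclass
class Video:
    topic: str
    avg_watch_time: float  # seconds viewers stick around, on average
    click_rate: float      # fraction of impressions that get clicked
    like_rate: float
    comment_rate: float
    share_rate: float

@dataclass
class User:
    watch_history: list  # topics of videos this user finished

def affinity(video, user):
    # Crude stand-in for "people like you watch things like this":
    # how often this topic shows up in the user's own history.
    if not user.watch_history:
        return 0.5
    return user.watch_history.count(video.topic) / len(user.watch_history)

def score(video, user):
    # Made-up weights. Real systems learn these from mountains of data,
    # but the shape is the same: predict engagement, then rank by it.
    engagement = (
        0.5 * min(video.avg_watch_time / 600, 1.0)  # cap at a 10-minute watch
        + 0.2 * video.click_rate
        + 0.1 * video.like_rate
        + 0.1 * video.comment_rate
        + 0.1 * video.share_rate
    )
    return engagement * (0.5 + affinity(video, user))

def recommend(videos, user, n=10):
    # "The algorithm prefers X" just means X scores high here, because
    # viewers like this user already engaged with things like X.
    return sorted(videos, key=lambda v: score(v, user), reverse=True)[:n]
```

Notice that nothing in there has preferences. It just adds up what viewers already did.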

For some reason, though, you generally see people speaking as though the machine itself is making decisions – they talk about how the algorithm "prefers" certain types of content, how some things are "good for the algorithm." Which always strikes me as odd, because what's actually happening is that people prefer certain types of content and the algorithm reflects that. If a YouTube video gets fewer views than expected, the creator often blames "the algorithm" as if it were a separate entity from the viewers.

Articles and videos with clickbait titles perform better because people click on them more. Clickbait makes people want to click, so they do, so people who want lots of clicks make a lot of clickbait.

Self-censorship on TikTok is probably the most interesting and bizarre example; there is no actual evidence to back up the common belief that users are penalized or censored for using swear words or for talking about sex or death. But people believe it – very strongly! They believe it hard enough to have changed how they talk and write about sex, rape, and suicide in non-TikTok contexts. The specter of "the algorithm" follows people regardless of medium or context; it is, again, a malignant magical force that punishes and rewards behavior according to unknowable whims.

Leave out a bowl of milk for the algorithm, or your crops will wither. If you perform the rituals correctly, the algorithm will bless you; if you fail to appease it, it will punish you.

Of course it's entirely possible that videos about sex or suicide are de-emphasized by TikTok's (reportedly very sophisticated) algorithm for any number of reasons. Maybe children and teenagers, who make up a massive share of the app's userbase, are uncomfortable with videos mentioning sex playing on their phones, so they skip them. Maybe suicide is a bummer for people looking for funny cat pics and sexy camgirls dancing, so they skip past videos talking about it.

Those things would create patterns of user behavior that would be reflected in the recommendation algorithm. These patterns might be shifted slightly by euphemisms – a kid watching without headphones might not hurriedly skip past a video about the "spicy" scenes in a book the way they would one about the "sex" scenes – or the patterns might not shift at all, because the content is the same.

If a platform has a word filter that penalizes you for using the word "kill," and everyone starts saying "unalive" instead, the people running the platform will probably notice and add "unalive" to the filter. Moderators and site managers are not, in fact, faeries who are honor-bound to play by certain ironclad rules. If "topic" is banned, then so is "t0pic". Saying "unalive yourself" instead of "kill yourself" is not actually going to change the outcome if the comment gets reported.
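Here's roughly what that looks like – a minimal, hypothetical word filter in Python. The leetspeak map and euphemism list are stand-ins for whatever a real moderation pipeline maintains; the point is that catching a new workaround is a one-line change.

```python
import re

# A hypothetical word filter, to show why "t0pic" and "unalive" don't
# outsmart anyone: text gets normalized before matching, and the lists
# below get updated the moment a workaround catches on.

BANNED = {"kill"}
LEET = str.maketrans("013457", "oleast")  # 0->o, 1->l, 3->e, 4->a, 5->s, 7->t
EUPHEMISMS = {"unalive": "kill"}          # adding a new one is one line

def normalize(word):
    word = word.lower().translate(LEET)
    return EUPHEMISMS.get(word, word)

def violates(comment):
    return any(normalize(w) in BANNED for w in re.findall(r"\w+", comment))

print(violates("k1ll yourself"))     # True
print(violates("unalive yourself"))  # True -- the workaround got noticed
```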

At any rate, regardless of how the TikTok algorithm does or doesn't work, it certainly can't hurt anybody in conversations IRL, on their personal websites, in DMs, or in any other context that isn't a TikTok post. And yet! The superstition is a sticky one, it seems.

Megaplatforms do rely heavily on automated processes for moderation; they have to, at the scale they operate. Pretty much every platform with any kind of moderation at all uses word filtering. Word filters also always have obvious workarounds – and sometimes less-obvious failure modes: if you try to say "cucumber" on Neopets, it will tell you not to use naughty language because of the "cum" in the middle. If people on a platform start employing particular workarounds en masse to skirt the rules, the word filters can and will be updated. Unlike "cucumber," there is no context in which "unalive" might refer to anything besides death, so it's a very easy thing to add to whatever word-banning functionality TikTok might have. If it has one: it doesn't matter which word you use, so you might as well say "die." If it doesn't: TikTok does not actually care, so you might as well say "die."
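The "cucumber" bug, by the way, is the classic Scunthorpe problem: a filter doing naive substring matching instead of whole-word matching. A quick sketch of the difference – purely illustrative, I have no idea how Neopets actually implements its filter:

```python
import re

BAD_WORDS = ["cum"]

def naive_filter(text):
    # Substring matching: flags "cucumber", "document", "circumstance"...
    return any(bad in text.lower() for bad in BAD_WORDS)

def word_boundary_filter(text):
    # Whole-word matching avoids those false positives.
    return any(re.search(rf"\b{re.escape(bad)}\b", text.lower())
               for bad in BAD_WORDS)

print(naive_filter("my cucumber crop is thriving"))          # True -- oops
print(word_boundary_filter("my cucumber crop is thriving"))  # False
```

"Unalive" is the opposite case: there's no innocent word it hides inside, so it matches cleanly either way. The euphemism only works for as long as nobody bothers to add it.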