It's Twilight of the Mods for Bluesky and Reddit

Moderating a site isn't easy—just ask Elon Musk. But Bluesky and Reddit are contrasting examples of how not to do it.

These are strange days for people who care about trust and safety on platforms. Historically, many people have suggested that either more effective central moderation (a platform owner intervening directly in policing the content of the platform) or better decentralized moderation (allowing users to curate their spaces through community-driven moderation) could pave the way to a better social media landscape—or, ideally, some alchemically balanced combo of the two. But, in true Silicon Valley fashion, one platform is centralizing in the worst way possible, while the other is decentralizing catastrophically.

Of late, Reddit and Bluesky are showing how to fail at both: one in pursuit of an IPO that is destroying the very thing that made the site valuable to its users; the other in pursuit of a dream of decentralization whose failures quickly tarnished the site and threw its grandiose claims into doubt.

The problems at Reddit are complex, but, in brief, the company decided to charge users for access to its application programming interface (API), which had been free since 2008. The financial motivations for the move, its knock-on effects on the site’s army of volunteer subreddit moderators, and how poorly Reddit has handled the fallout add up to a crisis for the site. The original API change prompted a mass “blackout,” with moderators restricting access to their subreddits and blocking off large, popular parts of the decentralized site as a kind of strike.

For mods, the stakes were especially high. The API changes threatened to gut the third-party applications and bots that had made their jobs significantly easier. Popular third-party mobile apps for reading Reddit, such as RiF and Apollo, were especially useful to the visually impaired moderators of /r/Blind because of their accessibility features, and mobile moderation tools in particular were hit hard. Now, in the words of longtime /r/GirlGamers moderator Jaime Klouse, volunteer mods are “thrown back to 2015 when there were no useful tools and you had to moderate by literally reading every single comment submitted to your communities.”

Reddit has always been a byword for toxicity, but /r/GirlGamers, an inclusive gaming community, was one of several subreddits that provided an alternative model through careful enforcement of collaborative norms guided by a strong sense of ethics.

Moderator Swan Song reached for a rather apposite metaphor. “We weren't just volunteer knights, we were volunteer blacksmiths and armorers too, crafting our own powerful tools to aid us in defense efforts,” she said, referring to the various tools that free API access had made possible. Another mod, iLuffhomer, said that Reddit’s saving grace was that it “allowed us to moderate as we saw fit. Now, it feels like Reddit doesn't respect what we do.”

If a community is to moderate itself, giving its regular users a stake in the day-to-day happenings of their online watering hole, it stands to reason that “empowering” them (that much-beloved corporate buzzword for outsourcing responsibility) requires giving them the tools to do so. For the moment, Reddit’s moves seem designed to retain only a veneer of community moderation, undermining the very things that made it, and the site, worthwhile. It’s a warning to other, newer sites that will one day go looking for more money as well.

Meanwhile, Bluesky was sent into a tailspin when one of the site’s earliest power users, Aveta, revealed that a 16-day-old account whose name was simply the N-word had been allowed onto the site. Similar anti-Black slurs were quickly uncovered as usernames on active Bluesky accounts; the platform remains invite-only, suggesting some trollish exploitation of an invite tree. Aveta and others led a campaign to highlight the problem and demand changes, such as the expansion of Bluesky’s Trust and Safety Team.

The ensuing farrago was an entirely self-inflicted wound on Bluesky’s part, and a reminder of the dangers of its moderation-light approach. (Notably, at least one of the investors from the latest round of seed funding for Bluesky has confirmed that they’ve been in touch to express their grave concerns about the N-word scandal.)

Its aspirations toward composable moderation, with most day-to-day curation and moderation of the site dictated by the preferences of users, seem like a paragon of decentralization: an open source mentality that promises to finally empower users. In practice, however, this latest incident is just another reminder that the ultimate goal appears to be minimal central moderation of even the most obvious exploits of a platform.

When reached for comment, Bluesky’s press office mostly repeated the language of the apology it had posted on the platform itself. Unlike in that thread, however, Bluesky admitted that a “mistake” had occurred, and the company added that an “incident report to increase transparency and accountability” is in the works and will be published soon.

If this is true, it could represent a small step forward in restoring trust with a user base that, for now, is disproportionately made up of marginalized communities badly burned by the failures of moderation on large platforms. But skepticism is warranted. What Bluesky corporate has presented up to this point is communication that could have been generated by ChatGPT: the vague, anodyne language of its vanishingly rare public statements on the matter verges on the embarrassing. Those statements barely qualify as promises because they border on the unfalsifiable. But the report offers the prospect of something concrete, especially if it includes meaningful action items.

Chief among the changes that the site’s most prolific users and protesters have clamored for is the expansion of Bluesky’s Trust and Safety Team. I asked for more specific, concrete information about the current T&S team, and what the company hoped to expand it into. The reply I received suggested that it would compromise the T&S team’s safety to give details, despite the fact that I did not ask for names or other identifying personal information, merely numbers of staff, roles, and the like, as well as a vision for what the team could look like in the coming months. The one detail I did receive was that moderation is a 24/7 exercise for the platform, and that, by the company’s telling, most reports are cleared within 24 hours. As to growth, it suggested only that it is always on the lookout for new talent.

For the moment, it bodes poorly. In the past, Bluesky CEO Jay Graber described the idea of centralized moderation as “a bit like resolving all disputes at the level of the Supreme Court.” There’s a certain inborn suspicion of centralized moderation here, with a clear desire to slough off as much as possible onto end users. Bluesky’s reply to my questions, meanwhile, added a little more nuance: “We have opinionated rules detailed in our community guidelines,” they wrote, “which includes prohibiting race, sexual, gender and gender-identity based harassment and more. Content that violates these policies is taken down.” This, then, is the floor. Anything beyond that is built up by users.

Does the N-word lapse suggest such a model is unworkable? It does, at least, suggest either a threadbare commitment to maintaining that floor or a threadbare staff for doing so. It also can’t help but call into question the team’s diversity. How on earth did no one predict this? Once again, new social media platforms seem eager to prove the “ontogeny recapitulates phylogeny” theory: every new platform undergoes the evolution of all social media in miniature, with all the ugly, painful, phantasmagorical paroxysms that implies.

Bluesky’s future is cloudy for many reasons. It is as disputatious and drama-driven as Twitter ever was, with one anonymous power user telling me, “I think the only thing that keeps it from being as nasty as Twitter is that knife fights are not being algorithmically encouraged.” It’s a very small blessing, as the site becomes increasingly consumed by gestural activism, dominated by people who mistake posting for praxis.

In short, Bluesky needs to better maintain its floor, and Reddit has to give its mods the power to raise the ceiling.

If there’s hope for Bluesky, however, it lies in some of the platform’s power users, like Kairi, one of the storied trans shitposters who joined the site early on. As well as being funny (and endearing; her budding romance with fellow Bluesky user Bennie was inspirational to many), she has slipped into a kind of volunteer moderator role with her “Contraption,” which is really a cluster of mute lists she shares with other users. It has proven so effective that she frequently fields requests to add bigoted or otherwise troublesome Bluesky users to these lists to help improve the site for others. For all her devil-may-care bravura, it’s a role she seems to take seriously.

In that way, she’s rather like the /r/GirlGamers mods—nerdy and civic-minded all at once. Such people are the backbone of any attempt to decentralize moderation, and they need tools and institutional support in order to do their jobs, as Reddit’s recent struggles make clear. But users like Kairi can’t hold up the (blue)sky all on their own; after all, mute lists maintained by a private user are subject to bias, drama, and misfires, and will only go so far. Even the most civic-minded of users will also need the support of an effective central moderation team to ensure the site’s baseline remains reasonable. And as some have already pointed out, Bluesky’s affordances for user-driven moderation remain quite poor.

Of course, even if Bluesky ever achieves a delicate homeostasis of moderation, it could all go right out the window once the site is no longer invitation-only—never mind once it properly federates. But that’s a crisis for another day, it seems.

Correction July 20, 2023 3:45PM EDT: An earlier version of this article incorrectly stated that Reddit had limited access to its internal Automoderator tool.