Mastodon and today’s fediverse are unsafe by design and unsafe by default – and instance blocking is a blunt but powerful safety tool


Others remain active – and forks like glitch-soc continue to provide additional tools for people to protect themselves – but Mastodon’s pace of innovation had slowed dramatically by 2018.8  Mastodon still lacks basic safety functionality like Twitter’s ability to limit replies.9

Mastodon administrator screen managing instance-level blocking

“Instance-level federation choices are an important tool for sites that want to create a safer environment (although need to be complemented by user-level control and other functionality).”

–  

For some instances, defederating from Gab was based on norms: we don’t tolerate white supremacists, Gab embodies white supremacy, so we want nothing to do with them. For others, it was more a matter of safety: defederating from Gab cuts down on harassment. And for some, it was both.

Spamming #FediBlock is a good example of a situation where disagreement on a norm relates to Only Brown Mastodon’s point above about different views of defederation. Some people see spamming #FediBlock as interfering with a safety mechanism created by a queer Afro-Indigenous woman and used by many instance admins to help protect people against racist abuse – so grounds for defederation if admins don’t take action on it. Others see spamming #FediBlock as a protest against a mechanism they don’t like, or just something to do for lulz, so see these defederations as unfair and punitive.

Even when there’s apparent agreement on a norm, interpretations are likely to differ. Consider the situation Mahal discusses above. There’s wide agreement on the fediverse that anti-Semitism is bad; as Mahal says, people making real anti-Semitic comments and resorting to hate speech “absolutely deserve the boot.” But what happens when somebody makes a post about the situation in Gaza that Zionist Jews see as anti-Semitic and anti-Zionist Jews don’t? If the moderators don’t take the posts down, are they being anti-Semitic? Conversely, if the moderators do take them down, are they being anti-Palestinian? Is defederation (or limiting) appropriate – or is calling for defederation anti-Semitic? To me, as an anti-Zionist Jew, the answers seem clear;16 once again, though, opinions differ.

And (at the risk of sounding like a broken record) in many situations, moderators – or people discussing moderator decisions – don’t have the knowledge to understand why something is racist.  Consider this example, from @futurebird@sauropod.win’s excellent Mastodon Moderation Puzzles.

“You get 4 reports from users who all seem to be friends all pointing to a series of posts where the account is having an argument with one of the 4 reporters. The conversation is hostile, but contains no obvious slurs. The 4 reports say that the poster was being very racist, but it’s not obvious to you how.”

As a mod what do you do?

I saw a spectacular example of this several months ago, with a series of posts from white people questioning an Indigenous person’s identity, culture, and lived experiences. Even though it didn’t include slurs, multiple Indigenous people described it as racist … but the original posters, and many other white people who defended them, didn’t see it that way. The posts eventually got taken down, but even today I see other white people characterizing the descriptions of racism as defamatory.

So discussions about whether defederation (or limiting) is appropriate often become contentious in situations when …

  • an instance’s moderators frequently don’t take action when racist, misogynistic, anti-LGBTQ+, anti-Semitic, anti-Muslim, or casteist posts are reported – or only take action after significant pressure and a long delay
  • an instance hosts a known racist, misogynistic, or anti-LGBTQ+ harasser
  • an instance’s admin or moderator is engaging in – or has a history of engaging in – harassment
  • an instance’s admin or moderator has a history of anti-Black, anti-Indigenous, or anti-trans activity
  • an instance’s members repeatedly make false accusations that somebody is racist or anti-trans
  • an instance’s members try to suppress discussions of racist or anti-trans behavior by brigading people who bring the topics up (or spamming the #FediBlock hashtag)
  • an instance’s moderators retaliate against people who report racist or anti-trans posts
  • an instance’s moderator, from a marginalized background, is accused of having a history of sexual assault – but claims that it’s a false accusation, based on a case of mistaken identity
  • an instance’s members don’t always put content warnings (CWs) on posts with sexual images from their everyday lives17

Similarly, there’s often debate about if and when it’s appropriate to re-federate. What if an instance has been defederated because of concerns that an admin or moderator is a harasser who can’t be trusted, and then the person steps down? Or suppose multiple admittedly-mistaken decisions by an instance’s moderators that impacted other instances lead to the instance being silenced, but then a problematic moderator leaves and the remaining team works to improve its processes. At what point does it make sense to unsilence them? What if it turns out the processes haven’t improved, and/or more mistakes get made?

Transitive defederation – defederating from all the instances that federate with a toxic instance – is particularly controversial. Is it grounds for defederation if an instance federates with a white supremacist instance like Stormfront or Gab, or an anti-trans hate instance like Kiwi Farms? Many see federating with an instance that tolerates white supremacists as tolerating white supremacists; others don’t. Some agree that it’s tolerating white supremacists but don’t see that as grounds for defederation.

Norm-based transitive defederation can be especially contentious, but there can also be disagreements about safety-based transitive defederation.  In Why just blocking Meta’s Threads won’t be enough to protect your privacy once they join the fediverse, for example, I describe how indirect data flows could leave people at risk without transitive defederation. Similarly, Erin Kissane’s excellent Untangling Threads recommends that people wanting “reasonably sturdy protection” from hate groups on Threads consider being on an instance “that federates only with servers that also refuse to federate with Threads”. Sean Tilley’s Getting Tangled Up in Threads, however, describes admins’ desire to protect users from groups like Libs of TikTok (which, as Kissane notes, “named and targeted two hundred and twenty-two individual employees of schools or education organizations in just the first four months of 2022”) by transitively blocking Threads as a “problem”, and notes that many people are concerned that this “hysteria” could lead to fragmentation, “effectively doing Meta’s job for free.”18

To be continued!

Up next, a discussion of blocklists.  Here’s a sneak preview:

With hundreds of problematic instances out there, blocking them individually can be tedious and error-prone – and new admins often don’t know to do it.  Starting in early 2023, Mastodon began providing the ability for admins to protect themselves from hundreds of problematic instances at a time by uploading blocklists (aka denylists)….
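As a concrete illustration (not from the original post): Mastodon’s domain-block exports are CSV files whose header row uses columns like `#domain` and `#severity`, and an admin combining lists from several sources needs to deduplicate them somehow. The sketch below – the function name, sample domains, and the “keep the strictest severity” policy are all my assumptions for illustration – shows one way that merge could work:

```python
import csv
import io

# Severity levels from least to most restrictive, as used in
# Mastodon-style domain-block CSVs ("noop", "silence", "suspend").
SEVERITY_RANK = {"noop": 0, "silence": 1, "suspend": 2}

def merge_blocklists(*csv_texts):
    """Merge CSV blocklists, keeping the strictest severity per domain."""
    merged = {}
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            domain = row["#domain"].strip().lower()
            severity = row.get("#severity", "suspend").strip()
            if (domain not in merged
                    or SEVERITY_RANK[severity] > SEVERITY_RANK[merged[domain]]):
                merged[domain] = severity
    return merged

# Two hypothetical blocklists that disagree about one domain:
list_a = "#domain,#severity\nexample-hate.tld,suspend\nspammy.tld,silence\n"
list_b = "#domain,#severity\nspammy.tld,suspend\n"

print(merge_blocklists(list_a, list_b))
# {'example-hate.tld': 'suspend', 'spammy.tld': 'suspend'}
```

Of course, as the rest of this excerpt discusses, the hard part isn’t the mechanics of merging – it’s deciding which sources to trust in the first place.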

[W]idely-shared blocklists introduce risks of major harms – harms that are especially likely to fall on already-marginalized communities….

It would be great if Mastodon and other fediverse software had other good tools for dealing with harassment and abuse to complement instance-level blocking – and, ideally, reduce the need for blocklists. But they don’t, at least not yet….

So despite the costs of instance-level blocking, and the potential harms of blocklists, they’re the only currently-available solution for dealing with the hundreds of Nazi instances – and thousands of weakly-moderated instances, including some of the biggest, where moderators frequently don’t take action on racist, anti-LGBTQIA2S+, anti-Semitic, anti-Muslim, etc content.  As a result, today’s fediverse is very reliant on them.  

To see new installments as they’re published, follow @thenexusofprivacy@infosec.exchange or subscribe to the Nexus of Privacy newsletter.


Notes

1 According to fedidb.org, the number of monthly active fediverse users has decreased by about 20% since January 2022.

2 I’m using LGBTQIA2S+ as a shorthand for lesbian, gay, gender non-conforming, genderqueer, bi, trans, queer, intersex, asexual, agender, two-spirit, and others (including non-binary people) who are not straight, cis, and heteronormative. Julia Serano’s trans, gender, sexuality, and activism glossary has definitions for most of the terms, and discusses the tensions between ever-growing and always incomplete acronyms and more abstract terms like “gender and sexual minorities”. OACAS Library Guides’ Two-spirit identities page goes into more detail on this often-overlooked intersectional aspect of non-cis identity.

3 Which is why the footnote numbers are currently a bit strange: footnotes 4, 5, 6, 7, and 8 are in the yet-to-be-published revised intro. But then, my footnote numbers are often a bit strange … I’m sure by the time I’m done there will be footnotes with decimal points in the numbers.3.1

3.1 Like this!

8 Virtually all the issues and obvious next steps I discussed in 2017-18’s Lessons (so far) from Mastodon remain issues and obvious next steps today.

9 Bonfire’s boundaries support includes the ability to limit replies; Streams has supported limiting replies for years; GoToSocial plans to add this functionality early next year. Heck, even Bluesky has announced plans to add this. emceeaich’s 2020 github feature request Enable Twitter-style Reply Controls on a Per-Toot Basis includes a comment from glitch-soc maintainer ClearlyClaire describing some of the challenges implementing this in Mastodon; Claire’s Federation Enhancement Proposal 5624 and the discussion under it has a lot more detail.

10 It’s on the new privacy and reach screen, recently introduced in version 4.2. This screen isn’t available in Mastodon’s official iOS app (not sure about Android), so I’m not sure how many users even know about it.

And there’s a caveat here: if this setting’s enabled, Mastodon silently discards DMs, which often isn’t what’s wanted; auto-notification of “I don’t accept DMs” and/or the equivalent of Twitter’s message requests would make the functionality more useful.

11 authorized fetch, also known as “secure mode”. There are caveats here as well. For one thing, as the lengthy warning in Mastodon’s documentation describes, turning it on has some significant drawbacks. And even with it enabled, protection is still limited: public and unlisted posts can still be visible – to blocked users and everybody else – through a browser as long as you’re not logged in, unless the admin has turned on another option that also has drawbacks. This is similar to public Twitter or Instagram posts; but Twitter and Instagram offer the option of making your profile private, which means your posts are no longer visible, and Mastodon doesn’t have equivalent functionality. And unlisted Mastodon posts are visible to anybody browsing your profile with a not-logged-in browser, in contrast with unlisted YouTube videos. Not only that, unlisted Mastodon posts can even wind up in Google searches … talk about violating the principle of least surprise!

12 That’s right: this valuable anti-harassment functionality has been implemented for six years but Rochko refuses to make it broadly available. Does Mastodon really prioritize stopping harassment? has more, and I’ll probably rant about it at least one more time over the course of this series.

13 LIMITED_FEDERATION_MODE. And guess what, there’s another caveat: you can’t combine LIMITED_FEDERATION_MODE with instance-level blocking.
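For context on footnotes 11 and 13: both of these modes are controlled by environment variables in a Mastodon server’s configuration (typically `.env.production`). A sketch – variable names as documented for recent Mastodon versions, but check the docs for your version before relying on this:

```shell
# "Secure mode": require signed (authorized) fetches for content,
# which is what footnote 11's caveats are about.
AUTHORIZED_FETCH=true

# Allowlist-only federation: only federate with explicitly approved
# instances. As footnote 13 notes, this replaces – rather than
# combining with – instance-level blocking.
LIMITED_FEDERATION_MODE=true
```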

14 As I said a few months ago, describing an incident where an admin defederated an instance and then on further reflection decided it had been an overreaction,

“[A]fter six years why wasn’t there an option of defederating in a way that allows connections to be reestablished when the situation changes and refederation is possible? If you look in inga-lovinde’s Improve defederation UX March 2021 feature request on Github, it’s pretty clear that it’s not the first time stuff like this happened.”

And it wasn’t the last time stuff like this happened either. In mid-October, a tech.lgbt moderator decided to briefly suspend and unsuspend connections to servers that had been critical of tech.lgbt, in hopes that it would “break the tension and hostility the team had seen between these connections.” Oops. As the tech.lgbt moderators commented afterwards, “severing connections is NOT a way to break hostility in threads and DMs.”

15 transmisia – hate for trans people – is increasingly used as an alternative to transphobia. For more on the use of -misia instead of -phobia, see the discussion in Simmons University’s Anti-oppression guide.

16 For an in-depth exploration of this topic, see Judith Butler’s Parting Ways: Jewishness and the Critique of Zionism, which engages Jewish philosophical positions to articulate a critique of political Zionism and its practices of illegitimate state violence, nationalism, and state-sponsored racism.

17 woof.group’s guidelines, for example, don’t require CWs on textual posts unless “it’s likely to cause emotional distress for a general leather audience” – but others may have different standards for what causes emotional distress.

18 Should the Fediverse welcome its new surveillance-capitalism overlords? Opinions differ! has more on various opinions on whether or not to block Meta.