Blocklists in the fediverse


It would be great if Mastodon and other fediverse software had other good tools for dealing with harassment and abuse to complement instance-level blocking – and, ideally, reduce the need for blocklists.

But it doesn’t, at least not yet.  

That needs to change, and in an upcoming installment I’ll talk about some straightforward short-term improvements that could have a big impact. Realistically, though, it’s not going to change overnight – and a heck of a lot of people want alternatives to Twitter right now.

So despite the costs of instance-level blocking, and the potential harms of blocklists, they’re the only currently-available solution for dealing with the hundreds of Nazi instances – and thousands of weakly-moderated instances, including some of the biggest, where moderators frequently don’t take action on racist, anti-Semitic, anti-Muslim, and similarly bigoted content. As a result, today’s fediverse is very reliant on them.

Steps towards better instance blocking and blocklists

“Notify users when relationships (follows, followers) are severed, due to a server block, display the list of impacted relationships, and have a button to restore them if the remote server is unblocked”

– Mastodon CTO Renaud Chaput, discussing the tentative roadmap for the upcoming Mastodon 4.3 release

Since the fediverse is likely to continue to rely on instance blocking and blocklists at least for a while, how can they be improved? Mastodon 4.3’s planned improvements to instance blocking are an important step. Improvements in the announcements feature (currently a “maybe” for 4.3) would also make it easier for admins to notify people about upcoming instance blocks. Hopefully other fediverse software will follow suit.

Another straightforward improvement along these lines would be an option to have new federation requests initially accepted in “limited” mode. By reducing exposure to racist content, this would likely reduce the need for blocking.  
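To make this concrete, here’s a minimal sketch of what that default might look like, assuming a hypothetical policy hook that server software calls the first time it sees a new domain. All of the names here are illustrative – this isn’t Mastodon’s actual code or API:

```python
# Hypothetical policy hook: decide how to treat a domain the server
# has never federated with before. Names are illustrative, not any
# real Mastodon/Lemmy API.

KNOWN_GOOD = {"example-friend.social"}   # domains an admin has already vetted
BLOCKLIST = {"nazi.example"}             # suspended outright

def initial_federation_policy(domain: str) -> str:
    """Return the moderation state to apply to a newly seen domain."""
    if domain in BLOCKLIST:
        return "suspend"      # no federation at all
    if domain in KNOWN_GOOD:
        return "normal"       # full federation
    # The proposed default: accept the federation request, but in
    # "limited" mode -- posts don't appear in public timelines until
    # a moderator reviews the instance and upgrades it to "normal".
    return "limited"
```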

For blocklists themselves, one extremely important step is to let new instance admins know that they should consider initially blocking worst-of-the-worst instances – and that their members are likely to get hit with a lot of abuse if they don’t – and to offer them some choices of blocklists to use as a starting point. This is especially important for friends-and-family instances, which don’t have paid admins or moderators. Hosting companies play a critical role here – especially if Mastodon, Lemmy, and other software platforms continue not to support this functionality directly (although obviously it would be better if they did!).
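As a sketch of what a hosting company or setup script could do today: Mastodon’s admin API includes a domain_blocks endpoint, so seeding a starter blocklist can be a short script. The instance URL, token handling, and CSV column names below are assumptions for illustration:

```python
# Minimal sketch: seed a brand-new Mastodon instance with a starter
# blocklist via the admin API's domain_blocks endpoint. Assumes an
# admin access token with the admin:write:domain_blocks scope; the
# starter-list file and its columns are illustrative.

import csv
import requests

INSTANCE = "https://my-new-instance.example"   # hypothetical
TOKEN = "ADMIN_ACCESS_TOKEN"                   # keep out of source control!

def seed_blocklist(path: str) -> None:
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # columns: domain, severity, comment
            resp = requests.post(
                f"{INSTANCE}/api/v1/admin/domain_blocks",
                headers={"Authorization": f"Bearer {TOKEN}"},
                data={
                    "domain": row["domain"],
                    "severity": row["severity"],        # "suspend" or "silence"
                    "public_comment": row["comment"],   # reason, publicly visible
                },
            )
            resp.raise_for_status()

seed_blocklist("starter-blocklist.csv")
```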

Since people from marginalized communities are likely to face the most harm from blocklist abuse, involvement of people from different marginalized communities in creating and reviewing blocklists is vital. One obvious short-term step is for blocklist curators – and instance admins whose blocklists are used as inputs to aggregated blocklists – to provide the reasons instances are on the list, check where receipts exist (and potentially provide access to them, at least in some circumstances), re-calibrate suspension vs. silencing in some cases, and so on. Independent reviews are likely to catch problems that a blocklist creator misses, and an audit trail of bias and accuracy reviews could make it much easier for instances to check whether a blocklist has known biases and mistakes before deploying it. Of course, this is a lot of work; asking marginalized people to do it as volunteers is relying on free labor, so who’s going to pay for it?
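For concreteness, here’s one possible shape for a blocklist entry that carries its reasons, receipts, and review trail – a purely hypothetical format that no fediverse software consumes today:

```python
# Illustrative only: one way a blocklist entry could carry its reasons
# and review history. No fediverse software consumes this format today.

entry = {
    "domain": "harassment-haven.example",      # hypothetical instance
    "severity": "suspend",                     # vs. "silence" (limit)
    "reasons": ["racist harassment", "no moderation of reports"],
    "receipts": [
        # Links to evidence, where it exists and can be shared.
        # Often these will be partial or private, as discussed above.
        "https://example.com/screenshot-2023-09-01",
    ],
    "reviews": [
        # An audit trail: who looked at this entry, when, and what they found.
        {"reviewer": "independent-review-team", "date": "2023-10-01",
         "finding": "receipts confirmed; severity appropriate"},
    ],
}
```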

A few other directions worth considering:

  • More nuanced control over when to automatically apply blocklist updates could limit the damage from bugs or mistakes.  For example, Rich Felker has suggested that blocklist management tools should have safeguards to prevent automated block actions from severing relationships without notice.
  • Providing tags for the different reasons that instances are on blocklists – or blocklists that focus on a single reason for blocking, like Seirdy’s BirdSiteLive and bird.makeup blocklist – could make it much easier for admins to use blocklists as a starting point, for example by distinguishing instances that are on a blocklist for racist and anti-trans harassment from instances that are there only because of CW or bot policies that the blocklist curator considers overly lax.
  • Providing some access to receipts, and an attribution trail of who has independently decided an instance should be blocked, would help admins and independent reviewers make better judgments about which blocklist entries they agree with. As discussed above, receipts are a complicated topic, and in many situations may be only partial and/or not broadly shareable; but as Seirdy’s FediNuke.txt list shows, there are quite a few situations where they are likely to be available.
  • Shifting to a view of a blocklist as a collection of advisories or recommendations, and providing tools for instances to better analyze them, could help mitigate harm in situations where biases do occur. Emelia Smith’s work in progress on FIRES (Fediverse Intelligence Recommendations & Replication Endpoint Server) is a valuable step in this direction; a rough sketch of the idea follows this list.
  • Learning from experiences with email blocking and IP blocking – and, where possible, building on infrastructure that already exists.
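Here’s the rough sketch promised above, pulling several of these ideas together: advisories with reason tags and an attribution trail, consumed by an instance that applies its own policy, requires multiple independent sources, and never auto-applies suspensions. Everything here is hypothetical – FIRES and real tools will look different:

```python
# A rough sketch of "blocklist as advisories": each advisory carries
# reason tags and an attribution trail, and the consuming instance
# decides its own policy. Entirely hypothetical format and names.

from dataclasses import dataclass

@dataclass
class Advisory:
    domain: str
    tags: list[str]        # e.g. ["racist-harassment"], ["lax-cw-policy"]
    sources: list[str]     # who independently recommended blocking
    severity: str          # recommended action: "suspend" or "silence"

# This instance's policy: act only on harassment-related tags, require
# at least two independent sources before suspending, and treat
# CW/bot-policy disagreements as "silence" at most.
ACT_ON = {"racist-harassment", "anti-trans-harassment"}

def decide(advisory: Advisory) -> str:
    if not ACT_ON & set(advisory.tags):
        return "ignore"                 # mere policy differences
    if advisory.severity == "suspend" and len(set(advisory.sources)) >= 2:
        return "queue-suspend"          # held for admin review and user
                                        # notice, never auto-applied (per
                                        # Rich Felker's suggestion above)
    return "silence"

advisories = [
    Advisory("nazi.example", ["racist-harassment"], ["list-a", "list-b"], "suspend"),
    Advisory("chatty.example", ["lax-cw-policy"], ["list-a"], "suspend"),
]
for a in advisories:
    print(a.domain, "->", decide(a))
```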

What about blocklists for individuals (instead of instances)?

Good question. In situations where only a handful of people on an instance are behaving badly, blocking or muting them individually can limit the harm while still allowing connections with others on that instance. Most fediverse software provides the ability to block and mute individuals, so it’s kind of surprising that Twitter-like shared blocklists and tools like Block Party haven’t emerged yet. It wouldn’t surprise me if that changes in 2024.
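As a proof of concept, a Block Party-style tool for Mastodon doesn’t need much: the client API already has account lookup and block endpoints. The shared-blocklist file format below is an assumption, as are the instance URL and token:

```python
# A minimal sketch of a Block Party-style tool for Mastodon, using two
# client API endpoints that exist today: account lookup and block.
# The shared-blocklist file format is hypothetical.

import requests

INSTANCE = "https://your.instance"     # hypothetical
TOKEN = "USER_ACCESS_TOKEN"            # needs the write:blocks scope
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def block_handle(handle: str) -> None:
    """Resolve user@domain to a local account id, then block it."""
    r = requests.get(f"{INSTANCE}/api/v1/accounts/lookup",
                     headers=HEADERS, params={"acct": handle})
    r.raise_for_status()
    account_id = r.json()["id"]
    requests.post(f"{INSTANCE}/api/v1/accounts/{account_id}/block",
                  headers=HEADERS).raise_for_status()

# Apply a shared, one-handle-per-line blocklist (hypothetical format).
with open("shared-blocklist.txt") as f:
    for line in f:
        if line.strip() and not line.startswith("#"):
            block_handle(line.strip())
```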

Identifying bias in blocklists

One of the biggest concerns about blocklists is the possibility of systemic bias. Algorithmic systems tend to magnify biases, so “consensus” blocklists require extra scrutiny – but bias can creep into manually curated lists as well.

Algorithmic audits (a structured approach to detecting biases and inaccuracies) are one good way to reduce risks – although again, who’s going to pay for it? Additional curation and review (by an intersectionally-diverse team of people from various marginalized perspectives) could also be helpful. Hrefna has some excellent suggestions as well, such as preprocessing inputs to add additional metadata and treating connected sources (for example, blocklists from instances with shared moderators) as a single source. And there are a lot of algorithmic justice experts in the fediverse, so it’s also worth exploring anti-oppressive algorithms specifically designed to detect and reduce biases.
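Here’s a small sketch of that “connected sources” idea: merge blocklist sources that share moderators into clusters, then count clusters rather than raw sources when measuring consensus. The moderator and blocklist data are made up for illustration:

```python
# Sketch of Hrefna's suggestion: before counting how many sources block
# a domain, merge sources that aren't really independent (for example,
# blocklists from instances with shared moderators). Illustrative data.

from collections import defaultdict

# Which moderators run which source blocklists (hypothetical).
moderators = {
    "list-a": {"mod1", "mod2"},
    "list-b": {"mod2"},          # shares mod2 with list-a -> same cluster
    "list-c": {"mod3"},
}

# Union-find: merge sources that share any moderator.
parent = {s: s for s in moderators}
def find(s):
    while parent[s] != s:
        s = parent[s]
    return s
def union(a, b):
    parent[find(a)] = find(b)

sources = list(moderators)
for i, a in enumerate(sources):
    for b in sources[i + 1:]:
        if moderators[a] & moderators[b]:
            union(a, b)

# Count *clusters*, not raw sources, behind each blocked domain.
blocks = {"list-a": {"bad.example"}, "list-b": {"bad.example"},
          "list-c": {"bad.example", "meh.example"}}
clusters_blocking = defaultdict(set)
for src, domains in blocks.items():
    for d in domains:
        clusters_blocking[d].add(find(src))

for domain, clusters in clusters_blocking.items():
    print(domain, "independent clusters:", len(clusters))
# bad.example counts 2 independent clusters (a+b merged, plus c),
# not 3 raw sources.
```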

Of course, none of these approaches are magic bullets, and they’ve all got complexities of their own. When trying to analyze a blocklist’s biases against trans people, Black people, Jews, and Muslims, for example:

  • There’s no census of the demographics of instances in the fediverse, so it’s not clear how to determine whether trans-led, Black-led, Jewish-led, or Muslim-led instances (or instances that host a lot of trans, Black, Jewish, and/or Muslim people) are overrepresented.
  • Large instances like mastodon.social are sources of a lot of racism, anti-Semitism, Islamophobia, and so on. If a blocklist doesn’t at least limit them, does that mean it’s inherently biased against Black people? If a blocklist does limit or block them, then it’s blocking the largest cis-white-led instances … so does that mean it’s statistically not biased against trans- and Black-led instances?
  • Suppose the largest Jewish instance is a source of false reporting about pro-Palestinian posts. If it appears on the blocklist, is that evidence of anti-Jewish bias? After all, it’s the largest Jewish instance! But if it doesn’t appear, is that evidence of anti-Palestinian bias? And more generally, when looking at whether a blocklist is biased against Jews or Muslims, whose definitions of anti-Semitism and Islamophobia get used?
  • What about situations where differing norms (for example, whether spamming #FediBlock counts as grounds for defederation, or whether certain jokes are racist or just good clean fun) disproportionately affect Black people and/or trans people?
  • What about intersectional aspects, such as biases against Black women or trans people of color?

Which brings us back to a point I made earlier:

“It would be great if Mastodon and other fediverse software had other good tools for dealing with harassment and abuse to complement instance-level blocking – and, ideally, reduce the need for blocklists.”

To be continued!

Up next, a discussion of The Bad Space, a catalog of instances that can be used as the basis for various tools – including blocklists.

Here’s a sneak preview:

The Bad Space’s web site at thebad.space provides a web interface that makes it easy to look up an instance to see whether concerns have been raised about its moderation, and an API (application programming interface) making the information available to software as well. The Bad Space currently has over 3300 entries – a bit over 12% of the 24,000+ instances in the fediverse….

The alpha version of The Bad Space had been available for a while (I remember looking up an instance on it early in the summer after an unpleasant interaction with a racist user) and there wasn’t a lot of discussion about it until mid-September… at which point suddenly there was a lot of discussion of The Bad Space. Some of it has revolved around very valid questions and concerns about the project, with some good criticisms and suggestions for improvements. But a lot of it has been … heated….

It’s possible to talk about The Bad Space without being racist or anti-trans – but it’s not as easy as it sounds

To see new installments as they’re published, follow @thenexusofprivacy@infosec.exchange or subscribe to the Nexus of Privacy newsletter.

Notes

1 I’m using LGBTQIA2S+ as a shorthand for lesbian, gay, gender non-conforming, genderqueer, bi, trans, queer, intersex, asexual, agender, two-spirit, and others who are not straight, cis, and heteronormative. Julia Serano’s trans, gender, sexuality, and activism glossary has definitions for most of the terms, and discusses the tensions between ever-growing and always-incomplete acronyms and more abstract terms like “gender and sexual minorities”. OACAS Library Guides’ Two-spirit identities page goes into more detail on this often-overlooked intersectional aspect of non-cis identity.

2 Suicide-baiting: telling or encouraging somebody to kill themselves.

3 Some instances share moderators, or have moderators who are friends with each other. And even if there’s no connection between instances, if somebody announces a blocking decision on #FediBlock, other instances are likely to block as well. Of course they should verify the claims before deciding to block; but in situations where that doesn’t happen and they just take the original poster’s word for it, then the additional protection of requiring multiple independent blocking decisions is illusory.