Improving privacy and safety in fediverse software

A resource page for a proposal to the NLNet NGI Zero Entrust (Trustworthiness and data sovereignty) grant. The initial version of this page is based on the project proposal.

This project uses a technique called “threat modeling” to identify ways that current fediverse software leaves people open to harassment, abuse, or data harvesting — and mitigations that can address these threats. It also includes funding for initial implementation of some high-priority, relatively-low-effort mitigations in at least one EU-based fediverse software platform.

The project consists of three phases:

  • Initial research: discussions identifying and prioritizing threats and potential mitigations with development teams, instance admins, moderators, and communities
  • Development of detailed threat models, including a final report with recommendations for software changes to improve privacy and safety
  • Implementation of some high-priority proposed mitigations in at least one EU-based fediverse software platform

In addition to the software improvements and the report, deliverable artifacts include the threat models themselves, as well as documentation of the process and open-source tools so that others can refine the models or develop their own.

Have you been involved with projects or organisations relevant to this project before? And if so, can you tell us a bit about your contributions?

Yes. I’ve done threat modeling work for over two decades, and have been active in the fediverse since 2011, including analyzing and writing about privacy and safety issues.

Most recently, “Threat modeling Meta, the fediverse, and privacy” (still in draft form) looks at potential mitigations to limit the amount of data a specific “threat actor” will be able to collect without consent if Facebook’s parent company Meta follows through on its plans for Threads to join the fediverse.  The recommendations at the bottom of the article include mitigations for developers and instance admins. [See the project web page at https://nexusofprivacy.net/ipsfs-nlnet or the attached “ipsfs-formatted-proposal.pdf” for links to these and other articles.]

“Social threat modeling and quote boosts on Mastodon” takes a similar approach, looking at mitigations for harassment and abuse; it illustrates how this technique applies to fediverse software, and also contains recommendations for developers.

“Don’t tell people “it’s easy”, and seven more things Kbin, Lemmy, and the fediverse can learn from Mastodon” and “Mastodon: a (partial) history” both discuss areas where current fediverse software needs improvement to better protect privacy and safety, along with other topics.

Previous threat modeling work includes 2018’s “Social Threat Modeling”, the 2017 SXSW presentation on Diversity-friendly software (joint work with Shireen Mitchell of Stop Online Violence Against Women), the 2007 National Academy of Sciences / CSTB report “Software for Dependable Systems: Sufficient Evidence?”, and 2003’s “Beyond Stack Smashing” in IEEE Security & Privacy.

I haven’t yet done significant fediverse software development work, so I plan to partner with one or more EU-based developers when developing the mitigations.

Compare your own project with existing or historical efforts.

Fediverse software development teams, with their limited resources, have not yet used threat modeling techniques to focus on privacy or safety. “Social threat modeling and quote boosts on Mastodon” and “Threat modeling Meta, the fediverse, and privacy” are (as far as I know) the only published fediverse-related threat models. Both are narrowly focused, very informal, and did not include any explicit requirements-gathering process. That said, they illustrate the value of this approach: identifying straightforward short-term improvements as well as areas where more research and development is needed.

This project builds on the learnings from that work, involves more stakeholders, and includes a development phase to implement high-priority mitigations. During the requirements stage, for example, we will solicit input (via surveys, interviews, and/or group discussions) from development teams working on newer fediverse software projects like Bonfire, Kbin, and Vocata (all based in the EU), as well as projects like GoToSocial and Hajkey that prioritize safety.

What are significant technical challenges you expect to solve during the project, if any?

The biggest challenge is that there are likely to be barriers to short-term implementation of some otherwise-attractive potential mitigations, due to design and implementation choices in current code bases or limitations of the underlying ActivityPub protocol. Fortunately, there are also likely to be many potential mitigations that are possible with the current code base or minimal changes. To identify which opportunities are most relevant for short-term implementation with the development resources allocated to the project, we will partner with an EU-based software project in the implementation phase; the report will identify opportunities for longer-term work that requires more resources.

In addition, threat modeling processes and open-source tools have historically focused on traditional security concerns, so adapting them to these more human-focused threats will require creativity and innovation.  A fallback, if that proves too difficult during the timeframe of the project, is to use simpler tools like diagrams and spreadsheets. In any case, the deliverable of documenting the tools and processes will help others build on this work.
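
As a purely illustrative sketch of that fallback approach, the rough Python snippet below shows how threat-model entries could be recorded in a simple, spreadsheet-like structure and sorted to surface high-impact, relatively-low-effort mitigations. The field names, example threats, and scores are hypothetical assumptions for illustration, not project deliverables.

    # Hypothetical sketch: threat-model entries as a simple, spreadsheet-like structure.
    # All field names, example threats, and scores are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ThreatEntry:
        threat: str      # what can go wrong (harassment vector, data harvesting, ...)
        affected: str    # who is affected (community members, moderators, admins, ...)
        mitigation: str  # proposed mitigation
        effort: int      # rough implementation effort, 1 (low) to 5 (high)
        impact: int      # rough privacy/safety benefit, 1 (low) to 5 (high)

    entries = [
        ThreatEntry("Unwanted quote boosts amplify harassment", "community members",
                    "Per-post control over who can quote", effort=3, impact=4),
        ThreatEntry("Public follower lists enable network mapping", "community members",
                    "Option to hide follower/following lists", effort=2, impact=3),
    ]

    # Surface high-impact, relatively-low-effort mitigations first.
    for e in sorted(entries, key=lambda e: (-e.impact, e.effort)):
        print(f"{e.threat} -> {e.mitigation} (impact {e.impact}, effort {e.effort})")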

Describe the ecosystem of the project, and how you will engage with relevant actors and promote the outcomes?

Which actors will you involve?  Who will deploy your solution to make it a success?

Key participants in the ecosystem include community members; moderators (who are likely to be the most aware of the overall landscape of harassment); development teams; instance admins; trust-and-safety-focused organizations such as IFTAS; and government agencies, civil society organizations, media, and businesses investigating or adopting the fediverse.

Engagement will start with discussions on the fediverse, using hashtags, groups, kbin magazines, and Lemmy communities. The research phase adds interviews and group discussion sessions with key stakeholders, as well as surveys and ongoing discussions in the fediverse. Circulating drafts of the threat models and proposed mitigations provides additional opportunities for engagement and visibility.

The project proposal includes funding for software development on at least one EU-based software platform to ensure at least some initial adoption. Just as importantly, this gives other software platforms an incentive to follow suit by improving privacy and safety.

Success for community members means giving them more control over their personal data and reducing the amount of harassment. Achieving this will require development teams to implement mitigations for identified threats; consistent engagement with teams committed to prioritizing privacy and safety increases the likelihood of this happening. In addition, ongoing engagement with moderators, instance admins, community members, and organizations adopting (or investigating) the fediverse is likely to lead to them encouraging development teams to prioritize these improvements.