What to do on a Wednesday night in San Francisco? BART over to the Longitude tiki bar in Oakland, of course, for a discussion with Ars editors Annalee Newitz and Cyrus Farivar and journalist Sarah Jeong (author of “The Internet of Garbage”) about online trolling and harassment! I had just been working on a wiki page with a handful of links about muting, blocking, and tools for people to protect themselves online as reference material for next week’s presentation on Supporting diversity with a new approach to software, so the lively discussion was particularly timely 🙂
The discussion covered a lot of ground, and it’s certainly worth checking out. Here’s the summary on Ars’ site, and I’ve got the full video below. In this post I’ll focus on the topic of technical approaches that can help with harassment — although as Sarah pointed out, none of these are full “solutions”. Still, incremental steps help, and combining enough of them could make a real impact.
Sarah gave a couple of examples of technologies that help: shared block lists on Twitter, and Riot Games’ use of impromptu “juries” of other players when somebody is reported for violating League of Legends’ code of conduct. Both of these innovations are forms of crowdsourced moderation: rather than having the site watch and control everything, they distribute the responsibility (and the workload). And both have had a positive, albeit limited, impact.
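To make the jury idea a bit more concrete, here’s a rough sketch of how that kind of review might be wired up. It’s purely illustrative: the names (Report, select_jury, verdict), the panel size, and the supermajority threshold are all my own assumptions, not how Riot’s actual system worked.

```python
# A rough sketch of jury-style crowdsourced moderation, loosely inspired by
# the kind of tribunal described above. Every name here (Report, select_jury,
# verdict) and the 70% supermajority threshold are hypothetical.
import random
from dataclasses import dataclass, field

@dataclass
class Report:
    reported_user: str
    chat_log: str
    votes: list[bool] = field(default_factory=list)  # True means "punish"

def select_jury(eligible_players: list[str], size: int = 9) -> list[str]:
    """Pick a random panel of players in good standing to review a report."""
    return random.sample(eligible_players, k=min(size, len(eligible_players)))

def verdict(report: Report, threshold: float = 0.7) -> str:
    """Punish only when a clear supermajority of the jury agrees."""
    if not report.votes:
        return "pending"
    punish_share = sum(report.votes) / len(report.votes)
    return "punish" if punish_share >= threshold else "no action"
```

Even at this toy level the appeal is visible: the platform only has to route reports and enforce outcomes, while the judgment calls (and the workload) land with the community.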
Shared block lists on Twitter are particularly interesting in that the initial implementations (BlockBot, flamin.ga, and Block Together) came from the community, rather than Twitter. Harassment and abuse have been a problem on Twitter for quite a while, and the company’s attempts to deal with it have often missed the mark — see for example Leigh Honeywell’s Another Six Weeks: Muting vs. Blocking and the Wolf Whistles of the Internet, describing Twitter’s disastrously bad first attempt at implementing muting. So when it came to shared block lists, you’d think that Twitter might have worked with the implementers from the community to learn from them. Instead, though, Twitter appears to have ignored them. Unsurprisingly, Twitter’s own implementation of shared block lists is not particularly useful.
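For what it’s worth, the core mechanic of a shared block list is conceptually simple; most of the hard work in the real tools is elsewhere. The sketch below covers only the sync step, with hypothetical names throughout: block_user is a stand-in for whatever call actually creates a block, and auth, rate limits, subscription management, and unblocking are all left out.

```python
# A hand-wavy sketch of the sync step behind a shared block list: a subscriber
# applies every block on the curated list that they don't already have.
# `block_user` is a placeholder for a real API call (e.g. POST blocks/create
# on Twitter); this is not how BlockBot or Block Together are actually built.

def sync_shared_blocks(shared_list: set[str],
                       already_blocked: set[str],
                       block_user) -> set[str]:
    """Apply the blocks this subscriber is missing; return the updated set."""
    for account_id in shared_list - already_blocked:
        block_user(account_id)
        already_blocked.add(account_id)
    return already_blocked

# Usage (hypothetical): sync_shared_blocks(curated_ids, my_blocks, api_block)
```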
Of course, Twitter’s not the only company that hasn’t paid attention to abuse until it became a problem. Google+ and Diaspora* both launched without any muting or blocking functionality. Microsoft didn’t even put the simplest countermeasures in place for its Tay bot. Storify didn’t consider how its notification functionality could be turned into a vector for harassment. The list goes on …
In fact, I can’t think of any social software that’s started by prioritizing giving people the tools to protect themselves from harassment and abuse.
And it’s probably not a coincidence that most targets of online harassment are women, Black and Latinx people, transgender people, and other marginalized groups — while the people creating (and funding) the software we use are disproportionately cis white and Asian guys. If the industry devoted even a fraction as much effort to this as to ad targeting, we’d no doubt have made a lot more progress.
Yes, it’s a hard problem. Still, there are plenty of approaches that can help – including crowdsourced moderation (where the state of the art has regressed since Slashdot’s work almost 20 years ago) and the techniques that the Coral project is experimenting with. So at some point, somebody’s going to get the “aha!” moment that there’s a huge opportunity to do better …
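For anyone who hasn’t seen Slashdot’s system, it boiled down to bounded comment scores plus reader-chosen thresholds. The snippet below is my own loose simplification (it leaves out moderation points, meta-moderation, and karma entirely), just to show how little machinery the basic idea needs.

```python
# My own loose simplification of Slashdot-style comment moderation: scores are
# clamped to the familiar -1..5 range, moderators nudge them by +1/-1, and
# readers filter at a threshold of their choosing.

SCORE_MIN, SCORE_MAX = -1, 5

def moderate(score: int, delta: int) -> int:
    """Apply a single +1/-1 moderation, clamped to the allowed range."""
    return max(SCORE_MIN, min(SCORE_MAX, score + delta))

def visible_comments(comments: list[tuple[str, int]], threshold: int = 1) -> list[str]:
    """Return only the comment texts at or above the reader's threshold."""
    return [text for text, score in comments if score >= threshold]
```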
I can’t wait!