ADPPA’s algorithmic impact assessments are too weak to protect civil rights — but it’s not too late to strengthen them

A covered entity or a service provider may not collect, process, or transfer covered data in a manner that discriminates or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex or disability.

– American Data Privacy and Protection Act (ADPPA), Section 207(a)

Civil rights groups and privacy advocates are justifiably excited about the inclusion of civil rights protections in ADPPA, the consumer privacy bill currently making its way through Congress. It’s the first time this century that consumer privacy legislation has advanced from committee, and a 53-2 vote for the principle of privacy as a civil right is a huge milestone.

Still, rights are only meaningful if companies that violate them can be held accountable. ADPPA’s approach to that starts with algorithmic impact assessments (AIAs).

Done right, AIAs can play an important role in regulation. Rep. Yvette Clarke’s Algorithmic Accountability Act of 2022, which has broad support from privacy and civil liberties groups, relies on AIAs. How the California Privacy Protection Agency (CPPA) Advances Equity in AI explores how impact assessments complement other protections against algorithmic discrimination. Prof. Andrew Selbst’s recent Harvard Law Review paper An Institutional View of Algorithmic Impact Assessments discusses reasons AIAs can be an attractive regulatory option.

But that’s only if they’re done right.  

ADPPA’s AIAs won’t protect civil rights unless they’re strengthened

Unfortunately, ADPPA’s requirements for AIAs got significantly weaker in the latest version, including the removal of a vital requirement that AIAs be performed by external, independent auditors or researchers.  As Selbst says, relying on good-faith collaboration from the companies being regulated shifts the compliance approach “from enforcement to encouragement.”

Call me cynical, but Facebook has continued to run discriminatory housing ads for years despite repeatedly promising to fix the problem,* failed the civil rights audit it commissioned without making significant changes in response, and is now whitewashing its recent human rights report in India.  “Encouraging” them – or other serial bad actors – to do better isn’t going to protect civil rights.  We’ve known since last century that self-regulation doesn’t work for privacy; see, for example, Privacy Self-Regulation: A Decade of Disappointment (EPIC, 2005) and Many Failures – A Brief History of Privacy Self-Regulation (World Privacy Forum, 2011).

And on top of that, ADPPA’s AIAs have some significant loopholes – including an exemption for government contractors and other service providers.

Even worse, ADPPA’s approach to accountability also mostly ends with the algorithmic impact assessments.**  In particular, there are two crucial gaps:  

  • ADPPA doesn’t include whistleblower protections.  As Dr. Timnit Gebru of the Distributed AI Research Institute discusses in the Kapor Center’s Mobilizing for Racial Justice: Protecting Our Communities from Algorithmic Bias, these are a key complement to other aspects of data protection regulation.  
  • ADPPA leaves out a key piece of California’s CPRA: the right to opt out of automated decision systems, including profiling.  

So as it stands right now, ADPPA won’t be able to deliver on the promise of civil rights protections.

Is “responsible AI” anything more than a PR-friendly buzzword?

The good news is that if Congress really does want to make “privacy is a civil right” a reality and protect people against discrimination, there are several straightforward and politically achievable changes that could significantly improve ADPPA’s civil rights protections.  

But those changes are a lot more likely to happen if at least a few big tech companies support them.  So before we get to that, let’s talk about big tech companies that are committed to “responsible AI” and the like.

Microsoft’s a good example.  The company commits to Responsible AI principles including fairness, privacy, inclusiveness, and accountability; publishes its internal Responsible AI Standard; provides a responsible AI impact assessment template and guide, as well as many additional resources and tools; and does extensive research on responsible AI.  IBM’s a good example too: they co-founded the Responsible Computing consortium.  Many other big tech companies commit to one or more of responsible AI, responsible computing, or ethical AI.***

And yet, when we read about Microsoft and IBM’s role in ADPPA, it’s about the way they’re lobbying to weaken other areas of the bill.  Not only that, there are reports that they and their allies in the Business Software Alliance are also behind the successful push to weaken the civil rights protections, including the AIAs, in the latest version.  How responsible is that?  How ethical is that?

It’s frustrating because pushing for weak regulation that doesn’t protect civil liberties clashes not only with their values but with their strategy. Stronger regulation gives a strategic advantage to companies like Microsoft that have a strong compliance culture, a “responsible AI” brand and the resources to back it up, and world-class research and engineering organizations.  

So if you’re an employee or customer of a company committed to responsible AI, now’s a good time to push for the company to take a stance that aligns with its stated values and gives it a potential strategic advantage.  

And if you’re in “responsible AI” (or “ethical AI”, or “responsible computing”) you’ve got more leverage – and, quite frankly, a bigger responsibility, if you want those labels to be anything more than PR-friendly buzzwords.  Look for others in your company who might be supportive, including any employee resource groups interested in protecting against discrimination, and explore possibilities for joint action.

Politics is the art of the possible

ADPPA’s next step is a House floor vote.****  Right now, closed-door negotiations are happening between lobbyists, staffers, legislators, and people from privacy and civil rights organizations.  If they can come up with a version that seems likely to get enough votes to pass, then it’s up to Speaker Pelosi to decide whether to bring it to the floor.  

So this is a good time to be pushing Congress to improve the civil rights protections.  And it’s very likely that with more focus, at least some progress is possible.

Two straightforward improvements could come from undoing changes in the most recent version:

  • restoring the requirement that algorithmic impact assessments be conducted by independent, external researchers or auditors

    Update, September 13: Color of Change’s Black Tech Agenda notes “By forcing companies to undergo independent audits, tech companies can address discrimination in their decision-making and repair the harm that algorithmic bias has done to Black communities regarding equitable access to housing, health care, employment, education, credit, and insurance.”

  • removing the exemption for government contractors and other service providers

The only argument I’ve heard against these is the obvious red herring that they potentially create barriers for small companies – which is clearly nonsense, because ADPPA only requires AIAs of companies with $250,000,000 or more in annual revenue.

Whistleblower protections are easy to add.  California and Washington have already passed the Silenced No More Act.  Companies like Microsoft have announced that they’re going to provide those protections across the US, so it should be easy for them to support this change.  In State of Play: U.S. Privacy Law, Anonymous Writer in the Bay characterizes improving whistleblower protections as the “most likely” improvement, and highlights their importance:

Because of the opacity of these algorithms and complex nature of data collection, these whistleblowers are more important than ever for uncovering Big Tech’s/Big Data’s misdeeds.

And adding the ability for people to opt out of automated decision systems, including profiling, is also straightforward.  ADPPA’s supporters have said that even in its current form it won’t preempt California’s existing laws, so adding similar requirements to ADPPA won’t add any new regulatory burden for companies doing business in California.  One straightforward approach is simply to add language similar to CPRA’s; there may also be an opportunity to make some additional improvements.

There are also a lot of other ways to strengthen ADPPA’s civil rights protections; the upcoming Assessing the Assessments will use the Algorithmic Justice League’s recommendations in Who Audits the Auditors? to highlight other areas for improvement.  Still, if we can get these four, it would get ADPPA much closer to delivering on the promise of strong civil rights protections.

So now’s a great time to call your representative, or leave them a message through their web form.  It doesn’t have to be fancy – even just “please only support the ADPPA consumer privacy bill if the requirement for external, independent auditors is restored, whistleblower protections are added, and exemptions for government contractors are removed” will get the message across.  Congress.gov lets you look up your representative based on your address – or here’s a directory if you know their name or what congressional district you live in.

And as I discussed above, if you work at a big tech company, make sure to send the same message internally!  

It can be challenging to find time for these kinds of actions – but they’re certainly worth doing. ADPPA’s civil rights protections are far too important to let them turn into privacy theater.


Image credit: originally published in Audit the algorithms that are ruling our lives, by Cathy O’Neil, Financial Times, 2018

Notes

* ProPublica first reported in 2016 that Facebook advertisers could run “whites only” ads, and Facebook promised to fix the problem.  In 2017 ProPublica tested again and reported that it wasn’t fixed; Facebook once again promised to fix it.  In 2019 Facebook settled with the National Fair Housing Alliance and the American Civil Liberties Union and promised to fix the problem.  Later in 2019, HUD sued Facebook over housing discrimination, alleging the company’s algorithms had made the situation even worse.  Facebook settled again in June 2022, and promised to fix the problem.  Hey wait a second, I’m noticing a pattern here!

** There are Algorithmic Design Evaluations, but the requirements are so vague that they’re not likely to have an impact from a regulatory perspective.  And ADPPA’s privacy impact assessments (Sec. 301(c) and (d)) could theoretically also help here, but as Ari Ezra Waldman discusses in How Big Tech Turns Privacy Laws Into Privacy Theater, these typically focus more on assessing litigation risks to the company than risks to consumers.

*** Google has a Responsible AI page and an AI Ethics page!  Then again, they pushed out ethical AI researcher Timnit Gebru and fired ethical AI lead Margaret Mitchell, so take their commitments with a grain of salt.

**** Which may or may not happen.  California – which adopted its own privacy law by referendum in 2020 – is not happy with ADPPA’s preemption of state consumer privacy laws.  To be fair, a lot of other states also aren’t happy about preemption; Washington’s AG and grassroots Indivisible activists in Washington have also sent letters to Congress, as has a coalition of a dozen AGs including Connecticut, Illinois, Maine, Massachusetts, Nevada, New Jersey, New Mexico, and New York.

But California is especially unhappy.  Both no votes in the committee were from California representatives, and California’s Governor, AG, state legislature leaders, and privacy authority have all sent letters to Speaker Pelosi and Minority Leader McCarthy — both of whom are from California.  So that’ll have to get sorted out for the bill to get a floor vote.  Still, most people think the bill has a good chance to get through the House, so we should certainly plan for the possibility that they find a “dramatic” way around this “impasse.”