Automated Systems and Discrimination: Comments at FTC Public Forum for Commercial Surveillance and Data Security

These comments were delivered at the FTC’s September 8 public forum on commercial surveillance and data security.

AJL’s “Who Audits the Auditors?” report lays out policy recommendations for independent algorithmic auditing.

And the details of the rulemaking matter a lot.  Too often, well-intentioned regulation has weaknesses that commercial surveillance companies, with their hundreds of lawyers, can easily exploit.  Looking at proposals through an algorithmic justice lens can highlight where they fall short.

For example, here’s how the proposed American Data Privacy and Protection Act (ADPPA) consumer privacy bill stacks up against AJL’s recommendations:

  1. ADPPA doesn’t require independent auditing, instead allowing companies like Facebook to do their own algorithmic impact assessments.  And government contractors acting as service providers for ICE and law enforcement don’t even have to do algorithmic impact assessments!

    Update, September 13: Color of Change’s Black Tech Agenda notes “By forcing companies to undergo independent audits, tech companies can address discrimination in their decision-making and repair the harm that algorithmic bias has done to Black communities regarding equitable access to housing, health care, employment, education, credit, and insurance.”

  2. ADPPA doesn’t require affirmative consent to being profiled – or even offer the opportunity to opt out.  
  3. ADPPA doesn’t mandate any public disclosure of the algorithmic impact assessments – not even summaries or key components.
  4. ADPPA doesn’t have any requirement to consider real-world harms in the assessments – or even to measure their impact.
  5. ADPPA doesn’t have any requirement at all to involve external stakeholders in the assessment process – let alone to directly involve the stakeholders most likely to be harmed by AI systems.
  6. ADPPA allows anybody at a company to do an algorithmic impact assessment – and it’s not even clear whether it gives the FTC rulemaking authority to evaluate and accredit assessors or auditors.

For more detail, see “ADPPA’s algorithmic impact assessments are too weak to protect civil rights – but it’s not too late to strengthen them.”

More positively, other regulation in the US and elsewhere shows that these issues can be addressed.  Rep. Yvette Clarke’s Algorithmic Accountability Act of 2022, for example, requires companies performing algorithmic impact assessments (AIAs) “to the extent possible, to meaningfully consult (including through participatory design, independent auditing, or soliciting or incorporating feedback) with relevant internal stakeholders (such as employees, ethics teams, and responsible technology teams) and independent external stakeholders (such as representatives of and advocates for impacted groups, civil society and advocates, and technology experts) as frequently as necessary.”

AJL’s recommendation of directly involving the stakeholders most likely to be harmed by AI systems also applies to the process of creating regulations.  In “Hiding OUT: A Case for Queer Experiences Informing Data Privacy Laws,” A. Prince Albert III (Policy Counsel at Public Knowledge) suggests using queer experiences as an analytical tool to test whether a proposed privacy regulation protects people’s privacy.  Stress-testing privacy legislation with a queer lens is an example of how effective this approach can be at highlighting needed improvements.  It’s also vital to look at how proposed regulations affect pregnant people, rape and incest survivors, immigrants, non-Christians, disabled people, Black and Indigenous people, unhoused people, and others who are being harmed today by commercial surveillance.

So I implore you, as you continue the rulemaking process: please make sure that the historically underserved communities most harmed by commercial surveillance are at the table – and are being listened to.