“Algorithmic audits (or ‘AI audits’) are an increasingly popular mechanism for algorithmic accountability; however, they remain poorly defined. Without a clear understanding of audit practices, let alone widely used standards or regulatory guidance, claims that an AI product or system has been audited, whether by first-, second-, or third-party auditors, are difficult to verify and may potentially exacerbate, rather than mitigate, bias and harm.”
Returning to the ADPPA as an example: while the language in the current draft of the bill does require audits in many cases, the details fall short of these recommendations in several ways. For the bill to be effective at protecting civil rights, requirements like disclosing audit results and involving the stakeholders most affected by these systems will need to be added.
“[A]lthough many auditors consider analysis of real-world harm (65%) and inclusion of stakeholders who may be directly harmed (41%) to be important in theory, they rarely put this into practice.”
So, whether you’re thinking about it from a research, technology, policy, or strategy perspective, it’s well worth the time to delve into “Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem.” Fortunately, it’s a very readable paper with a wealth of information and an excellent reference list. Kudos to the authors, and to the Algorithmic Justice League for taking on projects like this.
And if you don’t have time right now to read the paper, no worries … the authors have a short video as well!