{"id":4135,"date":"2022-11-22T18:07:37","date_gmt":"2022-11-22T18:07:37","guid":{"rendered":"https:\/\/2024.thenexus.today\/index.php\/2022\/11\/22\/consent-automated-systems-and-discrimination\/"},"modified":"2024-01-20T05:23:39","modified_gmt":"2024-01-20T05:23:39","slug":"consent-automated-systems-and-discrimination","status":"publish","type":"post","link":"https:\/\/2024.thenexus.today\/index.php\/2022\/11\/22\/consent-automated-systems-and-discrimination\/","title":{"rendered":"Consent, Automated Systems, and Discrimination (FTC Comments)"},"content":{"rendered":"<p><em>Submitted to regulations.gov a couple of hours before last night&#8217;s deadline. \u00a0An earlier version of points 2-4 appeared in the <a href=\"__GHOST_URL__\/comments-for-ftc-public-forum-for-commercial-surveillance\/\">extended remix of my public comments<\/a> in September.<\/em><\/p>\n<p>Thank you for your attention to the pressing issues of commercial surveillance and data security. \u00a0As the author of the <a href=\"https:\/\/thenexusofprivacy.net\">Nexus of Privacy Newsletter<\/a>, I write about commercial surveillance and other connections between technology, policy, and justice. \u00a0My career includes <a href=\"http:\/\/web2.cs.columbia.edu\/~junfeng\/08fa-e6998\/sched\/readings\/prefix.pdf\">founding a successful software engineering startup<\/a>; serving as General Manager of Competitive Strategy at Microsoft; and <a href=\"https:\/\/cacm.acm.org\/blogs\/blog-cacm\/91829-computers-freedom-and-privacy-in-a-networked-society-june-15-18-in-san-jose-and-cyberspace\/fulltext\">co-chairing the ACM Computers, Freedom, and Privacy Conference<\/a>. 
In fall 2021, I was a member of the <a href=\"https:\/\/watech.wa.gov\/privacy\/projects-and-initiatives\">Washington state Automated Decision-making Systems Workgroup<\/a>, and I have testified about privacy legislation in over a dozen Washington state legislature hearings.<\/p>\n<p>My comments focus primarily on consent, discrimination, algorithmic error, and automated systems, although they also touch on questions related to data minimization and other aspects of privacy. \u00a0In summary:<\/p>\n<ol>\n<li><strong>Consent is a vital complement to data minimization and completely prohibiting some commercial surveillance activities. \u00a0Opt-out is not meaningful affirmative consent, and an opt-in approach to regulation will enhance innovation. (Questions 26, 73-81)<\/strong><\/li>\n<li><strong>Algorithmic error and discrimination are pervasive across multiple sectors \u2013 and the harms fall disproportionately on the most vulnerable people. (Questions 53, 57, 65, 66, 67)<\/strong><\/li>\n<li><strong>The FTC should build on the recommendations of Algorithmic Justice League\u2019s <a href=\"https:\/\/facctconference.org\/static\/pdfs_2022\/facct22-126.pdf\">Who Audits the Auditors<\/a>, the White House OSTP\u2019s <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/algorithmic-discrimination-protections-2\/\">Blueprint for an AI Bill of Rights<\/a>, and the <a href=\"https:\/\/medium.com\/artificial-intelligence-ai-for-social-impact\/xx-22faff63228f\">California Privacy Protection Agency\u2019s AI equity work<\/a>. (Questions 41-46, 56, 67)<\/strong><\/li>\n<li><strong>The FTC should develop its regulations working with the people most likely to be harmed by commercial surveillance \u2013 and prioritize their needs. 
(Questions 29, 39, 43)<\/strong><\/li>\n<\/ol>\n<figure class=\"kg-card kg-image-card kg-card-hascaption\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/gX-DC9Ea0jEkbudFoZwZUrvXQh2AsbW5f3RFA1amhWx_xj9PdXBZwqbZwO4VJcMOb17B_DM-f_Dg4wCK0OYSY2HUa_Q_YUzbUwC99mMizcc10r8HBVaBnI2clN0oP9WSwNDJjb0DtgPWLyassF3mgPVEW3eQWDzeqLjCX_oIIpoR6GntdnkVphViV8PDLA\" class=\"kg-image\" alt=\"A person saying &quot;Technologies reflect the biases of the makers and implicit rules of society&quot; with a photo of an incarcerated Black person in the background\" loading=\"lazy\" width=\"624\" height=\"347\"><figcaption>Malkia Cyril, at Personal Democracy Forum 2017, originally tweeted by @anxiaostudio<\/figcaption><\/figure>\n<h3 id=\"1-consent-is-a-vital-complement-to-data-minimization-and-completely-prohibiting-some-commercial-surveillance-activities-opt-out-is-not-meaningful-affirmative-consent-and-an-opt-in-approach-to-regulation-will-enhance-innovation-questions-26-73-81\">1. Consent is a vital complement to data minimization and completely prohibiting some commercial surveillance activities. \u00a0Opt-out is not meaningful affirmative consent, and an opt-in approach to regulation will enhance innovation. (Questions 26, 73-81)<\/h3>\n<p>Privacy scholarship and practical experience have clearly shown the limitations of purely consent-based approaches. \u00a0Businesses that profit from commercial surveillance will use misleading tactics to get people to consent, bombard people with requests to induce \u201cconsent fatigue\u201d, or bribe or coerce consent. \u00a0So commercial surveillance activities with known discriminatory and human rights impacts like face surveillance, social scoring, and crime prediction should be prohibited. 
\u00a0The \u201cunacceptable risk\u201d category of the EU\u2019s AI Act is a good starting point, although, as <a href=\"https:\/\/edri.org\/our-work\/the-eus-artificial-intelligence-act-civil-society-amendments\/\">the civil society amendments available on EDRI\u2019s site<\/a> highlight, it needs to be expanded \u2013 for example, it doesn\u2019t currently prohibit emotion recognition.<\/p>\n<p>However, there are many commercial surveillance activities that are not likely to be prohibited by these regulations. \u00a0Location tracking is one good example. While this data can certainly be abused, it can also power very useful services; different people will make different tradeoffs as to whether and when they want to be tracked. \u00a0First-party targeted advertising is another. \u00a0People who trust a company may well wish to see more-relevant ads, both for their own use and to help the company\u2019s business. \u00a0On the other hand, people who do not have a trust relationship with the company (or distrust some of the service providers the company uses) may not want to have their data used to target ads.<\/p>\n<p>In situations like this, consent is crucial \u2013 and by consent, I mean affirmative, informed consent, also known as \u201copt in\u201d.<\/p>\n<p>\u201cOpt out\u201d approaches, by contrast, <em>assume <\/em>consent. \u00a0As a Washington state resident said in testimony in a 2021 state legislative hearing, an &#8220;opt-out&#8221; approach lets anybody come into your house and rummage around in your drawers \u2013 without being invited in \u2013 until you tell them to go away.<\/p>\n<p>\u201cOpt out\u201d also leads to major biases, for example against disabled people. 
Even if regulation requires opt-out pages and privacy policies to be accessible, there\u2019s no reason to believe that they will be in practice; after all, <a href=\"https:\/\/www.jdsupra.com\/legalnews\/court-finds-domino-s-pizza-violated-the-2182635\/\">courts have found that non-accessible websites violate the ADA<\/a>, but an astonishing <a href=\"https:\/\/webaim.org\/projects\/million\/\">98.6% of all web home pages have accessibility errors<\/a> \u2013 an average of over 50 errors per page.<\/p>\n<p>People who have limited reading or technology skills and people who are not native English speakers are also heavily impacted by opt out. \u00a0And while advanced opt-out approaches such as a global privacy control help techies who have their own devices and don\u2019t need assistive technologies, they do not fully address the problem.<\/p>\n<p>Commercial surveillance providers often object to opt-in, saying it will be too hard to use, but the extremely positive response to Apple\u2019s App Tracking Transparency clearly shows that\u2019s a lie. The reality is that most tech companies haven\u2019t devoted any effort at all to making opt-in easy. \u00a0After all, current privacy regulation is almost exclusively opt-out, so their business interests are better served by putting their effort into making opt-out as hard as possible.<\/p>\n<p>So opt-in is also an excellent example of the opportunities for new regulations to enhance innovation \u2013 and enhance the development of products that protect our privacy.<\/p>\n<h3 id=\"2-algorithmic-error-and-discrimination-is-pervasive-across-multiple-sectors-%E2%80%93-and-the-harms-fail-disproportionately-on-the-most-vulnerable-people-questions-53-57-65-66-and-67\">2. 
Algorithmic error and discrimination are pervasive across multiple sectors \u2013 and the harms fall disproportionately on the most vulnerable people (Questions 53, 57, 65, 66, and 67)<\/h3>\n<p>Technology can be liberating \u2026 but it can also reinforce and magnify existing power imbalances and patterns of discrimination. \u00a0As documented by researchers like Dr. Safiya Noble in <em>Algorithms of Oppression<\/em> and Dr. Joy Buolamwini, Timnit Gebru, and Inioluwa Deborah Raji in the <em>Gender Shades <\/em>project, today\u2019s algorithmic systems are error-prone and the datasets they\u2019re trained on have significant biases.<\/p>\n<p>These errors \u2013 and the discrimination these systems cause \u2013 are prevalent across all sectors. For example, Color of Change&#8217;s <a href=\"https:\/\/act.colorofchange.org\/sign\/black-tech-agenda\">Black Tech Agenda<\/a> talks about \u201cthe harm that algorithmic bias has done to Black communities regarding equitable access to housing, health care, employment, education, credit, and insurance.&#8221; \u00a0Similarly, facial recognition errors led to the arrest of innocent Black people like <a href=\"https:\/\/www.wired.com\/story\/wrongful-arrests-ai-derailed-3-mens-lives\/\">Nijeer Parks, Robert Williams, and Michael Oliver<\/a>; and <a href=\"https:\/\/www.vox.com\/recode\/2019\/8\/15\/20806384\/social-media-hate-speech-bias-black-african-american-facebook-twitter\">algorithms that detect hate speech online are biased against Black people<\/a>. 
\u00a0Of course, it is not only Black communities that are harmed by this \u2013 \u00a0<a href=\"https:\/\/www.theverge.com\/2018\/3\/21\/17144260\/healthcare-medicaid-algorithm-arkansas-cerebral-palsy\">health care allocation algorithms discriminate against disabled people<\/a>, <a href=\"https:\/\/www.law.georgetown.edu\/american-criminal-law-review\/in-print\/volume-56-number-4-fall-2019\/the-biased-algorithm-evidence-of-disparate-impact-on-hispanics\/\">automated risk assessments discriminate against Hispanic people<\/a>, <a href=\"https:\/\/www.freep.com\/story\/news\/local\/michigan\/2021\/03\/26\/judge-unemployment-midas-false-fraud-fast-enterprises-csg\/7014975002\/\">fraud-detection systems harm unemployed people<\/a>, \u2026 the list goes on.<\/p>\n<p>And as this list indicates, the harms are usually much greater for the historically underserved communities the FTC has committed to protecting.<\/p>\n<p>Regulation is clearly needed, and it needs to be designed to protect the people who are most at risk. \u00a0However, the traditional list of \u201cprotected classes\u201d is not sufficient. Some algorithmic discrimination is intersectional \u2013 for example, Gender Shades and follow-on work highlight that facial recognition is even more inaccurate for Black women. \u00a0And other forms of algorithmic discrimination can disproportionately harm groups like unemployed people or <a href=\"https:\/\/www.propublica.org\/article\/yieldstar-rent-increase-realpage-rent\">renters<\/a> who are not considered \u201cprotected classes\u201d.<\/p>\n<h3 id=\"3-the-ftc-should-build-on-the-recommendations-of-algorithmic-justice-league%E2%80%99s-who-audits-the-auditors-the-white-house-ostp%E2%80%99s-blueprint-for-an-ai-bill-of-rights-and-the-california-privacy-protection-agency%E2%80%99s-ai-equity-work-questions-41-46-56-67\">3. 
The FTC should build on the recommendations of Algorithmic Justice League\u2019s <a href=\"https:\/\/facctconference.org\/static\/pdfs_2022\/facct22-126.pdf\">Who Audits the Auditors<\/a>, the White House OSTP\u2019s <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/algorithmic-discrimination-protections-2\/\">Blueprint for an AI Bill of Rights<\/a>, and the <a href=\"https:\/\/medium.com\/artificial-intelligence-ai-for-social-impact\/xx-22faff63228f\">California Privacy Protection Agency\u2019s AI equity work<\/a> (Questions 41-46, 56, 67)<\/h3>\n<figure class=\"kg-card kg-image-card\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/8ZgUZa-eWxHLTxakpROY30ejjvypMAzzfl3VH1oASb_hz9pJUvl47bdu48smoCGb62qc20gxq6EhziXW1CvwsqglFWTqnjywg4WUzJber3cv9O9WFP1zFRQjexMHKOpM3ET9h0ULH-l0ZrwDqvXE0oaf-tYJ1JMVINsjNiFBA2lwIQLVTvmJwLxA7UfX-A\" class=\"kg-image\" alt=\"A grid of six colored boxes, each with the one of the policy recommendations listed above\" loading=\"lazy\" width=\"624\" height=\"371\"><\/figure>\n<p>There are a lot of challenges in regulating algorithms and automated decision systems. 
\u00a0Fortunately, there is a lot of excellent work out there for the FTC to build on.<\/p>\n<p>The Algorithmic Justice League\u2019s founder Joy Buolamwini, research collaborator Inioluwa Deborah Raji, and Director of Research &amp; Design Sasha Costanza-Chock (known for their work on Design Justice) recently collaborated on <a href=\"https:\/\/facctconference.org\/static\/pdfs_2022\/facct22-126.pdf\">Who Audits the Auditors: Recommendations from a field scan of the algorithmic auditing ecosystem<\/a>, the first comprehensive field scan of the artificial intelligence audit ecosystem.<\/p>\n<p>The policy recommendations in <em>Who Audits the Auditors<\/em> highlight key considerations in several specific areas where FTC rulemaking could have a major impact.<\/p>\n<ol>\n<li>Require the owners and operators of AI systems to engage in independent algorithmic audits against clearly defined standards.<\/li>\n<li>Notify individuals when they are subject to algorithmic decision-making systems.<\/li>\n<li>Mandate disclosure of key components of audit findings for peer review.<\/li>\n<li>Consider real-world harm in the audit process, including through standardized harm incident reporting and response mechanisms.<\/li>\n<li>Directly involve the stakeholders most likely to be harmed by AI systems in the algorithmic audit process.<\/li>\n<li>Formalize evaluation and, potentially, accreditation of algorithmic auditors.<\/li>\n<\/ol>\n<p>The <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\">Blueprint for an AI Bill of Rights<\/a> announced in September by the White House Office of Science and Technology Policy (OSTP) is also very valuable. 
\u00a0The detailed recommendations in the <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/algorithmic-discrimination-protections-2\/\">Algorithmic Discrimination Protections<\/a>, <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/data-privacy-2\/\">Data Privacy<\/a>, and <a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/notice-and-explanation\/\">Notice and Explanation<\/a> sections relate directly to many of the questions in the ANPR.<\/p>\n<p>And the details of the rulemaking matter a lot. \u00a0Too often, well-intended regulation has weaknesses that commercial surveillance companies, with their hundreds of lawyers, can easily exploit. \u00a0Looking at proposals through an algorithmic justice lens can highlight where they fall short.<\/p>\n<p>For example, the <em>Who Audits the Auditors<\/em> recommendations highlight ways that the proposed American Data Privacy and Protection Act (ADPPA) consumer privacy bill falls short of effective regulation:<\/p>\n<ol>\n<li>ADPPA doesn&#8217;t require independent auditing, instead allowing companies like Facebook to do their own algorithmic impact assessments. \u00a0And government contractors acting as service providers for ICE and law enforcement don&#8217;t even have to do algorithmic impact assessments! 
\u00a0As Color of Change&#8217;s <a href=\"https:\/\/act.colorofchange.org\/sign\/black-tech-agenda\">Black Tech Agenda<\/a> notes, &#8220;By forcing companies to undergo independent audits, tech companies can address discrimination in their decision-making and repair the harm that algorithmic bias has done to Black communities.&#8221;<\/li>\n<li>ADPPA doesn\u2019t require affirmative consent to being profiled \u2013 or even offer the opportunity to opt out.<\/li>\n<li>ADPPA doesn\u2019t mandate any public disclosure of its algorithmic impact assessments \u2013 not even summaries or key components.<\/li>\n<li>ADPPA doesn&#8217;t have any requirement for including real-world harms \u2013 or even measuring the impact.<\/li>\n<li>ADPPA doesn&#8217;t have any requirement at all to involve external stakeholders in the assessment process \u2013 let alone directly involving the stakeholders most likely to be harmed by AI systems.<\/li>\n<li>ADPPA allows anybody at a company to do an algorithmic impact assessment \u2013 and it&#8217;s not even clear whether it grants the FTC rulemaking authority for potential evaluation and accreditation of assessors or auditors.<\/li>\n<\/ol>\n<p>Taking all of this valuable work into account starting early in the process will lead to more effective regulations.<\/p>\n<h3 id=\"4-the-ftc-should-develop-its-regulations-working-with-the-people-most-likely-to-be-harmed-by-commercial-surveillance-%E2%80%93-and-prioritize-their-needs-questions-29-39-43\">4. The FTC should develop its regulations working with the people most likely to be harmed by commercial surveillance \u2013 and prioritize their needs (Questions 29, 39, 43)<\/h3>\n<p>As Afsaneh Rigot points out in <a href=\"https:\/\/www.belfercenter.org\/publication\/design-margins\">Design From the Margins<\/a>, a design process that centers the most impacted and marginalized users from ideation to production results in outcomes that are highly beneficial for all users and companies. 
\u00a0Rigot focuses on product design, and AJL\u2019s recommendations make a similar point about involving the stakeholders most likely to be harmed by AI systems in the auditing process; the same principle applies to rulemaking. \u00a0For example, A. Prince Albert III\u2019s <a href=\"https:\/\/publicknowledge.org\/hiding-out-a-case-for-queer-experiences-informing-data-privacy-laws\">Hiding OUT: A Case for Queer Experiences Informing Data Privacy Laws<\/a> suggests using queer experiences as an analytical tool to test whether a proposed privacy regulation protects people\u2019s privacy, and <a href=\"__GHOST_URL__\/adppa-with-a-queer-lens\/\">Stress-testing privacy legislation with a queer lens<\/a> illustrates the insights from this approach.<\/p>\n<p>So as the FTC develops these regulations, it\u2019s critical to involve the people who will be most impacted \u2013 and to make sure their needs are prioritized. \u00a0In addition to protected classes, it&#8217;s also vital to look at how proposed regulations affect pregnant people, rape and incest survivors, immigrants, unhoused people, and others who are being harmed <em>today<\/em> by commercial surveillance. \u00a0Regulations that protect them will wind up protecting <em>everybody.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Submitted to regulations.gov a couple of hours before last night&#8217;s deadline. \u00a0An earlier version of points 2-4 appeared in the extended remix of my public comments in September. Thank you for your attention to the pressing issues of commercial surveillance and data security. 
\u00a0As the author of the Nexus of Privacy Newsletter, I write about [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[421,16,1],"tags":[462,463,464,468],"class_list":["post-4135","post","type-post","status-publish","format-standard","hentry","category-software","category-tales-from-the-net","category-uncategorized","tag-algorithmic-justice","tag-automated-decision-systems","tag-ftc","tag-public-comment"],"_links":{"self":[{"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/posts\/4135","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/comments?post=4135"}],"version-history":[{"count":1,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/posts\/4135\/revisions"}],"predecessor-version":[{"id":4343,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/posts\/4135\/revisions\/4343"}],"wp:attachment":[{"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/media?parent=4135"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/categories?post=4135"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/2024.thenexus.today\/index.php\/wp-json\/wp\/v2\/tags?post=4135"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}