Awareness in Practice
Tensions in Access to Sensitive Attribute Data for Antidiscrimination
Aaron Rieke, Miranda Bogen, and Shazeda Ahmed
Report

In a paper presented at the 2020 ACM Conference on Fairness, Accountability, and Transparency (FAT* 2020), we describe how and when private companies collect or infer sensitive attribute data, such as a person's race or ethnicity, for antidiscrimination purposes.
This paper uses the domains of employment, credit, and healthcare in the United States to surface conditions that have shaped the availability of sensitive attribute data. For each domain, we describe how and when private companies collect or infer sensitive attribute data for antidiscrimination purposes. An inconsistent story emerges: Some companies are required by law to collect sensitive attribute data, while others are prohibited from doing so. Still others, in the absence of legal mandates, have determined that collection and imputation of these data are appropriate to address disparities.
Related Work
Across the Field

In The Atlantic, we argue that digital platforms, which deliver vastly more ads than their newsprint predecessors, are making core civil rights laws increasingly difficult to enforce.

Housing

We argued that HUD's proposed changes to its disparate impact rule would undermine crucial housing protections for vulnerable communities by limiting plaintiffs' ability to challenge the discriminatory effects of algorithmic models.

Across the Field

How and where, exactly, does big data become a civil rights issue? This report begins to answer that question, highlighting key instances where big data and civil rights intersect.

Across the Field

We filed a legal brief arguing that Section 230 should not fully immunize Facebook's Ad Platform from liability under a California antidiscrimination law.