Public Scrutiny of Automated Decisions
Early Lessons and Emerging Methods
Aaron Rieke, David Robinson, and Miranda Bogen
Report

Automated decisions are increasingly part of everyday life, but how can the public scrutinize, understand, and govern them? This report from Upturn and the Omidyar Network maps out the landscape, providing practical examples and a framework for thinking about what has worked.
Our key findings in this report are:
Today’s automated decisions are socio-technical in nature: They emerge from a mix of human judgment, conventional software, and statistical models. The non-technical properties of these systems — for example, their purpose and constraining policies — are just as important as, and often more important than, their technical particulars. Automated systems vary in their goals and design, and demand different kinds of inquiry.
Scrutiny doesn’t have to be sophisticated to be successful. Many of the most notable case studies we identified involved investigative reporting and basic observation of a system’s purpose, policies, inputs, and outputs. Such approaches have led to productive public attention. However, more technically sophisticated types of scrutiny are beginning to bear fruit, especially in the realm of “black box testing.”
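To make the idea of black-box testing concrete, the sketch below shows one common probing strategy: submit matched pairs of inputs that differ in exactly one attribute and measure the gap in the system’s outputs. Everything here is invented for illustration; the `score` function is a hypothetical stand-in for an opaque decision system, not any system studied in this report.

```python
# A minimal sketch of black-box testing via matched-pair probing.
# `score` is a hypothetical opaque decision system; its fields and the
# hidden zip-code dependence are invented for illustration only.
import random

def score(applicant: dict) -> float:
    """Hypothetical opaque scorer (stand-in for a real system's API)."""
    base = 0.5 + 0.01 * applicant["years_experience"]
    # Hidden dependence on zip code, which probing aims to surface.
    return base - (0.2 if applicant["zip_code"].startswith("1") else 0.0)

def mean_score_gap(n_trials: int = 1000) -> float:
    """Average output gap between pairs identical except for zip code."""
    gap = 0.0
    for _ in range(n_trials):
        years = random.randint(0, 20)
        a = {"years_experience": years, "zip_code": "10001"}
        b = {"years_experience": years, "zip_code": "90210"}
        gap += score(b) - score(a)
    return gap / n_trials

if __name__ == "__main__":
    # A persistent nonzero gap suggests the attribute matters to the system.
    print(f"Mean score gap across matched pairs: {mean_score_gap():.3f}")
```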
There are promising new methods for designing more accountable systems, but these remain largely theoretical. Researchers are working hard on new ways to detect bias in datasets, to design predictive models that are interpretable, and to verify the behavior of important software. However, these techniques — many of which require proactive cooperation from institutions of interest — are likely to remain in the lab until civil society makes a clearer case for their adoption.
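As one simple instance of the dataset bias checks mentioned above, the sketch below computes a demographic parity gap: the difference in positive-outcome rates across groups. The column names and toy data are assumptions for illustration; the report itself surveys a much broader set of techniques.

```python
# A minimal sketch of a dataset bias check: the demographic parity gap.
# Column names ("group", "outcome") and the toy rows are illustrative.
from collections import defaultdict

def parity_gap(rows: list[dict]) -> float:
    """Largest difference in positive-outcome rate between any two groups."""
    positives: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        positives[row["group"]] += row["outcome"]
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

data = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "A", "outcome": 0}, {"group": "B", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
# Group A's rate is 2/3, group B's is 1/3, so the gap is about 0.33.
print(f"Demographic parity gap: {parity_gap(data):.2f}")
```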
Many existing legal and regulatory frameworks remain relevant to automated systems’ behavior and output, but their applicability is often unclear or untested. Some laws have recently been updated to address automated decisions specifically, but those updates remain largely untried in practice. Others may require updating to remain effective in an era of widespread automation.
Related Work
Our empirical research showed that Facebook’s ad delivery algorithms effectively differentiate the price of reaching a user based on their inferred political alignment with the advertised content, inhibiting political campaigns’ ability to reach voters with diverse political views.
Our empirical research showed that Facebook’s “Special Audiences” ad targeting tool can reflect demographic biases. We provide experimental proof that removing demographic features from a real-world algorithmic system’s inputs can fail to prevent biased outputs; a toy illustration of this failure mode appears after this list.
In The Atlantic, we argue that digital platforms — which deliver vastly more ads than their newsprint predecessors — are making core civil-rights laws increasingly challenging to enforce.
Our empirical research showed that Facebook itself can skew the delivery of job and housing ads along race and gender lines, even when advertisers target broad audiences.
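The toy simulation below illustrates the failure mode described in the “Special Audiences” entry above: a decision rule that never reads a demographic attribute can still produce divergent outcomes by group when a correlated proxy carries the same information. All field names and data are invented; this is not the methodology of the underlying study.

```python
# A minimal sketch of proxy leakage: removing a demographic feature does
# not prevent biased outputs when a correlated feature stands in for it.
# The population, fields, and decision rule are all invented.
import random

random.seed(0)

# Synthetic population in which "neighborhood" correlates 90% with group.
people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if random.random() < 0.9:
        neighborhood = group
    else:
        neighborhood = "B" if group == "A" else "A"
    people.append({"group": group, "neighborhood": neighborhood})

def decide(person: dict) -> int:
    """'Blind' rule that never reads `group`, only the proxy feature."""
    return 1 if person["neighborhood"] == "A" else 0

def approval_rate(group: str) -> float:
    members = [p for p in people if p["group"] == group]
    return sum(decide(p) for p in members) / len(members)

# Outcomes still diverge sharply by group (roughly 0.90 vs 0.10).
print(f"Group A approval rate: {approval_rate('A'):.2f}")
print(f"Group B approval rate: {approval_rate('B'):.2f}")
```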