Ad Delivery Algorithms
The Hidden Arbiters of Political Messaging
Aaron Rieke, Muhammad Ali, Piotr Sapiezynski, Aleksandra Korolova, and Alan Mislove
Political campaigns are increasingly turning to digital advertising to reach voters. Digital ad platforms empower advertisers to target messages to platform users with great precision, including through inferences about those users' political affiliations. However, prior work has shown that platforms' ad delivery algorithms can selectively deliver ads within these target audiences in ways that lead to demographic skews along race and gender lines, often without the advertiser's knowledge.
In this study, we investigate the impact of Facebook's ad delivery algorithms on political ads. We run a series of political ads on Facebook and measure how Facebook delivers those ads to different groups, depending on an ad's content (e.g., the political viewpoint featured) and targeting criteria. We find that Facebook's ad delivery algorithms effectively differentiate the price of reaching a user based on that user's inferred political alignment with the advertised content, inhibiting political campaigns' ability to reach voters with diverse political views. The effect is most acute when advertisers use small budgets, because Facebook's delivery algorithm preferentially delivers to the users it estimates to be most relevant to the ad.
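To make this effective-price framing concrete, the sketch below computes the budget consumed per thousand users reached in each inferred-alignment group, for two hypothetical ads with identical budgets and targeting but opposite political content. All figures, group labels, and delivery breakdowns are invented for illustration; they are not measurements from the study.

```python
# Hypothetical delivery outcomes for two ads with identical targeting and
# budget but opposite political content. All numbers are invented.
BUDGET_USD = 100.0

delivery = {
    "liberal_ad":      {"liberal_users": 7000, "conservative_users": 3000},
    "conservative_ad": {"liberal_users": 3200, "conservative_users": 6800},
}

for ad, reached in delivery.items():
    for group, n in reached.items():
        # Effective price of reaching this group with this ad: the ad's
        # budget divided by the number of users reached in that group.
        print(f"{ad:16s} {group:20s} {n:5d} users  "
              f"${BUDGET_USD / n * 1000:6.2f} per 1,000 users")
```

Comparing the same group across the two ads shows the differentiation: in this toy example, reaching 1,000 liberal users consumes roughly twice as much budget with the conservative ad as with the liberal one, and vice versa.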
Our findings point to advertising platforms' potential role in exacerbating political polarization and creating informational filter bubbles. Furthermore, some large ad platforms have recently changed their policies to restrict the targeting tools they offer to political campaigns; our findings show that such reforms will be insufficient if the goal is to ensure that political ads reach users of diverse political views. These results add urgency to calls for more meaningful public transparency into the political advertising ecosystem.
Related Work
Our empirical research showed that Facebook’s “Special Audiences” ad targeting tool can reflect demographic biases. We provided experimental evidence that removing demographic features from a real-world algorithmic system’s inputs can fail to prevent biased outputs.
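The mechanism behind that result is worth spelling out: when a removed demographic feature is correlated with features the system still sees, a similarity-based audience expansion can reconstruct the skew of its seed audience. Below is a toy sketch of this proxy effect on entirely synthetic data, using a deliberately simplistic expansion rule; it is not Facebook's actual algorithm, and every name and number in it is hypothetical.

```python
# Toy sketch: an audience-expansion system that never reads the protected
# attribute can still reproduce a skewed seed audience via a correlated
# proxy feature. Entirely synthetic data; not Facebook's actual system.
import random

random.seed(0)

# Synthetic population of 10,000 people: ZIP code strongly proxies group.
population = []
for zip_code in ("00001", "00002"):
    p_group_a = 0.9 if zip_code == "00001" else 0.1
    for _ in range(5000):
        group = "A" if random.random() < p_group_a else "B"
        population.append({"group": group, "zip": zip_code})

# Seed audience (e.g., an advertiser's past customers) skews to group A.
seed = [p for p in population if p["group"] == "A"][:1000]

# "Demographics-blind" expansion: select the people whose remaining
# feature (ZIP) matches the seed's majority. The group field is never read.
majority_zip = max(("00001", "00002"),
                   key=lambda z: sum(p["zip"] == z for p in seed))
expanded = [p for p in population if p["zip"] == majority_zip]

share_a = sum(p["group"] == "A" for p in expanded) / len(expanded)
print(f"Group A share of expanded audience: {share_a:.0%}")  # ~90%: skew persists
```

The expansion rule here is crude on purpose; the point is that any model expressive enough to exploit the proxy can exhibit the same failure, which is why stripping demographic inputs alone does not guarantee unbiased outputs.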
In a paper presented at the 2020 ACM Conference on Fairness, Accountability, and Transparency, we describe how and when private companies collect or infer sensitive attribute data, such as a person’s race or ethnicity, for antidiscrimination purposes.
We filed a legal brief arguing that Section 230 should not fully immunize Facebook’s Ad Platform from liability under California and D.C. laws prohibiting discrimination. The brief describes how Facebook itself, independently of its advertisers, participates in the targeting and delivery of financial services ads based on gender and age.
We also filed a legal brief arguing that Section 230 should not fully immunize Facebook’s Ad Platform from liability under a California antidiscrimination law.