Fair machine learning under partial compliance

In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 55–65.

Abstract

Typically, fair machine learning research focuses on a single decision maker and assumes that the underlying population is stationary. However, many of the critical domains motivating this work are characterized by competitive marketplaces with many decision makers. Realistically, we might expect only a subset of them to adopt any non-compulsory fairness-conscious policy, a situation that political philosophers call partial compliance. This possibility raises important questions: how do partial compliance and the consequent strategic behavior of decision subjects affect allocation outcomes? If k% of employers were to voluntarily adopt a fairness-promoting intervention, should we expect k% progress (in aggregate) towards the benefits of universal adoption, or will the dynamics of partial compliance wash out the hoped-for benefits? How might adopting a global (versus local) perspective impact the conclusions of an auditor? In this paper, we propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics. Our key findings are that at equilibrium: (1) partial compliance by k% of employers can result in far less than proportional (k%) progress towards the full-compliance outcome; (2) the gap is more severe when fair employers match global (vs. local) statistics; (3) the choice of local vs. global statistics can paint dramatically different pictures of compliant versus non-compliant employers' performance vis-à-vis fairness desiderata; (4) partial compliance based on local parity measures can induce extreme segregation. Finally, we discuss implications for auditors and insights concerning the design of regulatory frameworks.
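
To make the simulation setup concrete, the sketch below (Python/NumPy, not taken from the paper) illustrates the kind of market model the abstract describes: applicants from two groups, a set of employers of which a fraction `compliance` enforce a demographic-parity hiring rule while the rest hire purely by observed score, and an aggregate parity gap compared across compliance levels. The names `simulate_market` and `compliance`, the group score distributions, the quota-style fairness rule, the random (non-strategic) application behavior, and all parameter values are illustrative assumptions; the paper's actual model additionally includes strategic applicants and the local-versus-global distinction.

```python
# A minimal, illustrative sketch (NOT the paper's actual model): a one-shot
# employment market with two applicant groups, where a fraction `compliance`
# of employers enforce demographic parity in hiring and the rest hire purely
# on observed score. All distributions and parameters are assumptions made
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def simulate_market(n_employers=20, n_applicants=2000, compliance=0.5,
                    slots_per_employer=50, score_gap=0.5):
    """Return the aggregate hiring-rate gap between the two groups."""
    # Applicant groups: 0 (advantaged) and 1 (disadvantaged), roughly equal sizes.
    group = rng.integers(0, 2, size=n_applicants)
    # Observed scores differ by `score_gap` across groups (an assumption
    # standing in for historical disadvantage or biased measurement).
    score = rng.normal(loc=-score_gap * group, scale=1.0)

    # The first `compliance` fraction of employers are "fair" (parity-seeking).
    n_fair = int(round(compliance * n_employers))

    # Each applicant applies to one employer uniformly at random
    # (no strategic application behavior in this simplified sketch).
    applied_to = rng.integers(0, n_employers, size=n_applicants)

    hired = np.zeros(n_applicants, dtype=bool)
    for e in range(n_employers):
        pool = np.flatnonzero(applied_to == e)
        if pool.size == 0:
            continue
        k = min(slots_per_employer, pool.size)
        if e < n_fair:
            # Fair employer: fill slots proportionally from each group,
            # taking the top-scoring applicants within each group.
            hires = []
            for g in (0, 1):
                sub = pool[group[pool] == g]
                quota = int(round(k * sub.size / pool.size))
                hires.extend(sub[np.argsort(score[sub])[::-1][:quota]])
            hired[np.asarray(hires, dtype=int)] = True
        else:
            # Non-compliant employer: take the top-k scores, ignoring group.
            hired[pool[np.argsort(score[pool])[::-1][:k]]] = True

    rates = [hired[group == g].mean() for g in (0, 1)]
    return rates[0] - rates[1]  # demographic-parity gap in hiring rates

# Does 50% compliance buy 50% of the benefit of full compliance?
gap_none = simulate_market(compliance=0.0)
gap_half = simulate_market(compliance=0.5)
gap_full = simulate_market(compliance=1.0)
progress = (gap_none - gap_half) / (gap_none - gap_full + 1e-12)
print(f"parity gap: none={gap_none:.3f}, 50%={gap_half:.3f}, full={gap_full:.3f}")
print(f"progress toward full-compliance outcome at 50% compliance: {progress:.0%}")
```

Running `simulate_market` at compliance levels 0, 0.5, and 1 gives a crude way to pose the abstract's headline question: whether 50% compliance buys roughly 50% of the progress toward the full-compliance outcome. In this stripped-down sketch, which omits interaction and incentive effects, progress comes out roughly proportional; the paper's point is that those effects can make it far less so.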

Links

PhilArchive

Similar books and articles

Algorithmic Fairness from a Non-ideal Perspective.Sina Fazelpour & Zachary C. Lipton - 2020 - Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
Fairness and Fair Shares.Keith Horton - 2011 - Utilitas 23 (1):88-93.
Democratizing Algorithmic Fairness.Pak-Hang Wong - 2020 - Philosophy and Technology 33 (2):225-244.
Why Strict Compliance?Simon Căbulea May - 2021 - In David Sobel, Steven Wall & Peter Vallentyne (eds.), Oxford Studies in Political Philosophy Volume 7. Oxford University Press. pp. 227-264.
Society-in-the-loop: programming the algorithmic social contract.Iyad Rahwan - 2018 - Ethics and Information Technology 20 (1):5-14.
Different Perspectives on Cross-Compliance.Stefan Mann - 2005 - Environmental Values 14 (4):471-482.
To be fair.Benjamin L. Curtis - 2014 - Analysis 74 (1):47-57.

Analytics

Added to PP
2021-09-28

Downloads
397 (#47,601)

Last 6 months
104 (#35,834)

