AI’s fairness problem: understanding wrongful discrimination in the context of automated decision-making

Hugo Cossette-Lefebvre & Jocelyn Maclure - AI and Ethics (2022)

Abstract

The use of predictive machine learning algorithms to guide, or even make, decisions is increasingly common in both public and private settings. Some tout their use as a promising way to avoid discriminatory decisions, since algorithms are, allegedly, neutral and objective and can be evaluated in ways no human decision can. By fully or partly outsourcing a decision process to an algorithm, organizations should, in principle, be able to define the parameters of the decision clearly and to remove human biases. Yet, in practice, the use of algorithms can still produce wrongfully discriminatory decisions, owing to at least three of their features: the data-mining process and the categorizations they rely on can reproduce human biases; their automaticity and predictive design can lead them to rely on wrongful generalizations; and their opaque nature is at odds with democratic requirements. We highlight that the latter two features, and their significance for discrimination, are too often overlooked in the contemporary literature. Though these problems are not all insurmountable, we argue that it is necessary to clearly define the conditions under which a machine learning decision tool can be used. We identify and propose three main guidelines for properly constraining the deployment of machine learning algorithms in society: algorithms should be vetted to ensure that they do not unduly affect historically marginalized groups; they should not systematically override or replace human decision-making processes; and decisions reached using an algorithm should always be explainable and justifiable.
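
To make the first guideline concrete, here is a minimal sketch (not drawn from the paper) of the kind of statistical vetting it calls for: computing per-group selection rates for a binary decision rule and flagging a large gap under the conventional "four-fifths" rule of thumb. The function names, toy data, group labels, and 0.8 threshold are all illustrative assumptions, not the authors' proposal.

```python
# Illustrative sketch only: a minimal disparate-impact audit of a binary
# decision rule, in the spirit of vetting algorithms for their effect on
# marginalized groups. The data and the 0.8 threshold are assumptions.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the favourable-decision rate per group.

    `decisions` is a list of (group, outcome) pairs, outcome 1 = favourable.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's selection rate to the reference group's."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

# Toy data: (group, loan approved?)
decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
             ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.33 on this toy data
if ratio < 0.8:  # conventional four-fifths rule of thumb
    print("Flag for review: decisions disproportionately disfavour group B")
```

A check like this is only a screening device: a flagged disparity still requires the kind of contextual, justificatory scrutiny the paper argues for, rather than a purely statistical verdict.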

Links

PhilArchive

Similar books and articles

Decision Time: Normative Dimensions of Algorithmic Speed. Daniel Susser - forthcoming - ACM Conference on Fairness, Accountability, and Transparency (FAccT '22).
Making Sense of Discrimination. Re'em Segev - 2014 - Ratio Juris 27 (1):47-78.
First- and Second-Level Bias in Automated Decision-making. Ulrik Franke - 2022 - Philosophy and Technology 35 (2):1-20.
Discrimination & Disrespect. Erin Beeghly - 2017 - In Kasper Lippert-Rasmussen (ed.), The Routledge Handbook of the Ethics of Discrimination. New York: Routledge. pp. 83-96.
Non-empirical problems in fair machine learning. Teresa Scantamburlo - 2021 - Ethics and Information Technology 23 (4):703-712.
Measuring Fairness in an Unfair World. Jonathan Herington - 2020 - Proceedings of AAAI/ACM Conference on AI, Ethics, and Society 2020:286-292.

Analytics

Added to PP
2023-02-21

Downloads
37 (#428,140)

6 months
23 (#118,481)

Author Profiles

Hugo Cossette-Lefebvre
Aarhus University
Jocelyn Maclure
McGill University
