Artificial Intelligence in a Structurally Unjust Society

Feminist Philosophy Quarterly 8 (3/4):Article 3 (2022)


Increasing concerns have been raised regarding artificial intelligence (AI) bias, and in response, efforts have been made to pursue AI fairness. In this paper, we argue that the idea of structural injustice serves as a helpful framework for clarifying the ethical concerns surrounding AI bias—including the nature of its moral problem and the responsibility for addressing it—and for reconceptualizing the approach to pursuing AI fairness. Using AI in healthcare as a case study, we argue that AI bias is a form of structural injustice that exists when AI systems interact with other social factors to exacerbate existing social inequalities, making some groups of people more vulnerable to undeserved burdens while conferring unearned benefits on others. The goal of AI fairness, understood this way, is to pursue a more just social structure through the development and use of AI systems when appropriate. We further argue that all participating agents in the unjust social structure associated with AI bias bear a shared responsibility to join collective action aimed at reforming that structure, and we provide a list of practical recommendations for agents in various social positions to contribute to this collective action.

Similar books and articles

Intelligence, Artificial and Otherwise. Paul Dumouchel - 2019 - Forum Philosophicum: International Journal for Philosophy 24 (2):241-258.
Ethics of Artificial Intelligence. John-Stewart Gordon & Sven Nyholm - 2021 - Internet Encyclopedia of Philosophy.
Embodied artificial intelligence once again. Anna Sarosiek - 2017 - Philosophical Problems in Science 63:231-240.
On the artificiality of artificial intelligence. Hans F. M. Crombag - 1993 - Artificial Intelligence and Law 2 (1):39-49.
Consciousness, intentionality, and intelligence: Some foundational issues for artificial intelligence. Murat Aydede & Guven Guzeldere - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):263-277.



Author's Profile

Ting-an Lin
Stanford University

Citations of this work

(Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
Acting Together to Address Structural Injustice: A Deliberative Mini-Public Proposal. Ting-an Lin - forthcoming - In Kevin Walton, Wojciech Sadurski & Coel Kirkby (eds.), Responding to Injustice. Routledge.
On Hedden's proof that machine learning fairness metrics are flawed. Anders Søgaard, Klemens Kappel & Thor Grünbaum - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
