What’s Wrong with Automated Influence

Canadian Journal of Philosophy 52 (1):125-148 (2022)

Abstract

Automated Influence is the use of Artificial Intelligence to collect, integrate, and analyse people’s data in order to deliver targeted interventions that shape their behaviour. We consider three central objections against Automated Influence, focusing on privacy, exploitation, and manipulation, showing in each case how a structural version of that objection has more purchase than its interactional counterpart. By rejecting the interactional focus of “AI Ethics” in favour of a more structural, political philosophy of AI, we show that the real problem with Automated Influence is the crisis of legitimacy that it precipitates.

Similar books and articles

Mass Surveillance: A Private Affair? Kevin Macnish - 2020 - Moral Philosophy and Politics 7 (1): 9-27.
Ethics of Artificial Intelligence. Vincent C. Müller - 2021 - In Anthony Elliott (ed.), The Routledge Social Science Handbook of AI. London: Routledge. pp. 122-137.
Automated Reasoning about Machines. Andrew Gelsey - 1995 - Artificial Intelligence 74 (1): 1-53.
Automated Space Planning. Charles M. Eastman - 1973 - Artificial Intelligence 4 (1): 41-64.
Exploitation via Labour Power in Marx. Henry Laycock - 1999 - The Journal of Ethics 3 (2): 121-131.

Analytics

Added to PP: 2021-08-17
Downloads: 1,195 (#10,249)
Downloads (last 6 months): 197 (#14,369)


Author Profiles

Claire Benn
Cambridge University
Seth Lazar
Australian National University

Citations of this work

Tightlacing and Abusive Normative Address. Alexander Edlich & Alfred Archer - 2023 - Ergo: An Open Access Journal of Philosophy 10.
Institutions, Automation, and Legitimate Expectations. Jelena Belic - forthcoming - The Journal of Ethics: 1-21.
