Social choice ethics in artificial intelligence
AI and Society 35 (1):165-176 (2020)
Abstract
A major approach to the ethics of artificial intelligence is to use social choice, in which the AI is designed to act according to the aggregate views of society. This approach is found in the AI ethics of “coherent extrapolated volition” and “bottom-up ethics”. This paper shows that the normative basis of AI social choice ethics is weak because there is no single aggregate ethical view of society. Instead, the design of social choice AI faces three sets of decisions: standing, concerning whose ethical views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined into a single view that will guide AI behavior. These decisions must be made up front in the initial AI design; designers cannot “let the AI figure it out”. Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results. Furthermore, non-social choice ethics face similar issues, such as whether to count future generations or the AI itself. These issues can be more important than the question of whether or not to use social choice ethics. Attention should focus on these issues, not on social choice.
DOI: 10.1007/s00146-017-0760-1
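The three sets of decisions described in the abstract can be made concrete with a small sketch. The code below is purely illustrative and is not taken from the paper: standing, measurement, and aggregation appear as explicit parameters that a designer must fix up front, and every name, data value, and aggregation rule in it is a hypothetical placeholder.

```python
from typing import Callable, Dict, List

# A hypothetical "view": support levels for each option the AI might take.
View = Dict[str, float]

def social_choice(
    population: List[str],
    has_standing: Callable[[str], bool],      # standing: whose views count
    measure_view: Callable[[str], View],      # measurement: how views are identified
    aggregate: Callable[[List[View]], View],  # aggregation: how views are combined
) -> View:
    """Combine individual views into one view that would guide AI behavior."""
    included = [person for person in population if has_standing(person)]
    views = [measure_view(person) for person in included]
    return aggregate(views)

def mean_aggregate(views: List[View]) -> View:
    """One possible aggregation rule: average the support for each option."""
    options = {option for view in views for option in view}
    return {
        option: sum(view.get(option, 0.0) for view in views) / len(views)
        for option in options
    }

if __name__ == "__main__":
    # Hypothetical survey data; every name and number here is made up.
    surveyed = {
        "alice": {"option_a": 1.0},
        "bob": {"option_b": 1.0},
        "carol": {"option_a": 0.5, "option_b": 0.5},
    }
    result = social_choice(
        population=list(surveyed),
        has_standing=lambda person: True,           # include everyone
        measure_view=lambda person: surveyed[person],
        aggregate=mean_aggregate,
    )
    print(result)  # {'option_a': 0.5, 'option_b': 0.5}
```

Swapping in a different has_standing predicate, measurement procedure, or aggregation rule changes the resulting guidance, which is the abstract's point that each of these choices carries major consequences for AI behavior.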
Similar books and articles
Decision Theory and Social Ethics: Issues in Social Choice. H. W. Gottinger & W. Leinfellner - 1978 - Springer Verlag.
Decisions vs. Willingness-to-Pay in Social Choice. P. Anand - 2000 - Environmental Values 9 (4):419-430.
Rational Choice, Collective Decisions, and Social Welfare. Kotaro Suzumura - 2009 - Cambridge University Press.
Social choice and individual capabilities. Mozaffar Qizilbash - 2007 - Politics, Philosophy and Economics 6 (2):169-192.
Rejoinder to Tibor R. Machan, "Rand and Choice" (Spring 2006): Regarding Choice and the Foundation of Morality: Reflections on Rand's Ethics. Douglas B. Rasmussen - 2006 - Journal of Ayn Rand Studies 7 (2):309-328.
Individualism and Responsibility in the Rationalist Ethics: The Actuality of Spinoza's Ethics. Gabriela Tănăsescu - 2015 - Dialogue and Universalism 25 (1):222-230.
Constraints and the Measurement of Freedom of Choice. Sebastiano Bavetta & Marco Del Seta - 2001 - Theory and Decision 50 (3):213-238.
Rational choice and AGM belief revision. Giacomo Bonanno - 2009 - Artificial Intelligence 173 (12-13):1194-1203.
Social identity: rational choice theory as an alternative approach to conceptualization. Dmitry Davydov - 2012 - Russian Sociological Review 11 (2):131-142.
Compensatory Ethics. Chen-Bo Zhong, Gillian Ku, Robert B. Lount & J. Keith Murnighan - 2010 - Journal of Business Ethics 92 (3):323-339.
Analytics
Added to PP: 2017-09-30
Downloads: 113 (#111,900)
Downloads (last 6 months): 6 (#132,940)
Citations of this work
How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771-1796.
Artificial Intelligence, Values, and Alignment. Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.
Ethics as a service: a pragmatic operationalisation of AI ethics. Jessica Morley, Anat Elhalal, Francesca Garcia, Libby Kinsey, Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):239-256.
Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2021 - Philosophy and Technology 34 (1):7-21.
References found in this work
Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - Oxford University Press.
Better Never to Have Been: The Harm of Coming Into Existence. David Benatar - 2006 - New York: Oxford University Press.