AI, alignment, and the categorical imperative

AI and Ethics 3:337-344 (2023)

Abstract

In recent articles, Tae Wan Kim, John Hooker, and Thomas Donaldson attempt to solve the alignment problem. As they define it, the alignment problem is the issue of how to give AI systems moral intelligence. They contend that machines might be programmed with a version of Kantian ethics cast in deontic modal logic. On their view, machines can be aligned with human values if they obey principles of universalization and autonomy, as well as a deontic utilitarian principle. Programming machines to do so might be useful, in their view, for applications such as future autonomous vehicles. Their proposal draws on both traditional logic-based and contemporary connectionist approaches to fuse factual information with normative principles. I argue that this approach makes demands of machines that go beyond what is currently feasible and may extend past the limits of the possible for AI. I also argue that a deontological ethics for machines should place greater stress on the formula of humanity of the Kantian categorical imperative: the principle that one ought never treat a person as a mere means. Recognizing what makes a person a person requires ethical insight, and similar insight is needed to distinguish treatment as a means from treatment as a mere means. The resources in Kim, Hooker, and Donaldson's approach are insufficient for this reason. In light of these alignment concerns, hesitation regarding the deployment of autonomous machines is warranted.

Similar books and articles

Domesticating Artificial Intelligence.Luise Müller - 2022 - Moral Philosophy and Politics 9 (2):219-237.
8 Rightful Machines.Ava Thomas Wright - 2022 - In Hyeongjoo Kim & Dieter Schönecker (eds.), Kant and Artificial Intelligence. De Gruyter. pp. 223-238.
Calibrating machine behavior: a challenge for AI alignment.Erez Firt - 2023 - Ethics and Information Technology 25 (3):1-8.
Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
Artificial Intelligence, Values, and Alignment.Iason Gabriel - 2020 - Minds and Machines 30 (3):411-437.


Author's Profile

Fritz J. McDonald
Oakland University

