Will intelligent machines become moral patients?

Philosophy and Phenomenological Research 109 (1):95-116 (2023)

Abstract

This paper addresses a question about the moral status of Artificial Intelligence (AI): will AIs ever become moral patients? I argue that, while it is in principle possible for an intelligent machine to be a moral patient, there is no good reason to believe this will in fact happen. I start from the plausible assumption that traditional artifacts do not meet a minimal necessary condition of moral patiency: having a good of one's own. I then argue that intelligent machines are no different from traditional artifacts in this respect. To make this argument, I examine the feature of AIs that enables them to improve their intelligence, i.e., machine learning. I argue that there is no reason to believe that future advances in machine learning will take AIs closer to having a good of their own. I thus argue that concerns about the moral status of future AIs are unwarranted. Nothing about the nature of intelligent machines makes them a better candidate for acquiring moral patiency than the traditional artifacts whose moral status does not concern us.

Analytics

Added to PP: 2023-09-12
Downloads: 490 (#48,747)
Downloads (6 months): 181 (#18,296)

Author's Profile

Parisa Moosavi
York University

Citations of this work

Understanding Artificial Agency. Leonard Dung - forthcoming - Philosophical Quarterly.
