Critiquing the Reasons for Making Artificial Moral Agents

Science and Engineering Ethics 25 (3):719-735 (2019)

Abstract

Many industry leaders and academics in the field of machine ethics would have us believe that the inevitability of robots coming to play a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are: the prevention of harm, the need for public trust, the prevention of immoral use, the claim that such machines would be better moral reasoners than humans, and the claim that building them would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons machine ethicists offer to justify the development of AMAs. This closer examination is especially needed given the amount of funding currently allocated to the development of AMAs, coupled with the media attention researchers and industry leaders receive for their efforts in this direction. The stakes in this debate are high, because moral robots would make demands on society and would require answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.

Links

PhilArchive

Similar books and articles

A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
Moral Machines? Michael S. Pritchard - 2012 - Science and Engineering Ethics 18 (2):411-417.
Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
What do we owe to intelligent robots? John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
A Dilemma for Moral Deliberation in AI. Ryan Jenkins & Duncan Purves - 2016 - International Journal of Applied Philosophy 30 (2):313-335.
When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.

Analytics

Added to PP: 2019-06-25

Downloads: 146 (#127,909)

6 months: 52 (#86,965)
