Do androids dream of normative endorsement? On the fallibility of artificial moral agents

Artificial Intelligence and Law 25 (3):325-339 (2017)

Abstract

The more autonomous future artificial agents become, the more important it seems to equip them with a capacity for moral reasoning, that is, to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding: they follow moral rules because they assign normative value to them, not because they fear external consequences or because moral behaviour is hardwired into them. Artificial agents capable of endorsing moral rule systems in this way are certainly conceivable. However, as this article argues, full moral autonomy also implies the option of deliberately acting immorally. The reasons for a potential artificial moral agent (AMA) to act immorally would therefore not be exhausted by failures to identify the morally correct action in a given situation. Rather, a failure to act morally could be induced by reflection on the incompleteness and incoherence of moral rule systems themselves, and by a resulting lack of endorsement of moral rules as action-guiding. An AMA that questions the moral framework it is supposed to act upon would fail to reliably act in accordance with moral standards.

Similar books and articles

Judgment, Deliberation, and the Self-effacement of Moral Theory. Damian Cox - 2012 - Journal of Value Inquiry 46 (3):289-302.
Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
On the Moral Equality of Artificial Agents. Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
Moral Machines? Michael S. Pritchard - 2012 - Science and Engineering Ethics 18 (2):411-417.
Praise without Perfection: A Dilemma for Right-Making Reasons. Paulina Sliwa - 2015 - American Philosophical Quarterly 52 (2).
Out of character: on the creation of virtuous machines. [Review] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
A Dilemma for Moral Deliberation in AI. Ryan Jenkins & Duncan Purves - 2016 - International Journal of Applied Philosophy 30 (2):313-335.
The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
A Kantian moral duty for the soon-to-be demented to commit suicide. Dennis R. Cooley - 2007 - American Journal of Bioethics 7 (6):37-44.
The Moral Agency of Group Agents. Christopher Thompson - 2018 - Erkenntnis 83 (3):517-538.

Author's Profile

Frodo Podschwadek
Academy of Sciences and Literature | Mainz
