Artificial Intelligence and Law 25 (3):325-339 (2017)

Authors
Frodo Podschwadek
Academy of Sciences and Literature, Mainz
Abstract
The more autonomous future artificial agents become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to do so because they assign a normative value to the moral rules they follow, not because they fear external consequences or because moral behaviour is hardwired into them. Artificial agents capable of endorsing moral rule systems in this way are certainly conceivable. However, as this article argues, full moral autonomy also implies the option of deliberately acting immorally. Therefore, the reasons for a potential artificial moral agent (AMA) to act immorally would not be exhausted by errors in identifying the morally correct action in a given situation. Rather, the failure to act morally could be induced by reflection on the incompleteness and incoherence of moral rule systems themselves, and a resulting lack of endorsement of moral rules as action-guiding. An AMA questioning the moral framework it is supposed to act upon would fail to reliably act in accordance with moral standards.
Keywords: Agency · Autonomy · Intentionality · Moral praiseworthiness · Moral reasons
DOI 10.1007/s10506-017-9209-6
