The Case for Virtuous Robots

AI and Ethics 3 (1):135-144 (2023)

Abstract

Is it possible to build virtuous robots? And is it a good idea? In this paper in machine ethics, I offer a positive answer to both questions. Although moral architectures based on deontology and utilitarianism have most often been considered, I argue that a virtue ethics approach may ultimately be more promising for programming artificial moral agents (AMAs). The basic idea is that a robot should behave as a virtuous person would behave (or would recommend). With the help of machine learning technology, it is now conceivable to get an AMA to learn from moral exemplars. To support my claim, I sketch the steps involved in building such a virtuous robot, using the thought experiment of programming an autonomous car that faces a trolley-like dilemma. It appears that, at least in certain contexts, the virtue ethics approach can provide its own original solution. I then give four reasons to favor it. Not only are virtuous robots technically feasible, but they have the advantage over their deontological and utilitarian counterparts of fostering normative consensus between these moral schools, improving social acceptability, and beginning to address the technical challenge of moral perception.

Links

PhilArchive



Author's Profile

Martin Gibert
Université de Montréal
