Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons?

Cambridge Quarterly of Healthcare Ethics: 1-14 (forthcoming)

Abstract

It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers caused by demographic change. One argument for claiming that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy, since such persons could stay longer in their own homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity for some degree of moral decision-making and agency in order to cope with morally relevant situations (artificial morality). Machine ethics provides the theoretical and ethical framework for artificial morality. This article scrutinizes the question of what artificial moral agents that enhance user autonomy might look like. In particular, it discusses the suggestion that they should be designed as moral avatars of their users in order to enhance user autonomy in a substantial sense.

Links

PhilArchive

