Robot Lies in Health Care: When Is Deception Morally Permissible?

Kennedy Institute of Ethics Journal 25 (2):169-192 (2015)

Abstract

Autonomous robots increasingly interact with users who have limited knowledge of robotics and are likely to hold an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g., conversational systems in elder-care contexts, or toy robots in child care). This raises the question of whether misleading or even actively deceiving the user of an autonomous artifact about the machine’s capabilities is morally bad, and why. By analyzing trust, autonomy, and the erosion of trust in communicative acts as consequences of deceptive robot behavior, we formulate four criteria that must be fulfilled for robot deception to be morally permissible, and in some cases even morally indicated.

Links

PhilArchive





Similar books and articles

Being moral and handling the truth. Laurence Thomas - 2013 - Social Philosophy and Policy 30 (1-2):1-20.
Is health care a need? Eric Matthews - 1998 - Medicine, Health Care and Philosophy 1 (2):155-161.
Health care responsibility. Andre Vries - 1980 - Theoretical Medicine and Bioethics 1 (1):95-106.
The social determinants of health, care ethics and just health care. Daniel Engster - 2014 - Contemporary Political Theory 13 (2):149-167.

Analytics

Added to PP
2015-07-07

Downloads
65 (#222,052)

6 months
8 (#154,104)
