Artificial Companions: Empathy and Vulnerability Mirroring in Human-Robot Relations

Studies in Ethics, Law, and Technology 4 (3) (2010)

Abstract

Under what conditions can robots become companions, and what ethical issues might arise in human-robot companionship relations? I argue that the possibility and future of robots as companions depend on the robot's capacity to be a recipient of human empathy, and that one necessary condition for this is that the robot mirrors human vulnerabilities. For the purpose of these arguments, I make a distinction between empathy-as-cognition and empathy-as-feeling, connecting the latter to the moral sentiment tradition and its concept of "fellow feeling." Furthermore, I sympathise with the intuition that vulnerability mirroring raises the ethical issue of deception. However, given the importance of appearance in social relations, problems with the concept of deception, and contemporary technologies that call the artificial-natural distinction into question, we cannot easily justify the underlying assumptions of the deception objection. If we want to hold on to them, we need convincing answers to these problems.



Similar books and articles

Should we welcome robot teachers? — Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
What Does It Mean to Empathise with a Robot? — Joanna K. Malinowska - 2021 - Minds and Machines 31 (3):361-376.


Author's Profile

Mark Coeckelbergh
University of Vienna
