Moral appearances: emotions, robots, and human morality [Book Review]

Ethics and Information Technology 12 (3):235-241 (2010)

Abstract

Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.

Links

PhilArchive





Analytics

Added to PP
2010-03-22


Author's Profile

Mark Coeckelbergh
University of Vienna