Can a robot lie?

Abstract

The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g. in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants that explores the following three questions: (i) Are ordinary people willing to ascribe intentions to deceive to artificial agents? (ii) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (iii) Do they blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently attract.

Links

PhilArchive

External links

  • This entry has no external links.

Similar books and articles

When is a robot a moral agent?John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
There is no 'I' in 'Robot': Robots and Utilitarianism (expanded & revised).Christopher Grau - 2011 - In Susan Anderson & Michael Anderson (eds.), Machine Ethics. Cambridge University Press. pp. 451.
Humans, Animals, and Robots.Mark Coeckelbergh - 2011 - International Journal of Social Robotics 3 (2):197-204.
Should we welcome robot teachers?Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
Robot Betrayal: a guide to the ethics of robotic deception.John Danaher - 2020 - Ethics and Information Technology 22 (2):117-128.
Companion robots: the hallucinatory danger of human-robot interactions.Piercosma Bisconti & Daniele Nardi - 2018 - In AIES '18: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 17-22.
Robot teachers: The very idea!Amanda Sharkey - 2015 - Behavioral and Brain Sciences 38.
Robot Lies in Health Care: When Is Deception Morally Permissible?Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2):169-192.
Cooperative gazing behaviors in human multi-robot interaction.Tian Xu, Hui Zhang & Chen Yu - 2013 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 14 (3):390-418.
Are robots like people?Sarah Woods, Kerstin Dautenhahn, Christina Kaouri, René te Boekhorst, Kheng Lee Koay & Michael L. Walters - 2007 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 8 (2):281-305.

Analytics

Added to PP
2020-10-27

Downloads
440 (#33,928)

6 months
72 (#37,157)


Author's Profile

Markus Kneer
University of Zürich

Citations of this work

Trust in Medical Artificial Intelligence: A Discretionary Account.Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
Playing the Blame Game with Robots.Markus Kneer & Michael T. Stuart - 2021 - In Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction (HRI'21 Companion). New York, NY, USA.
What Might Machines Mean?Mitchell Green & Jan G. Michel - 2022 - Minds and Machines 32 (2):323-338.


References found in this work

Intention.G. E. M. Anscombe - 1957 - Ithaca, N.Y.: Cornell University Press.
Intention.G. E. M. Anscombe - 1957 - Proceedings of the Aristotelian Society 57:321-332.
