Guilty Artificial Minds: Folk Attributions of Mens Rea and Culpability to Artificially Intelligent Agents

Proceedings of the ACM on Human-Computer Interaction 5 (CSCW2) (2021)

Abstract

While philosophers hold that it is patently absurd to blame robots or hold them morally responsible [1], a series of recent empirical studies suggests that people do ascribe blame to AI systems and robots in certain contexts [2]. This is disconcerting: blame might be shifted from the owners, users, or designers of AI systems to the systems themselves, leading to diminished accountability for the responsible human agents [3]. In this paper, we explore one potential underlying reason for robot blame, namely the folk's willingness to ascribe inculpating mental states or "mens rea" to robots. In a vignette-based experiment (N = 513), we presented participants with a situation in which an agent knowingly runs the risk of bringing about substantial harm. We manipulated agent type (human v. group agent v. AI-driven robot) and outcome (neutral v. bad), and measured both moral judgment (wrongness of the action and blameworthiness of the agent) and the mental states attributed to the agent (recklessness and the desire to inflict harm). We found that (i) judgments of wrongness and blame were relatively similar across agent types, possibly because (ii) attributions of mental states were, as suspected, similar across agent types. This raised the question, also explored in the experiment, of whether people attribute knowledge and desire to robots in a merely metaphorical way (e.g., the robot "knew" rather than really knew). However, (iii) according to our data, people were unwilling to downgrade their mens rea ascriptions to a merely metaphorical sense when given the chance. Finally, (iv) we report a surprising and novel finding, which we call the inverse outcome effect on robot blame: people were less willing to blame artificial agents for bad outcomes than for neutral outcomes. This suggests that people are implicitly aware of the dangers of overattributing blame to robots when harm comes to pass, such as inappropriately letting the responsible human agent off the moral hook.
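To make the experimental design concrete, here is a minimal simulation sketch of the 3 (agent type) x 2 (outcome) between-subjects setup described above. The measure names, the 7-point scale, and the random ratings are illustrative assumptions only, not the authors' actual vignettes or data.

```python
# Minimal simulation sketch of the paper's 3 (agent type) x 2 (outcome)
# between-subjects design. All names and the 7-point scale are
# illustrative assumptions, not the authors' materials or data.
import itertools
import random
import statistics

random.seed(0)  # reproducible toy data

AGENT_TYPES = ["human", "group agent", "AI-driven robot"]
OUTCOMES = ["neutral", "bad"]
MEASURES = ["wrongness", "blame", "recklessness", "desire_to_harm"]
CELLS = list(itertools.product(AGENT_TYPES, OUTCOMES))

def simulate_participant():
    """Hypothetical 7-point ratings for one participant (random placeholder)."""
    return {m: random.randint(1, 7) for m in MEASURES}

# Randomly assign N = 513 participants to one of the six cells.
data = []
for _ in range(513):
    agent, outcome = random.choice(CELLS)
    data.append({"agent": agent, "outcome": outcome, **simulate_participant()})

# The reported "inverse outcome effect" would show up here as mean blame for
# the robot being LOWER in the bad-outcome cell than in the neutral-outcome
# cell (random placeholder data will not reproduce it, of course).
for outcome in OUTCOMES:
    cell = [r["blame"] for r in data
            if r["agent"] == "AI-driven robot" and r["outcome"] == outcome]
    print(f"robot / {outcome} outcome: mean blame = {statistics.mean(cell):.2f}")
```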

Similar books and articles

Guilty acts, guilty minds.Stephen P. Garvey - 2020 - New York: Oxford University Press.
The Nature and Significance of Culpability.David O. Brink - 2019 - Criminal Law and Philosophy 13 (2):347-373.
A Dilemma for Moral Deliberation in AI.Ryan Jenkins & Duncan Purves - 2016 - International Journal of Applied Philosophy 30 (2):313-335.
What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
Other Minds, Other Intelligences: The Problem of Attributing Agency to Machines.Sven Nyholm - 2019 - Cambridge Quarterly of Healthcare Ethics 28 (4):592-598.
Anti-natalism and the creation of artificial minds.Bartek Chomanski - forthcoming - Journal of Applied Philosophy.

Analytics

Added to PP
2023-09-22

Downloads
170 (#112,004)

6 months
140 (#24,320)


Author Profiles

Markus Kneer
University of Graz
Michael T. Stuart
University of York

References found in this work

Freedom and Resentment.Peter Strawson - 1962 - Proceedings of the British Academy 48:187-211.
No luck for moral luck.Markus Kneer & Edouard Machery - 2019 - Cognition 182 (C):331-348.
Robots, Law and the Retribution Gap.John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
The force and fairness of blame.Pamela Hieronymi - 2004 - Philosophical Perspectives 18 (1):115-148.
