Trust in the Danger Zone: Individual Differences in Confidence in Robot Threat Assessments

Frontiers in Psychology 13 (2022)

Abstract

Effective human–robot teaming increasingly requires humans to work with intelligent, autonomous machines. However, novel features of intelligent autonomous systems, such as social agency and incomprehensibility, may influence the human's trust in the machine. The human operator's mental model for machine functioning is critical for trust. People may consider an intelligent machine partner as either an advanced tool or as a human-like teammate. This article reports a study that explored the role of individual differences in the mental model in a simulated environment. Multiple dispositional factors that may influence the dominant mental model were assessed. These included the Robot Threat Assessment (RoTA), which measures the person's propensity to apply tool and teammate models in security contexts. Participants were paired with an intelligent robot tasked with making threat assessments in an urban setting. A transparency manipulation was used to influence the dominant mental model. For half of the participants, threat assessment was described as physics-based; the remainder received transparency information that described psychological cues. We expected that the physics-based transparency messages would guide the participant toward treating the robot as an advanced machine, while psychological messaging would encourage perceptions of the robot as acting like a human partner. We also manipulated situational danger cues present in the simulated environment. Participants rated their trust in the robot's decision, as well as threat and anxiety, for each of 24 urban scenes. They also completed the RoTA and additional individual-difference measures. Findings showed that trust assessments reflected the degree of congruence between the robot's decision and situational danger cues, consistent with participants acting as Bayesian decision makers. Several scales, including the RoTA, were more predictive of trust when the robot was making psychology-based decisions, implying that trust reflected individual differences in the mental model of the robot as a teammate. These findings suggest scope for designing training that uncovers and mitigates the individual's biases toward intelligent machines.
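
To make the Bayesian interpretation concrete, the following sketch is offered as an illustration only; it is not drawn from the paper, and the robot's hit rate, false-alarm rate, and the prior threat level implied by the scene's danger cues are assumed values. It shows how an observer applying Bayes' rule would assign a higher probability of the robot being correct when its decision is congruent with the situational danger cues, which is one way trust ratings could come to track congruence.

# Illustrative sketch only -- not the authors' model or analysis. The robot's hit
# rate, false-alarm rate, and the prior threat level implied by the scene's danger
# cues are all assumed values chosen for illustration.

def posterior_threat(prior_threat, robot_says_threat, hit_rate=0.8, false_alarm_rate=0.2):
    """Bayes' rule: P(threat | robot's report), given the robot's assumed reliability."""
    if robot_says_threat:
        like_threat, like_safe = hit_rate, false_alarm_rate
    else:
        like_threat, like_safe = 1.0 - hit_rate, 1.0 - false_alarm_rate
    num = like_threat * prior_threat
    return num / (num + like_safe * (1.0 - prior_threat))

def prob_robot_correct(prior_threat, robot_says_threat, **kwargs):
    """Posterior probability that the robot's report is correct -- a crude trust proxy."""
    p = posterior_threat(prior_threat, robot_says_threat, **kwargs)
    return p if robot_says_threat else 1.0 - p

# A scene rich in danger cues (assumed prior threat of 0.7):
prior = 0.7
congruent = prob_robot_correct(prior, robot_says_threat=True)     # robot agrees with cues
incongruent = prob_robot_correct(prior, robot_says_threat=False)  # robot contradicts cues
print(f"congruent report: {congruent:.2f}, incongruent report: {incongruent:.2f}")

In this toy example, a threat report in the high-danger scene yields a correctness probability of about 0.90, whereas a "no threat" report in the same scene drops it to roughly 0.63, mirroring the congruence effect described in the abstract.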

Similar books and articles

Intellectual Trust in Oneself and Others.Judith Baker - 2003 - Philosophical Review 112 (4):586-589.
On the impact of different types of errors on trust in human-robot interaction.Rebecca Flook, Anas Shrinah, Luc Wijnen, Kerstin Eder, Chris Melhuish & Séverin Lemaignan - 2019 - Interaction Studies 20 (3):455-486.
Trust.Alphonso Lingis - 2004 - Univ of Minnesota Press.
Robotrust and Legal Responsibility.Ugo Pagallo - 2010 - Knowledge, Technology & Policy 23 (3):367-379.
The Nature of Epistemic Trust.Benjamin W. McCraw - 2015 - Social Epistemology 29 (4):413-430.
Robot Lies in Health Care: When Is Deception Morally Permissible?Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2):169-162.
Towards robots that trust.Alan R. Wagner & Paul Robinette - 2015 - Interaction Studies 16 (1):89-117.
Trust and Transforming Medical Institutions.Rosamond Rhodes & James J. Strain - 2000 - Cambridge Quarterly of Healthcare Ethics 9 (2):205-217.
