Robots As Intentional Agents: Using Neuroscientific Methods to Make Robots Appear More Social

Frontiers in Psychology 8:281017 (2017)

Abstract

Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user’s needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate this, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye tracking, electroencephalography (EEG), or functional near-infrared spectroscopy (fNIRS) embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

Links

PhilArchive





Similar books and articles

From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible? Sven Nyholm & Lily Frank - 2017 - In John Danaher & Neil McArthur (eds.), Robot Sex: Social and Ethical Implications. Cambridge, MA: MIT Press. pp. 219-244.
Can we trust robots? Mark Coeckelbergh - 2012 - Ethics and Information Technology 14 (1):53-60.
On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
Moral appearances: emotions, robots, and human morality. [REVIEW] Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
What should we want from a robot ethic? Peter M. Asaro - 2006 - International Review of Information Ethics 6 (12):9-16.
Robots, Trust and War. Thomas W. Simpson - 2011 - Philosophy and Technology 24 (3):325-337.

Analytics

Added to PP
2017-10-25

Downloads
82 (#204,483)

6 months
56 (#82,121)