Measuring perceived empathy in dialogue systems

AI and Society:1-15 (forthcoming)

Abstract

Dialogue systems (DSs), from virtual personal assistants such as Siri, Cortana, and Alexa to state-of-the-art systems such as BlenderBot3 and ChatGPT, are already widely available, are used in a variety of applications, and are increasingly part of many people’s lives. However, enabling them to use empathetic language convincingly is still an emerging research topic. Such systems generally use complex neural networks to learn the patterns of typical human language use, and the interactions in which they participate are usually mediated via interactive text-based or speech-based interfaces. In human–human interaction, empathy has been shown to promote prosocial behaviour and improve interaction. In the context of DSs, advancing the understanding of how perceptions of empathy affect interactions requires greater clarity about how empathy is measured and assessed. Assessing the way DSs create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny than they have received so far. However, there is currently no widely accepted evaluation method for determining the degree of empathy that any given system possesses (or, at least, appears to possess). Different research teams currently use a variety of automated metrics, alongside different forms of subjective human assessment such as questionnaires, self-assessment measures, and narrative engagement scales. This diversity of evaluation practice means that, given two DSs, it is usually impossible to determine which of them conveys the greater degree of empathy in its dialogic exchanges with human users. Acknowledging this problem, the present article provides an overview of how empathy is measured in human–human interactions and considers some of the ways it is currently measured in human–DS interactions. Finally, it introduces a novel third-person analytical framework, the Empathy Scale for Human–Computer Communication (ESHCC), to support greater uniformity in how perceived empathy is measured during interactions with state-of-the-art DSs.

Links

PhilArchive





Similar books and articles

Time to re-humanize algorithmic systems. Minna Ruckenstein - 2023 - AI and Society 38 (3):1241-1242.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):453-455.
AI and consciousness. Sam S. Rakover - forthcoming - AI and Society:1-2.
Call for papers. [author unknown] - 2018 - AI and Society 33 (3):457-458.
Is LaMDA sentient? Max Griffiths - forthcoming - AI and Society:1-2.
The inside out mirror. Sue Pearson - 2021 - AI and Society 36 (3):1069-1070.
A Literature of Working Life. R. Ennals - 2002 - AI and Society 16 (1-2):168-170.
Review of Reality+. [REVIEW] Miloš Agatonović - forthcoming - AI and Society:1-2.
The emergence and evolution of urban AI. Michael Batty - 2023 - AI and Society 38 (3):1045-1048.
The Turing test is a joke. Attay Kremer - 2024 - AI and Society 39 (1):399-401.

Analytics

Added to PP
2023-07-25

Downloads
23 (#687,700)

6 months
12 (#223,952)
