Conversational Artificial Intelligence in Psychotherapy: A New Therapeutic Tool or Agent?

American Journal of Bioethics 23 (5):4-13 (2022)

Abstract

Conversational artificial intelligence (CAI) presents many opportunities in the psychotherapeutic landscape, such as therapeutic support for people with mental health problems who lack access to care. At the same time, its adoption poses many risks that require in-depth ethical scrutiny. The objective of this paper is to complement current research on the ethics of AI for mental health by proposing a holistic, ethical, and epistemic analysis of CAI adoption. First, we focus on the question of whether CAI is best understood as a tool or as an agent. This question serves as a framework for the subsequent ethical analysis of CAI, focusing on topics of (self-)knowledge, (self-)understanding, and relationships. Second, we offer further conceptual and ethical analysis of human-AI interaction and argue that CAI cannot be considered an equal partner in a conversation, as a human therapist can. Instead, CAI's role in a conversation should be restricted to specific functions.


Analytics

Added to PP
2022-04-02


Author's Profile

Jana Sedlakova
University of Zürich
