Institutional Trust in Medicine in the Age of Artificial Intelligence

In Mark Alfano & David Collins (eds.), The Moral Psychology of Trust. Lexington Books (2023)

Abstract

It is easier to talk frankly to a person whom one trusts. It is also easier to agree with a scientist whom one trusts. Even though in both cases the psychological state that underlies the behavior is called ‘trust’, it is controversial whether it is a token of the same psychological type. Trust can serve an affective, epistemic, or other social function, and it interacts with other psychological states in a variety of ways. The way the functional role of trust changes across contexts and objects is further complicated when communities and individuals mediate it through technologies, and even more so when that mediation involves artificial intelligence (AI) and machine learning (ML). In this chapter I examine how trust in institutions, and specifically in the medical profession, is affected by the use of AI and ML. The analysis has two key elements. The first is a disanalogy between institutional trust in medicine and institutional trust in science (Irzik and Kurtulmus 2021, 2019; Kitcher 2001). I note that as AI and ML become a more prominent part of medicine, trust in a medical institution comes to resemble trust in a scientific institution. This is problematic for institutional trust in medicine and for the practice of medicine, since institutional trust in science has been undermined by, among other things, the spread of misinformation online and the replication crisis (Romero 2019). The second is a strong analogy between the psychological state of a person who trusts a scientific report or testimony and that of a patient who trusts the individual recommendations made by a medical professional in a clinical setting. In both cases, institutional trust makes it less likely that a mistake or malfeasance will provoke reactive attitudes, such as blame or anger, directed at individual members of the institution; at the same time, it leaves people vulnerable enough to blame the institution itself. Over time, this can erode trust in the institution, and it naturally motivates policy recommendations that aim to preserve institutional trust. I survey two ways in which this can be done for institutional trust in medicine in the age of AI and ML.


Similar books and articles

Institutional Trust: A Less Demanding Form of Trust? Bernd Lahno - 2001 - Revista Latinoamericana de Estudios Avanzados 15:19-58.
Trust and Ethics in AI. Hyesun Choung, Prabu David & Arun Ross - 2023 - AI and Society 38 (2):733-745.
Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
Medical AI: Is Trust Really the Issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.

