Limits of trust in medical AI

Journal of Medical Ethics 46 (7):478-481 (2020)

Abstract

Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks, such as detecting diabetic retinopathy from images, predicting hospital readmissions, and aiding in the discovery of new drugs. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied on, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely on AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.

Similar books and articles

Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics.

Analytics

Added to PP: 2020-03-28

Downloads: 984 (#1,112)

Downloads (6 months): 241 (#85,681)


Author's Profile

Joshua Hatherley
Aarhus University

Citations of this work

Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
Medical AI: is trust really the issue? Jakob Thrane Mainz - 2024 - Journal of Medical Ethics 50 (5):349-350.
Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
The virtues of interpretable medical artificial intelligence. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
