Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors?

Ethics and Information Technology 24 (2) (2022)

Abstract

The increasing presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. To this end, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and to medical AI in particular. Second, we propose understanding the value of explainability relative to other available norms of explainable decision-making. Third, pointing out that we usually accept heuristics and uses of bounded rationality in medical decision-making by physicians, we argue that the explainability of medical decisions should be measured not against an idealized diagnostic process, but according to practical considerations. Fourth, we conclude that the issue of explainability standards is best resolved by relocating it to the AI's certifiability and interpretability.

Links

PhilArchive


Similar books and articles

The principle of double effect as a guide for medical decision-making. Georg Spielthenner - 2008 - Medicine, Health Care and Philosophy 11 (4): 465-473.

Analytics

Added to PP
2022-04-08

Author's Profile

Jan-Christoph Heilinger
Ludwig Maximilians Universität, München