The selective deployment of AI in healthcare

Bioethics 38 (5):391-400 (2024)

Abstract

Machine‐learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well‐represented populations. Faced with this dilemma between equity and utility, we draw on two case studies involving breast cancer and melanoma to argue for the selective deployment of diagnostic and prognostic tools for some well‐represented groups, even if this results in the temporary exclusion of underrepresented patients from algorithmic approaches. We argue that this approach is justifiable when the inclusion of underrepresented patients would cause them to be harmed. While the context of historic injustice poses a considerable challenge for the ethical acceptability of selective algorithmic deployment strategies, we argue that, at least for the case studies addressed in this article, the issue of historic injustice is better addressed through nonalgorithmic measures, including being transparent with patients about the nature of the current epistemic deficits, providing additional services to algorithmically excluded populations, and through urgent commitments to gather additional algorithmic training data from excluded populations, paving the way for universal algorithmic deployment that is accurate for all patient groups. These commitments should be supported by regulation and, where necessary, government funding to ensure that any delays for excluded groups are kept to the minimum. 
We offer an ethical algorithm for algorithms, showing when it is ethical to delay, expedite, or selectively deploy algorithmic systems in healthcare settings.

Links

PhilArchive

Similar books and articles

Machine Learning. Paul Thagard - 2017 - In William Bechtel & George Graham (eds.), A Companion to Cognitive Science. Oxford, UK: Blackwell. pp. 245-249.
On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
Philosophy and machine learning. Paul Thagard - 1990 - Canadian Journal of Philosophy 20 (2):261-76.
Traditional learning theories, process philosophy, and AI. Katie Anderson & Vesselin Petrov (eds.) - 2019 - [Brussels]: Les Éditions Chromatika.

Analytics

Added to PP
2024-04-01

