Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives

BMC Medical Ethics 25 (1):1-15 (2024)

Abstract

Background: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD).

Aim: To explore the perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD).

Methods: Semi-structured, future scenario-based interviews were conducted among patients who had an ICD and/or a heart condition with increased risk of SCD in Germany (n = 9) and the Netherlands (n = 15). We used the principles of the European Commission's Ethics Guidelines for Trustworthy AI to structure the interviews.

Results: Six themes arose from the interviews: the ability of AI to rectify human doctors' limitations; the objectivity of data; whether AI can serve as a second opinion; AI explainability and patient trust; the importance of the 'human touch'; and the personalization of care. Overall, our results reveal a strong desire among patients for more personalized and patient-centered care in the context of ICD implantation. Participants in our study expressed significant concerns about the further loss of the 'human touch' in healthcare when AI is introduced in clinical settings. They believe that this aspect of care is currently inadequately recognized in clinical practice. Participants attribute to doctors the responsibility of evaluating AI recommendations for clinical relevance and aligning them with patients' individual contexts and values, in consultation with the patient.

Conclusion: The 'human touch' that patients exclusively ascribe to human medical practitioners extends beyond sympathy and kindness, and has clinical relevance in medical decision-making. Because this cannot be replaced by AI, we suggest that normative research into the 'right to a human doctor' is needed. Furthermore, policies on patient-centered AI integration in clinical practice should encompass the ethics of everyday practice rather than only principle-based ethics. We suggest that an empirical ethics approach grounded in ethnographic research is exceptionally well suited to pave the way forward.


Similar books and articles

Ethical Machines?Ariela Tubert - 2018 - Seattle University Law Review 41 (4).
Intelligence, Artificial and Otherwise.Paul Dumouchel - 2019 - Forum Philosophicum: International Journal for Philosophy 24 (2):241-258.
Natural problems and artificial intelligence.Tracy B. Henley - 1990 - Behavior and Philosophy 18 (2):43-55.
Ethics of Artificial Intelligence.John-Stewart Gordon & Sven Nyholm - 2021 - Internet Encyclopedia of Philosophy.
Machine Ethics.Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge Univ. Press.

Analytics

Added to PP
2024-04-06


Author's Profile

Jeannette Pols
University of Amsterdam
