  • Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism. Andreas Wolkenstein - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken to develop a critique (or a defense) of how medical black box algorithms (BBAs) are handled or trusted. This article aims to begin to fill this research gap. Specifically, it examines the thesis that such algorithms are to be regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, it (...)
  • Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods The question is approached through conceptual considerations based on the potentials that (...)
  • Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways presents an array of ethical issues that are being discussed in the JME. 1–7 The development of AI technologies that can produce text that will pass plagiarism detectors 8 and are capable of appearing to be written by a human author 9 present new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Filippo Santoni de Sio & Giulio Mecacci - 2021 - Philosophy and Technology 34 (4):1057-1084.
    The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected (...)
  • The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. Sabine Salloch & Nils B. Heyen - 2021 - BMC Medical Ethics 22 (1):1-9.
    Background Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care aiming at supporting clinicians’ practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians’ competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation theory (...)
  • Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  • Ethical artificial intelligence framework for a good AI society: principles, opportunities and perils. Pradeep Paraman & Sanmugam Anamalah - 2023 - AI and Society 38 (2):595-611.
    The purpose and rationale of this paper are to present some fundamental principles, theories, and concepts that we believe mould the nucleus of a good artificial intelligence (AI) society. The morally accepted significance and utilitarian concerns that stem from the inception and realisation of an AI’s structural foundation are displayed in this study. This paper scrutinises the structural foundation, fundamentals, and cardinal righteous remonstrations, as well as the gaps in mechanisms towards novel prospects and perils in determining resilient fundamentals, accountability, (...)
  • Algorithms for Ethical Decision-Making in the Clinic: A Proof of Concept. Lukas J. Meier, Alice Hein, Klaus Diepold & Alena Buyx - 2022 - American Journal of Bioethics 22 (7):4-20.
    Machine intelligence already helps medical staff with a number of tasks. Ethical decision-making, however, has not been handed over to computers. In this proof-of-concept study, we show how an algorithm based on Beauchamp and Childress’ prima-facie principles could be employed to advise on a range of moral dilemma situations that occur in medical institutions. We explain why we chose fuzzy cognitive maps to set up the advisory system and how we utilized machine learning to train it. We report on the (...)
  • Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts. Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such a second opinion, and find that several AI systems reach the levels of accuracy and efficiency needed to make clarifying their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
  • “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
  • Perspectives of patients and clinicians on big data and AI in health: a comparative empirical investigation. Patrik Hummel, Matthias Braun, Serena Bischoff, David Samhammer, Katharina Seitz, Peter A. Fasching & Peter Dabrock - forthcoming - AI and Society:1-15.
    Background Big data and AI applications now play a major role in many health contexts. Much research has already been conducted on ethical and social challenges associated with these technologies. Likewise, there are already some studies that investigate empirically which values and attitudes play a role in connection with their design and implementation. What is still in its infancy, however, is the comparative investigation of the perspectives of different stakeholders. Methods To explore this issue in a multi-faceted manner, we conducted (...)
  • Meaningful Human Control over AI for Health? A Review. Eva Maria Hille, Patrik Hummel & Matthias Braun - forthcoming - Journal of Medical Ethics.
    Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is being invoked for this purpose. However, it is unclear exactly how this concept is to be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results (...)
  • Hammer or Measuring Tape? Artificial Intelligence and Justice in Healthcare. Jan-Hendrik Heinrichs - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-12.
    Artificial intelligence (AI) is a powerful tool for several healthcare tasks. AI tools are suited to optimize predictive models in medicine. Ethical debates about AI’s extension of the predictive power of medical models suggest a need to adapt core principles of medical ethics. This article demonstrates that a popular interpretation of the principle of justice in healthcare needs amendment given the effect of AI on decision-making. The procedural approach to justice, exemplified with Norman Daniels and James Sabin’s “accountability for reasonableness” conception, needs (...)
  • Randomised controlled trials in medical AI: ethical considerations. Thomas Grote - 2022 - Journal of Medical Ethics 48 (11):899-906.
    In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials—particularly randomised controlled trials (RCTs)—is gaining ground. However, an issue that (...)
  • Responsibility and decision-making authority in using clinical decision support systems: an empirical-ethical exploration of German prospective professionals’ preferences and concerns. Florian Funer, Wenke Liedtke, Sara Tinnemeyer, Andrea Diana Klausen, Diana Schneider, Helena U. Zacharias, Martin Langanke & Sabine Salloch - 2023 - Journal of Medical Ethics 50 (1):6-11.
    Machine learning-driven clinical decision support systems (ML-CDSSs) seem impressively promising for future routine and emergency care. However, reflection on their clinical implementation reveals a wide array of ethical challenges. The preferences, concerns and expectations of professional stakeholders remain largely unexplored. Empirical research, however, may help to clarify the conceptual debate and its aspects in terms of their relevance for clinical practice. This study explores, from an ethical point of view, future healthcare professionals’ attitudes to potential changes of responsibility and decision-making (...)
  • AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians’ and midwives’ perspectives on integrating AI-driven CTG into clinical decision making. Rachel Dlugatch, Antoniya Georgieva & Angeliki Kerasidou - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as a case study the use of AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on the perspectives of obstetricians and midwives regarding the ethical and trust-related issues of incorporating AI-driven tools in their practice, this paper explores the conditions that AI-driven CTG (...)
  • Watson, autonomy and value flexibility: revisiting the debate. Jasper Debrabander & Heidi Mertes - 2022 - Journal of Medical Ethics 48 (12):1043-1047.
    Many ethical concerns have been voiced about Clinical Decision Support Systems (CDSSs). Special attention has been paid to the effect of CDSSs on autonomy, responsibility, fairness and transparency. This journal has featured a discussion between Rosalind McDougall and Ezio Di Nucci that focused on the impact of IBM’s Watson for Oncology (Watson) on autonomy. The present article elaborates on this discussion in three ways. First, using Jonathan Pugh’s account of rational autonomy we show that how Watson presents its results might (...)
  • Research on the Clinical Translation of Health Care Machine Learning: Ethicists Experiences on Lessons Learned. Jennifer Blumenthal-Barby, Benjamin Lang, Natalie Dorfman, Holland Kaplan, William B. Hooper & Kristin Kostick-Quenet - 2022 - American Journal of Bioethics 22 (5):1-3.
    The application of machine learning in health care holds great promise for improving care. Indeed, our own team is collaborating with experts in machine learning and statistical modeling to bu...
  • Rechtliche Aspekte des Einsatzes von KI und Robotik in Medizin und Pflege [Legal aspects of the use of AI and robotics in medicine and care]. Susanne Beck, Michelle Faber & Simon Gerndt - 2023 - Ethik in der Medizin 35 (2):247-263.
    Abstract The rapid developments in artificial intelligence and robotics confront not only ethics but also the law with new challenges, particularly in the areas of medicine and care. In principle, the use of AI has the potential to facilitate, if not improve, both medical treatment and adequate practices of care. Administrative tasks, the monitoring of vital signs and their parameters, and the examination of tissue samples, for instance, could be carried out autonomously. In diagnostics and therapy, systems can (...)
  • Teasing out Artificial Intelligence in Medicine: An Ethical Critique of Artificial Intelligence and Machine Learning in Medicine. Mark Henderson Arnold - 2021 - Journal of Bioethical Inquiry 18 (1):121-139.
    The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. There are both potential advantages and disadvantages with such technology in advancing the interests of patients, with resultant ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi- or fully-autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI (and those who control it) has created challenges to conventional assessments (...)
  • Practical, epistemic and normative implications of algorithmic bias in healthcare artificial intelligence: a qualitative study of multidisciplinary expert perspectives. Yves Saint James Aquino, Stacy M. Carter, Nehmat Houssami, Annette Braunack-Mayer, Khin Than Win, Chris Degeling, Lei Wang & Wendy A. Rogers - forthcoming - Journal of Medical Ethics.
    Background There is a growing concern about artificial intelligence (AI) applications in healthcare that can disadvantage already under-represented and marginalised groups (eg, based on gender or race). Objectives Our objectives are to canvass the range of strategies stakeholders endorse in attempting to mitigate algorithmic bias, and to consider the ethical question of responsibility for algorithmic bias. Methodology The study involves in-depth, semistructured interviews with healthcare workers, screening programme managers, consumer health representatives, regulators, data scientists and developers. Results Findings reveal considerably divergent views on three key issues. First, (...)