  • AI and the need for justification (to the patient). Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Can an AI-carebot be filial? Reflections from Confucian ethics. Kathryn Muyskens, Yonghui Ma & Michael Dunn - forthcoming - Nursing Ethics.
    This article discusses the application of artificially intelligent robots within eldercare and explores a series of ethical considerations, including the challenges that AI (Artificial Intelligence) technology poses to traditional Chinese Confucian filial piety. From the perspective of Confucian ethics, the paper argues that robots cannot adequately fulfill duties of care. Due to their detachment from personal relationships and interactions, the “emotions” of AI robots are merely performative reactions in different situations, rather than actual emotional abilities. No matter how “humanized” robots (...)
  • Healthy Mistrust: Medical Black Box Algorithms, Epistemic Authority, and Preemptionism. Andreas Wolkenstein - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    In the ethics of algorithms, a specifically epistemological analysis is rarely undertaken in order to gain a critique (or a defense) of the handling of or trust in medical black box algorithms (BBAs). This article aims to begin to fill this research gap. Specifically, the thesis is examined according to which such algorithms are regarded as epistemic authorities (EAs) and that the results of a medical algorithm must completely replace other convictions that patients have (preemptionism). If this were true, it (...)
  • The impact of artificial intelligence on jobs and work in New Zealand. James Maclaurin, Colin Gavaghan & Alistair Knott - 2021 - Wellington, New Zealand: New Zealand Law Foundation.
    Artificial Intelligence (AI) is a diverse technology. It is already having significant effects on many jobs and sectors of the economy and over the next ten to twenty years it will drive profound changes in the way New Zealanders live and work. Within the workplace AI will have three dominant effects. This report (funded by the New Zealand Law Foundation) addresses: Chapter 1 Defining the Technology of Interest; Chapter 2 The changing nature and value of work; Chapter 3 AI and (...)
  • Machine learning in healthcare and the methodological priority of epistemology over ethics. Thomas Grote - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    This paper develops an account of how the implementation of ML models into healthcare settings requires revising the methodological apparatus of philosophical bioethics. On this account, ML models are cognitive interventions that provide decision-support to physicians and patients. Due to reliability issues, opaque reasoning processes, and information asymmetries, ML models pose inferential problems for them. These inferential problems lay the grounds for many ethical problems that currently claim centre-stage in the bioethical debate. Accordingly, this paper argues that the best way (...)
  • Ethics of generative AI. Hazem Zohny, John McMillan & Mike King - 2023 - Journal of Medical Ethics 49 (2):79-80.
    Artificial intelligence (AI) and its introduction into clinical pathways present an array of ethical issues that are being discussed in the JME.1–7 The development of AI technologies that can produce text that will pass plagiarism detectors8 and are capable of appearing to be written by a human author9 presents new issues for medical ethics. One set of worries concerns authorship and whether it will now be possible to know that an author or student in fact produced submitted (...)
  • Somewhere between dystopia and utopia. Jesse Wall - 2020 - Journal of Medical Ethics 46 (3):161-162.
    The Journal of Medical Ethics can sometimes read part Men Like Gods and part A Brave New World. At times, we learn how all controversies can be resolved with reference to four principles. At other times, we learn how “every discovery in pure science is potentially subversive”.1 This issue is no exception. Here, we can read about the utopia of gene editing, manufactured organs, and machine learnt algorithmic decision-making. We can also read about the dystopia of inherited disorders from edited germlines, (...)
  • Concordance as evidence in the Watson for Oncology decision-support system. Aaro Tupasela & Ezio Di Nucci - 2020 - AI and Society 35 (4):811-818.
    Machine learning platforms have emerged as a new promissory technology that some argue will revolutionize work practices across a broad range of professions, including medical care. During the past few years, IBM has been testing its Watson for Oncology platform at several oncology departments around the world. Published reports, news stories, as well as our own empirical research show that in some cases, the levels of concordance over recommended treatment protocols between the platform and human oncologists have been quite low. (...)
  • Individual benefits and collective challenges: Experts’ views on data-driven approaches in medical research and healthcare in the German context. Silke Schicktanz & Lorina Buhr - 2022 - Big Data and Society 9 (1).
    Healthcare provision, like many other sectors of society, is undergoing major changes due to the increased use of data-driven methods and technologies. This increased reliance on big data in medicine can lead to shifts in the norms that guide healthcare providers and patients. Continuous critical normative reflection is called for to track such potential changes. This article presents the results of an interview-based study with 20 German and Swiss experts from the fields of medicine, life science research, informatics and humanities (...)
  • AIgorithmic Ethics: A Technically Sweet Solution to a Non-Problem. Aurelia Sauerbrei, Nina Hallowell & Angeliki Kerasidou - 2022 - American Journal of Bioethics 22 (7):28-30.
    In their proof-of-concept study, Meier et al. built an algorithm to aid ethical decision making. In the limitations section of their paper, the authors state a frequently cited ax...
  • The ethics of machine learning-based clinical decision support: an analysis through the lens of professionalisation theory. Sabine Salloch & Nils B. Heyen - 2021 - BMC Medical Ethics 22 (1):1-9.
    Background: Machine learning-based clinical decision support systems (ML_CDSS) are increasingly employed in various sectors of health care aiming at supporting clinicians’ practice by matching the characteristics of individual patients with a computerised clinical knowledge base. Some studies even indicate that ML_CDSS may surpass physicians’ competencies regarding specific isolated tasks. From an ethical perspective, however, the usage of ML_CDSS in medical practice touches on a range of fundamental normative issues. This article aims to add to the ethical discussion by using professionalisation theory (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Automating Justice: An Ethical Responsibility of Computational Bioethics. Vasiliki Rahimzadeh, Jonathan Lawson, Jinyoung Baek & Edward S. Dove - 2022 - American Journal of Bioethics 22 (7):30-33.
    In their proof-of-concept, Meier and colleagues describe the purpose and programming decisions underpinning Medical Ethics Advisor, an automated decision support system used t...
  • Testimonial injustice in medical machine learning. Giorgia Pozzi - 2023 - Journal of Medical Ethics 49 (8):536-540.
    Machine learning (ML) systems play an increasingly relevant role in medicine and healthcare. As their applications move ever closer to patient care and cure in clinical settings, ethical concerns about the responsibility of their use come to the fore. I analyse an aspect of responsible ML use that bears not only an ethical but also a significant epistemic dimension. I focus on ML systems’ role in mediating patient–physician relations. I thereby consider how ML systems may silence patients’ voices and relativise (...)
  • Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence (AI)-based technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  • Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors? Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2).
    The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and medical AI in particular. Second, we propose to understand the value of (...)
  • Justice and the Normative Standards of Explainability in Healthcare. Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • No we shouldn’t be afraid of medical AI; it involves risks and opportunities. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (8):559-559.
    In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
  • Rethinking explainability: toward a postphenomenology of black-box artificial intelligence in medicine. Jay R. Malone, Jordan Mason & Annie B. Friedrich - 2022 - Ethics and Information Technology 24 (1).
    In recent years, increasingly advanced artificial intelligence (AI), and in particular machine learning, has shown great promise as a tool in various healthcare contexts. Yet as machine learning in medicine has become more useful and more widely adopted, concerns have arisen about the “black-box” nature of some of these AI models, or the inability to understand—and explain—the inner workings of the technology. Some critics argue that AI algorithms must be explainable to be responsibly used in the clinical encounter, while supporters (...)
  • Machine learning applications in healthcare and the role of informed consent: Ethical and practical considerations. Giorgia Lorenzini, David Martin Shaw, Laura Arbelaez Ossa & Bernice Simone Elger - forthcoming - Clinical Ethics.
    Informed consent is at the core of the clinical relationship. With the introduction of machine learning in healthcare, the role of informed consent is challenged. This paper addresses the issue of whether patients must be informed about medical ML applications and asked for consent. It aims to expose the discrepancy between ethical and practical considerations, while arguing that this polarization is a false dichotomy: in reality, ethics is applied to specific contexts and situations. Bridging this gap and considering the whole (...)
  • Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth parents (...)
  • “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts. Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
  • Enabling Fairness in Healthcare Through Machine Learning. Geoff Keeling & Thomas Grote - 2022 - Ethics and Information Technology 24 (3):1-13.
    The use of machine learning systems for decision-support in healthcare may exacerbate health inequalities. However, recent work suggests that algorithms trained on sufficiently diverse datasets could in principle combat health inequalities. One concern about these algorithms is that their performance for patients in traditionally disadvantaged groups exceeds their performance for patients in traditionally advantaged groups. This renders the algorithmic decisions unfair relative to the standard fairness metrics in machine learning. In this paper, we defend the permissible use of affirmative algorithms; (...)
  • Ethics parallel research: an approach for (early) ethical guidance of biomedical innovation. Karin R. Jongsma & Annelien L. Bredenoord - 2020 - BMC Medical Ethics 21 (1):1-9.
    Background: Our human societies and certainly also (bio)medicine are more and more permeated with technology. There seems to be an increasing awareness among bioethicists that an effective and comprehensive approach to ethically guide these emerging biomedical innovations into society is needed. Such an approach has not been spelled out yet for bioethics, while there are frequent calls for ethical guidance of biomedical innovation, also by biomedical researchers themselves. New and emerging biotechnologies require anticipation of possible effects and implications, meaning the (...)
  • Meaningful Human Control over AI for Health? A Review. Eva Maria Hille, Patrik Hummel & Matthias Braun - forthcoming - Journal of Medical Ethics.
    Artificial intelligence is currently changing many areas of society. Especially in health, where critical decisions are made, questions of control must be renegotiated: who is in control when an automated system makes clinically relevant decisions? Increasingly, the concept of meaningful human control (MHC) is being invoked for this purpose. However, it is unclear exactly how this concept is to be understood in health. Through a systematic review, we present the current state of the concept of MHC in health. The results (...)
  • On the Ethical and Epistemological Utility of Explicable AI in Medicine. Christian Herzog - 2022 - Philosophy and Technology 35 (2):1-31.
    In this article, I will argue in favor of both the ethical and epistemological utility of explanations in artificial intelligence-based medical technology. I will build on the notion of “explicability” due to Floridi, which considers both the intelligibility and accountability of AI systems to be important for truly delivering AI-powered services that strengthen autonomy, beneficence, and fairness. I maintain that explicable algorithms do, in fact, strengthen these ethical principles in medicine, e.g., in terms of direct patient–physician contact, as well (...)
  • The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and applied (...)
  • On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
  • On algorithmic fairness in medical practice. Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and fairness in healthcare. (...)
  • Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and the interpretability by design. Comparing the three strategies, I argue that (...)
  • Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  • AI-driven decision support systems and epistemic reliance: a qualitative study on obstetricians’ and midwives’ perspectives on integrating AI-driven CTG into clinical decision making. Rachel Dlugatch, Antoniya Georgieva & Angeliki Kerasidou - 2024 - BMC Medical Ethics 25 (1):1-11.
    Background: Given that AI-driven decision support systems (AI-DSS) are intended to assist in medical decision making, it is essential that clinicians are willing to incorporate AI-DSS into their practice. This study takes as a case study the use of AI-driven cardiotocography (CTG), a type of AI-DSS, in the context of intrapartum care. Focusing on the perspectives of obstetricians and midwives regarding the ethical and trust-related issues of incorporating AI-driven tools in their practice, this paper explores the conditions that AI-driven CTG (...)
  • Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
    I analyse an argument according to which medical artificial intelligence represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: it confuses AI with machine learning; it misses machine learning’s potential for personalised medicine through big data; it fails to distinguish between evidence-based (...)
  • Watson, autonomy and value flexibility: revisiting the debate. Jasper Debrabander & Heidi Mertes - 2022 - Journal of Medical Ethics 48 (12):1043-1047.
    Many ethical concerns have been voiced about Clinical Decision Support Systems (CDSSs). Special attention has been paid to the effect of CDSSs on autonomy, responsibility, fairness and transparency. This journal has featured a discussion between Rosalind McDougall and Ezio Di Nucci that focused on the impact of IBM’s Watson for Oncology (Watson) on autonomy. The present article elaborates on this discussion in three ways. First, using Jonathan Pugh’s account of rational autonomy we show that how Watson presents its results might (...)
  • Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  • Embedded ethics: a proposal for integrating ethics into the development of medical AI. Alena Buyx, Sami Haddadin, Ruth Müller, Daniel Tigard, Amelia Fiske & Stuart McLennan - 2022 - BMC Medical Ethics 23 (1):1-10.
    The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this (...)
  • Primer on an ethics of AI-based decision support systems in the clinic. Matthias Braun, Patrik Hummel, Susanne Beck & Peter Dabrock - 2021 - Journal of Medical Ethics 47 (12):3-3.
    Making good decisions in extremely complex and difficult processes and situations has always been both a key task as well as a challenge in the clinic and has led to a large amount of clinical, legal and ethical routines, protocols and reflections in order to guarantee fair, participatory and up-to-date pathways for clinical decision-making. Nevertheless, the complexity of processes and physical phenomena, time as well as economic constraints and not least further endeavours as well as achievements in medicine and healthcare (...)
  • Artificial Intelligence and Patient-Centered Decision-Making. Jens Christian Bjerring & Jacob Busch - 2020 - Philosophy and Technology 34 (2):349-371.
    Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, (...)
  • AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)