Results for 'Medical AI'

997 found
  1. Clinical internship environment and caring behaviours among nursing students: A moderated mediation model. Zhuo-er Huang, Xing Qiu, Ya-Qian Fu, Ai-di Zhang, Hui Huang, Jia Liu, Jin Yan & Qi-Feng Yi - forthcoming - Nursing Ethics.
    Background Caring behaviour is critical for nursing quality, and the clinical internship environment is a crucial setting for preparing nursing students for caring behaviours. Evidence about how to develop nursing students’ caring behaviour in the clinical environment is still emerging. However, the mechanism between the clinical internship environment and caring behaviour remains unclear, especially the mediating role of moral sensitivity and the moderating effect of self-efficacy. Research objective This study aimed to examine the mediating effect of moral sensitivity and the (...)
  2. Religious Perspectives on Precision Medicine in Singapore. Tamra Lysaght, Zhixia Tan, You Guang Shi, Swami Samachittananda, Sarabjeet Singh, Roland Chia, Raza Zaidi, Malminderjit Singh, Hung Yong Tay, Chitra Sankaran, Serene Ai Kiang Ong, Angela Ballantyne & Hui Jin Toh - 2021 - Asian Bioethics Review 13 (4):473-483.
    Precision medicine (PM) aims to revolutionise healthcare, but little is known about the role religion and spirituality might play in the ethical discourse about PM. This Perspective reports the outcomes of a knowledge exchange fora with religious authorities in Singapore about data sharing for PM. While the exchange did not identify any foundational religious objections to PM, ethical concerns were raised about the possibility for private industry to profiteer from social resources and the potential for genetic discrimination by private health (...)
  3. Medical AI: Is Trust Really the Issue? Jakob Thrane Mainz - forthcoming - Journal of Medical Ethics.
    I discuss an influential argument put forward by Joshua Hatherley. Drawing on influential philosophical accounts of inter-personal trust, Hatherley claims that medical Artificial Intelligence is capable of being reliable, but not trustworthy. Furthermore, Hatherley argues that trust generates moral obligations on behalf of the trustee. For instance, when a patient trusts a clinician, it generates certain moral obligations on behalf of the clinician for her to do what she is entrusted to do. I make three objections to Hatherley’s claims: (...)
  4. Ethics of Medical AI. Giovanni Rubeis - 2024 - Springer Verlag.
    This is the first book to provide a coherent overview over the ethical implications of AI-related technologies in medicine. It explores how these technologies transform practices, relationships, and environments in the clinical field. It provides an introduction into ethical issues such as data security and privacy protection, bias and algorithmic fairness, trust and transparency, challenges to the doctor-patient relationship, and new perspectives for informed consent. The book focuses on the transformative impact that technology is having on medicine, and discusses several (...)
  5. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a procedure (...)
  6. Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how can (...)
  7. Medical AI and human dignity: Contrasting perceptions of human and artificially intelligent (AI) decision making in diagnostic and medical resource allocation contexts. Paul Formosa, Wendy Rogers, Yannick Griep, Sarah Bankins & Deborah Richards - 2022 - Computers in Human Behavior 133.
    Forms of Artificial Intelligence (AI) are already being deployed into clinical settings and research into its future healthcare uses is accelerating. Despite this trajectory, more research is needed regarding the impacts on patients of increasing AI decision making. In particular, the impersonal nature of AI means that its deployment in highly sensitive contexts-of-use, such as in healthcare, raises issues associated with patients’ perceptions of (un) dignified treatment. We explore this issue through an experimental vignette study comparing individuals’ perceptions of being (...)
  8. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
  9. Randomized Controlled Trials in Medical AI. Konstantin Genin & Thomas Grote - 2021 - Philosophy of Medicine 2 (1).
    Various publications claim that medical AI systems perform as well, or better, than clinical experts. However, there have been very few controlled trials and the quality of existing studies has been called into question. There is growing concern that existing studies overestimate the clinical benefits of AI systems. This has led to calls for more, and higher-quality, randomized controlled trials of medical AI systems. While this is a welcome development, AI RCTs raise novel methodological challenges that have seen little (...)
  10. Randomised controlled trials in medical AI: ethical considerations. Thomas Grote - 2022 - Journal of Medical Ethics 48 (11):899-906.
    In recent years, there has been a surge of high-profile publications on applications of artificial intelligence (AI) systems for medical diagnosis and prognosis. While AI provides various opportunities for medical practice, there is an emerging consensus that the existing studies show considerable deficits and are unable to establish the clinical benefit of AI systems. Hence, the view that the clinical benefit of AI systems needs to be studied in clinical trials—particularly randomised controlled trials (RCTs)—is gaining ground. However, an (...)
  11. The Ethics of Medical AI and the Physician-Patient Relationship. Sally Dalton-Brown - 2020 - Cambridge Quarterly of Healthcare Ethics 29 (1):115-121.
    This article considers recent ethical topics relating to medical AI. After a general discussion of recent medical AI innovations, and a more analytic look at related ethical issues such as data privacy, physician dependency on poorly understood AI helpware, bias in data used to create algorithms post-GDPR, and changes to the patient–physician relationship, the article examines the issue of so-called robot doctors. Whereas the so-called democratization of healthcare due to health wearables and increased access to medical information (...)
  12. Two Reasons for Subjecting Medical AI Systems to Lower Standards than Humans. Jakob Mainz, Jens Christian Bjerring & Lauritz Munch - 2023 - ACM Proceedings of Fairness, Accountability, and Transparency (FAccT) 2023 1 (1):44-49.
    This paper concerns the double standard debate in the ethics of AI literature. This debate essentially revolves around the question of whether we should subject AI systems to different normative standards than humans. So far, the debate has centered around the desideratum of transparency. That is, the debate has focused on whether AI systems must be more transparent than humans in their decision-making processes in order for it to be morally permissible to use such systems. Some have argued that the (...)
  13. The Virtues of Interpretable Medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of (...)
  14. Secondary Use of Health Data for Medical AI: A Cross-Regional Examination of Taiwan and the EU. Chih-Hsing Ho - forthcoming - Asian Bioethics Review:1-16.
    This paper conducts a comparative analysis of data governance mechanisms concerning the secondary use of health data in Taiwan and the European Union (EU). Both regions have adopted distinctive approaches and regulations for utilizing health data beyond primary care, encompassing areas such as medical research and healthcare system enhancement. Through an examination of these models, this study seeks to elucidate the strategies, frameworks, and legal structures employed by Taiwan and the EU to strike a delicate balance between the imperative (...)
  15. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools (...)
  16. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  17. Computer knows best? The need for value-flexibility in medical AI. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (3):156-160.
    Artificial intelligence is increasingly being developed for use in medicine, including for diagnosis and in treatment decision making. The use of AI in medical treatment raises many ethical issues that are yet to be explored in depth by bioethicists. In this paper, I focus specifically on the relationship between the ethical ideal of shared decision making and AI systems that generate treatment recommendations, using the example of IBM’s Watson for Oncology. I argue that use of this type of system (...)
  18. Responsibility beyond design: Physicians’ requirements for ethical medical AI. Martin Sand, Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Bioethics 36 (2):162-169.
    Bioethics, Volume 36, Issue 2, Page 162-169, February 2022.
  19. Beyond ideals: why the (medical) AI industry needs to motivate behavioural change in line with fairness and transparency values, and how it can do it. Alice Liefgreen, Netta Weinstein, Sandra Wachter & Brent Mittelstadt - forthcoming - AI and Society:1-17.
    Artificial intelligence (AI) is increasingly relied upon by clinicians for making diagnostic and treatment decisions, playing an important role in imaging, diagnosis, risk analysis, lifestyle monitoring, and health information management. While research has identified biases in healthcare AI systems and proposed technical solutions to address these, we argue that effective solutions require human engagement. Furthermore, there is a lack of research on how to motivate the adoption of these solutions and promote investment in designing AI systems that align with values (...)
  20. Embedded ethics: a proposal for integrating ethics into the development of medical AI. Alena Buyx, Sami Haddadin, Ruth Müller, Daniel Tigard, Amelia Fiske & Stuart McLennan - 2022 - BMC Medical Ethics 23 (1):1-10.
    The emergence of ethical concerns surrounding artificial intelligence (AI) has led to an explosion of high-level ethical principles being published by a wide range of public and private organizations. However, there is a need to consider how AI developers can be practically assisted to anticipate, identify and address ethical issues regarding AI technologies. This is particularly important in the development of AI intended for healthcare settings, where applications will often interact directly with patients in various states of vulnerability. In this (...)
  21. Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence, if one refrains from simply assuming that trust describes (...)
  22. Should we replace radiologists with deep learning? Pigeons, error and trust in medical AI. Ramón Alvarado - 2021 - Bioethics 36 (2):121-133.
    Bioethics, Volume 36, Issue 2, Page 121-133, February 2022.
  23. Relative explainability and double standards in medical decision-making: Should medical AI be subjected to higher standards in medical decision-making than doctors? Saskia K. Nagel, Jan-Christoph Heilinger & Hendrik Kempt - 2022 - Ethics and Information Technology 24 (2).
    The increased presence of medical AI in clinical use raises the ethical question which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods explainability is usually considered to contribute to the use of AI in general, and medical AI in specific. Second, we propose to (...)
  24. Ethical and legal challenges of medical AI on informed consent: China as an example. Yue Wang & Zhuo Ma - forthcoming - Developing World Bioethics.
    The escalating integration of Artificial Intelligence (AI) in clinical settings carries profound implications for the doctrine of informed consent, presenting challenges that necessitate immediate attention. China, in its advancement in the deployment of medical AI, is proactively engaging in the formulation of legal and ethical regulations. This paper takes China as an example to undertake a theoretical examination rooted in the principles of medical ethics and legal norms, analyzing informed consent and medical AI through relevant literature data. (...)
  25. Should we be afraid of medical AI? Ezio Di Nucci - 2019 - Journal of Medical Ethics 45 (8):556-558.
    I analyse an argument according to which medical artificial intelligence represents a threat to patient autonomy—recently put forward by Rosalind McDougall in the Journal of Medical Ethics. The argument takes the case of IBM Watson for Oncology to argue that such technologies risk disregarding the individual values and wishes of patients. I find three problems with this argument: it confuses AI with machine learning; it misses machine learning’s potential for personalised medicine through big data; it fails to distinguish (...)
  26. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  27. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified (...)
  28. Before and beyond trust: reliance in medical AI. Charalampia Kerasidou, Angeliki Kerasidou, Monika Buscher & Stephen Wilkinson - 2021 - Journal of Medical Ethics 48 (11):852-856.
    Artificial intelligence is changing healthcare and the practice of medicine as data-driven science and machine-learning technologies, in particular, are contributing to a variety of medical and clinical tasks. Such advancements have also raised many questions, especially about public trust. As a response to these concerns there has been a concentrated effort from public bodies, policy-makers and technology companies leading the way in AI to address what is identified as a "public trust deficit". This paper argues that a focus on (...)
  29. A Tale of Two Deficits: Causality and Care in Medical AI. Melvin Chen - 2020 - Philosophy and Technology 33 (2):245-267.
    In this paper, two central questions will be addressed: ought we to implement medical AI technology in the medical domain? If yes, how ought we to implement this technology? I will critically engage with three options that exist with respect to these central questions: the Neo-Luddite option, the Assistive option, and the Substitutive option. I will first address key objections on behalf of the Neo-Luddite option: the Objection from Bias, the Objection from Artificial Autonomy, the Objection from Status (...)
  30. Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about if, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems is sufficient for providing epistemic justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and (...)
  31. Generative AI, Specific Moral Values: A Closer Look at ChatGPT’s New Ethical Implications for Medical AI. Gavin Victor, Jean-Christophe Bélisle-Pipon & Vardit Ravitsky - 2023 - American Journal of Bioethics 23 (10):65-68.
    Cohen’s (2023) mapping exercise of possible bioethical issues emerging from the use of ChatGPT in medicine provides an informative, useful, and thought-provoking trigger for discussions of AI ethic...
  32. No we shouldn’t be afraid of medical AI; it involves risks and opportunities. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (8):559-559.
    In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
  33. Handle with care: Assessing performance measures of medical AI for shared clinical decision‐making. Sune Holm - 2021 - Bioethics 36 (2):178-186.
    In this article I consider two pertinent questions that practitioners must consider when they deploy an algorithmic system as support in clinical shared decision‐making. The first question concerns how to interpret and assess the significance of different performance measures for clinical decision‐making. The second question concerns the professional obligations that practitioners have to communicate information about the quality of an algorithm's output to patients in light of the principles of autonomy, beneficence, and justice. In the article I review the four (...)
  34. Who's next? Shifting balances between medical AI, physicians and patients in shaping the future of medicine. Nils-Frederic Wagner, Mita Banerjee & Norbert W. Paul - 2022 - Bioethics 36 (2):111-112.
    Bioethics, Volume 36, Issue 2, Page 111-112, February 2022.
  35. Correction to: A Tale of Two Deficits: Causality and Care in Medical AI. Melvin Chen - 2019 - Philosophy and Technology 32 (4):769-770.
    The original version of this article unfortunately contains unconverted data in footnotes 5, 9 and 13.
  36. Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. 1 Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards (...)
  37. Percentages and reasons: AI explainability and ultimate human responsibility within the medical field. Eva Winkler, Andreas Wabro & Markus Herrmann - 2024 - Ethics and Information Technology 26 (2).
    With regard to current debates on the ethical implementation of AI, especially two demands are linked: the call for explainability and for ultimate human responsibility. In the medical field, both are condensed into the role of one person: It is the physician to whom AI output should be explainable and who should thus bear ultimate responsibility for diagnostic or treatment decisions that are based on such AI output. In this article, we argue that a black box AI indeed creates (...)
  38. AI-Based Medical Solutions Can Threaten Physicians’ Ethical Obligations Only If Allowed to Do So. Benjamin Gregg - 2023 - American Journal of Bioethics 23 (9):84-86.
    Mildred Cho and Nicole Martinez-Martin (2023) distinguish between two of the ways in which humans can be represented in medical contexts. One is technical: a digital model of aspects of a person’s...
  39. Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  40. AI in Medical Practice. Karin Jongsma & Martin Sand - 2022 - In Ezio Di Nucci, Ji-Young Lee & Isaac A. Wagner (eds.), The Rowman & Littlefield Handbook of Bioethics. Lanham: Rowman & Littlefield Publishers.
     
  41. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has defended that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from outcome-oriented justice because it helps (...)
  42. What’s wrong with medical black box AI? Bert Gordijn & Henk ten Have - 2023 - Medicine, Health Care and Philosophy 26 (3):283-284.
  43. Evaluating the understanding of the ethical and moral challenges of Big Data and AI among Jordanian medical students, physicians in training, and senior practitioners: a cross-sectional study. Abdallah Al-Ani, Abdallah Rayyan, Ahmad Maswadeh, Hala Sultan, Ahmad Alhammouri, Hadeel Asfour, Tariq Alrawajih, Sarah Al Sharie, Fahed Al Karmi, Ahmad Azzam, Asem Mansour & Maysa Al-Hussaini - 2024 - BMC Medical Ethics 25 (1):1-14.
    Aims To examine the understanding of the ethical dilemmas associated with Big Data and artificial intelligence (AI) among Jordanian medical students, physicians in training, and senior practitioners. Methods We implemented a literature-validated questionnaire to examine the knowledge, attitudes, and practices of the target population during the period between April and August 2023. Themes of ethical debate included privacy breaches, consent, ownership, augmented biases, epistemology, and accountability. Participants’ responses were showcased using descriptive statistics and compared between groups using t-test or (...)
  44. Designing AI for mental health diagnosis: challenges from sub-Saharan African value-laden judgements on mental health disorders. Edmund Terem Ugar & Ntsumi Malele - forthcoming - Journal of Medical Ethics.
    Recently, clinicians have become more reliant on technologies such as artificial intelligence (AI) and machine learning (ML) for effective and accurate diagnosis and prognosis of diseases, especially mental health disorders. These remarks, however, apply primarily to Europe, the USA, China and other technologically developed nations. Africa is yet to leverage the potential applications of AI and ML within the medical space. Sub-Saharan African countries are currently disadvantaged economically and in terms of infrastructure. Yet precisely these circumstances create significant opportunities for the deployment (...)
  45. “I’m afraid I can’t let you do that, Doctor”: meaningful disagreements with AI in medical contexts.Hendrik Kempt, Jan-Christoph Heilinger & Saskia K. Nagel - forthcoming - AI and Society:1-8.
    This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems. With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do in case their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately lead to effects detrimental to the intended purpose of such machines, e.g. by shifting the burden of proof (...)
  46. Why we should (not) worry about generative AI in medical ethics teaching.Seppe Segers - 2024 - International Journal of Ethics Education 9 (1):57-63.
    In this article I discuss the ethical ramifications for medical ethics training of the availability of large language models (LLMs) for medical students. My focus is on the practical ethical consequences for what we should expect of medical students in terms of medical professionalism and ethical reasoning, and how this can be tested in a context where LLMs are readily available. If we continue to expect ethical competences of medical professionalism of future physicians, how (...)
  47. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated (...)
  48. AI, Opacity, and Personal Autonomy.Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive (...)
  49. Challenges and Controversies of Generative AI in Medical Diagnosis.Jordi Vallverdú - 2023 - Euphyía - Revista de Filosofía 17 (32):88-121.
    This paper provides a comprehensive exploration of the transformative role of generative AI models, specifically Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), in the realm of medical diagnosis. Drawing from the philosophy of medicine and epidemiology, the paper examines the technical, ethical, and philosophical dimensions of integrating generative models into healthcare. A case study featuring Emily underscores the pivotal support generative AI can offer in complex medical diagnoses. The discussion extends to the application of GANs and VAEs (...)
  50. Robots, AI, and Assisted Dying: Ethical and Philosophical Considerations.Ryan Tonkens - 2015 - In Michael Cholbi & Jukka Varelius (eds.), New Directions in the Ethics of Assisted Suicide and Euthanasia. Cham: Springer Verlag. pp. 279-298.
    The focus of this chapter is on some of the ethical and philosophical issues at the intersection of robotics and artificial intelligence (AI) applications in the health care sector and medical assistance in dying (e.g. physician-assisted suicide and euthanasia), including: (1) Is there a role for robotic systems/AI to play in the orchestration or delivery of assisted dying?; (2) Can the use of robotic systems/AI make the orchestration of assisted dying more ethical?; and (3) What insights can be generated (...)
1 — 50 / 997