  • The Four Fundamental Components for Intelligibility and Interpretability in AI Ethics. Moto Kamiura - forthcoming - American Philosophical Quarterly.
    Intelligibility and interpretability related to artificial intelligence (AI) are crucial for enabling explicability, which is vital for establishing constructive communication and agreement among various stakeholders, including users and designers of AI. It is essential to overcome the challenges of sharing an understanding of the details of the various structures of diverse AI systems, to facilitate effective communication and collaboration. In this paper, we propose four fundamental terms: “I/O,” “Constraints,” “Objectives,” and “Architecture.” These terms help mitigate the challenges associated with intelligibility (...)
  • Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    BackgroundHealthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use.MethodsPubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Bioethical Boundaries, Critiques of Current Paradigms, and the Importance of Transparency. J. Clint Parker - 2022 - Journal of Medicine and Philosophy 47 (1):1-17.
    This issue of The Journal of Medicine and Philosophy is dedicated to topics in clinical ethics with essays addressing clinician participation in state sponsored execution, duties to decrease ecological footprints in medicine, the concept of caring and its relationship to conscientious refusal, the dilemmas involved in dual use research, a philosophical and practical critique of principlism, conundrums that arise when applying surrogate decision-making models to patients with moderate intellectual disabilities, the phenomenology of chronic disease, and ethical concerns surrounding the use (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • AI in medicine: recommendations for social and humanitarian expertise. Е. В Брызгалина, А. Н Гумарова & Е. М Шкомова - 2023 - Siberian Journal of Philosophy 21 (1):51-63.
    The article presents specific recommendations, developed by the authors, for the examination of AI systems in medicine. The recommendations are based on the problems, risks, and limitations of the use of AI identified in scientific and philosophical publications from 2019-2022. It is proposed to carry out an ethical review of medical AI projects, by analogy with the review of experimental research projects in biomedicine, and to conduct an ethical review of AI systems at the stage of preparation for their development followed (...)
  • Defending explicability as a principle for the ethics of artificial intelligence in medicine. Jonathan Adams - 2023 - Medicine, Health Care and Philosophy 26 (4):615-623.
    The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new ‘principle of explicability’ alongside the traditional four principles of bioethics that make up the theory of ‘principlism’. It specifically responds to a recent set of criticisms that (...)