9 found
Joshua Hatherley (Aarhus University)
Disambiguations: Joshua Hatherley [5], Joshua James Hatherley [4]
  1. Limits of trust in medical AI. Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
    Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI (...)
    21 citations.
  2. Is the exclusion of psychiatric patients from access to physician-assisted suicide discriminatory? Joshua James Hatherley - 2019 - Journal of Medical Ethics 45 (12):817-820.
    Advocates of physician-assisted suicide (PAS) often argue that, although the provision of PAS is morally permissible for persons with terminal, somatic illnesses, it is impermissible for patients suffering from psychiatric conditions. This claim is justified on the basis that psychiatric illnesses have certain morally relevant characteristics and/or implications that distinguish them from their somatic counterparts. In this paper, I address three arguments of this sort. First, that psychiatric conditions compromise a person’s decision-making capacity. Second, that we cannot have sufficient certainty that (...)
    7 citations.
  3. The promise and perils of AI in medicine. Robert Sparrow & Joshua James Hatherley - 2019 - International Journal of Chinese and Comparative Philosophy of Medicine 17 (2):79-109.
    What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices (...)
    3 citations.
  4. High hopes for “Deep Medicine”? AI, economics, and the future of care. Robert Sparrow & Joshua Hatherley - 2020 - Hastings Center Report 50 (1):14-17.
    In Deep Medicine, Eric Topol argues that the development of artificial intelligence (AI) for healthcare will lead to a dramatic shift in the culture and practice of medicine. Topol claims that, rather than replacing physicians, AI could function alongside them in order to allow them to devote more of their time to face-to-face patient care. Unfortunately, these high hopes for AI-enhanced medicine fail to appreciate a number of factors that, we believe, suggest a radically different picture for the future (...)
    2 citations.
  5. Generative AI entails a credit–blame asymmetry. Sebastian Porsdam Mann, Brian D. Earp, Sven Nyholm, John Danaher, Nikolaj Møller, Hilary Bowman-Smart, Joshua Hatherley, Julian Koplin, Monika Plozza, Daniel Rodger, Peter V. Treit, Gregory Renard, John McMillan & Julian Savulescu - 2023 - Nature Machine Intelligence 5 (5):472-475.
    Generative AI programs can produce high-quality written and visual content that may be used for good or ill. We argue that a credit–blame asymmetry arises for assigning responsibility for these outputs and discuss urgent ethical and policy implications focused on large-scale language models.
  6. The virtues of interpretable medical artificial intelligence. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are ‘black boxes’. The initial response in the literature was a demand for ‘explainable AI’. However, recently, several authors have suggested that making AI more explainable or ‘interpretable’ is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a ‘lethal prejudice’. In this paper, we defend the value of interpretability (...)
  7. Diachronic and synchronic variation in the performance of adaptive machine learning systems: the ethical challenges. Joshua Hatherley & Robert Sparrow - 2023 - Journal of the American Medical Informatics Association 30 (2):361-366.
    Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature.
    Target audience: The target audiences for this tutorial are (...)
  8. The Virtues of Interpretable Medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability (...)
  9. Medical assistance in dying for the psychiatrically ill: Reply to Buturovic. Joshua James Hatherley - 2021 - Journal of Medical Ethics 47 (4):259-260.