  • Surrogate Perspectives on a Patient Preference Predictor: Good Idea, But I Should Decide How It Is Used. Dana Howard - 2022 - AJOB Empirical Bioethics 13 (2):125-135.
    Background: Current practice frequently fails to provide care consistent with the preferences of decisionally-incapacitated patients. It also imposes significant emotional burden on their surrogates. Algorithmic-based patient preference predictors (PPPs) have been proposed as a possible way to address these two concerns. While previous research found that patients strongly support the use of PPPs, the views of surrogates are unknown. The present study thus assessed the views of experienced surrogates regarding the possible use of PPPs as a means to help make (...)
  • The ethics of biomedical military research: Therapy, prevention, enhancement, and risk. Alexandre Erler & Vincent C. Müller - 2021 - In Daniel Messelken & David Winkler (eds.), Health Care in Contexts of Risk, Uncertainty, and Hybridity. Springer. pp. 235-252.
    What proper role should considerations of risk, particularly to research subjects, play when it comes to conducting research on human enhancement in the military context? We introduce the currently visible military enhancement techniques (1) and the standard discussion of risk for these (2), in particular what we refer to as the ‘Assumption’, which states that the demands for risk-avoidance are higher for enhancement than for therapy. We challenge the Assumption through the introduction of three categories of enhancements (3): therapeutic, preventive, (...)
  • Health Care in Contexts of Risk, Uncertainty, and Hybridity. Daniel Messelken & David Winkler (eds.) - 2021 - Springer.
    This book sheds light on various ethical challenges military and humanitarian health care personnel face while working in adverse conditions. Contexts of armed conflict, hybrid wars or other forms of violence short of war, as well as natural disasters, all have in common that ordinary circumstances can no longer be taken for granted. Hence, the provision of health care has to adapt, for example, to a different level of risk, to scarce resources, or uncommon approaches due to external incentives or (...)
  • A new method for making treatment decisions for incapacitated patients: what do patients think about the use of a patient preference predictor? David Wendler, Bob Wesley, Mark Pavlick & Annette Rid - 2016 - Journal of Medical Ethics 42 (4):235-241.
  • Patient preference predictors and the problem of naked statistical evidence. Nathaniel Paul Sharadin - 2018 - Journal of Medical Ethics 44 (12):857-862.
    Patient preference predictors (PPPs) promise to provide medical professionals with a new solution to the problem of making treatment decisions on behalf of incapacitated patients. I show that the use of PPPs faces a version of a normative problem familiar from legal scholarship: the problem of naked statistical evidence. I sketch two sorts of possible reply, vindicating and debunking, and suggest that our reply to the problem in the one domain ought to mirror our reply in the other. The conclusion (...)
  • Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI), in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  • Should Artificial Intelligence be used to support clinical ethical decision-making? A systematic review of reasons. Sabine Salloch, Tim Kacprowski, Wolf-Tilo Balke, Frank Ursin & Lasse Benzinger - 2023 - BMC Medical Ethics 24 (1):1-9.
    Background: Healthcare providers have to make ethically complex clinical decisions which may be a source of stress. Researchers have recently introduced Artificial Intelligence (AI)-based applications to assist in clinical ethical decision-making. However, the use of such tools is controversial. This review aims to provide a comprehensive overview of the reasons given in the academic literature for and against their use. Methods: PubMed, Web of Science, Philpapers.org and Google Scholar were searched for all relevant publications. The resulting set of publications was title and abstract (...)
  • Künstliche Intelligenz in der Ethik? [Artificial intelligence in ethics?] Sabine Salloch - 2023 - Ethik in der Medizin 35 (3):337-340.
  • Will a Patient Preference Predictor Improve Treatment Decision Making for Incapacitated Patients? Annette Rid - 2014 - Journal of Medicine and Philosophy 39 (2):99-103.
  • Use of a Patient Preference Predictor to Help Make Medical Decisions for Incapacitated Patients. A. Rid & D. Wendler - 2014 - Journal of Medicine and Philosophy 39 (2):104-129.
    The standard approach to treatment decision making for incapacitated patients often fails to provide treatment consistent with the patient’s preferences and values and places significant stress on surrogate decision makers. These shortcomings provide compelling reason to search for methods to improve current practice. Shared decision making between surrogates and clinicians has important advantages, but it does not provide a way to determine patients’ treatment preferences. Hence, shared decision making leaves families with the stressful challenge of identifying the patient’s preferred treatment (...)
  • Toward a sociology of finitude: life, death, and the question of limits. Roi Livne - 2021 - Theory and Society 50 (6):891-934.
    Progressing beyond the given has been a key modern tendency. Yet modern societies are currently facing the problem of how to put limits on progress, expansion, and growth, live within them, and preserve (rather than transcend) the present. Drawing on economic sociology scholarship on valuation and morality in economic life, this article develops and applies the term economization to analyze the enactment of limits on progress. The question of end-of-life care—when to stop medical efforts to prolong life, postpone death, and (...)
  • Improving Medical Decisions for Incapacitated Persons: Does Focusing on “Accurate Predictions” Lead to an Inaccurate Picture? Scott Y. H. Kim - 2014 - Journal of Medicine and Philosophy 39 (2):187-195.
    The Patient Preference Predictor (PPP) proposal places a high priority on the accuracy of predicting patients’ preferences and finds the performance of surrogates inadequate. However, the quest to develop a highly accurate, individualized statistical model has significant obstacles. First, it will be impossible to validate the PPP beyond the limit imposed by 60%–80% reliability of people’s preferences for future medical decisions—a figure no better than the known average accuracy of surrogates. Second, evidence supports the view that a sizable minority of (...)
  • Patients’ Priorities for Surrogate Decision-Making: Possible Influence of Misinformed Beliefs. E. J. Jardas, Robert Wesley, Mark Pavlick, David Wendler & Annette Rid - 2022 - AJOB Empirical Bioethics 13 (3):137-151.
  • Autonomy-based criticisms of the patient preference predictor. E. J. Jardas, David Wasserman & David Wendler - 2022 - Journal of Medical Ethics 48 (5):304-310.
    The patient preference predictor (PPP) is a proposed computer-based algorithm that would predict the treatment preferences of decisionally incapacitated patients. Incorporation of a PPP into the decision-making process has the potential to improve implementation of the substituted judgement standard by providing more accurate predictions of patients’ treatment preferences than reliance on surrogates alone. Yet, critics argue that methods for making treatment decisions for incapacitated patients should be judged on a number of factors beyond simply providing them with the treatments they would (...)
  • Surrogate Perspectives on Patient Preference Predictors: Good Idea, but I Should Decide How They Are Used. Dana Howard, Allan Rivlin, Philip Candilis, Neal W. Dickert, Claire Drolen, Benjamin Krohmal, Mark Pavlick & David Wendler - 2022 - AJOB Empirical Bioethics 13 (2):125-135.
  • Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
  • A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable. Brian D. Earp, Sebastian Porsdam Mann, Jemima Allen, Sabine Salloch, Vynn Suren, Karin Jongsma, Matthias Braun, Dominic Wilkinson, Walter Sinnott-Armstrong, Annette Rid, David Wendler & Julian Savulescu - forthcoming - American Journal of Bioethics:1-14.
    When making substituted judgments for incapacitated patients, surrogates often struggle to guess what the patient would want if they had capacity. Surrogates may also agonize over having the (sole) responsibility of making such a determination. To address such concerns, a Patient Preference Predictor (PPP) has been proposed that would use an algorithm to infer the treatment preferences of individual patients from population-level data about the known preferences of people with similar demographic characteristics. However, critics have suggested that even if such (...)
  • Should Artificial Intelligence Augment Medical Decision Making? The Case for an Autonomy Algorithm. Camillo Lamanna - 2018 - AMA Journal of Ethics 20 (9):E902-910.
    A significant proportion of elderly and psychiatric patients do not have the capacity to make health care decisions. We suggest that machine learning technologies could be harnessed to integrate data mined from electronic health records (EHRs) and social media in order to estimate the confidence of the prediction that a patient would consent to a given treatment. We call this process, which takes data about patients as input and derives a confidence estimate for a particular patient’s predicted health care-related decision (...)
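Several of the entries above (e.g., Rid & Wendler 2014; Lamanna 2018; Earp et al., forthcoming) describe a patient preference predictor as a model that infers an incapacitated patient’s likely treatment preference from data about people with similar characteristics, and that reports a confidence estimate alongside the prediction. The sketch below is only a minimal illustration of that general input-output structure, using synthetic survey data and hypothetical features (age, religiosity, prior ICU stay); it does not reproduce any model proposed or evaluated in these papers.

```python
# Minimal illustrative sketch of a "patient preference predictor" (PPP).
# Assumptions: synthetic data, hypothetical features, and a plain
# logistic-regression model; this is not any published PPP.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical population-level survey: demographic features and each
# respondent's stated preference for a life-sustaining intervention (1 = yes).
n = 500
age = rng.uniform(30, 90, n)
religiosity = rng.uniform(0, 1, n)      # hypothetical 0-1 scale
prior_icu_stay = rng.integers(0, 2, n)  # 0 = no, 1 = yes
X = np.column_stack([age, religiosity, prior_icu_stay])

# Synthetic ground truth: preferences loosely tied to the features.
logits = -0.03 * (age - 60) + 1.5 * religiosity - 0.5 * prior_icu_stay
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# A new, decisionally incapacitated patient described by the same features.
patient = np.array([[78, 0.9, 1]])
p_prefers_treatment = model.predict_proba(patient)[0, 1]

# The predicted probability plays the role of the "confidence estimate"
# discussed in the abstracts above.
print(f"Predicted probability the patient would consent: {p_prefers_treatment:.2f}")
```

Any real predictor would of course require validated clinical data, rigorous evaluation, and the ethical safeguards debated in the works above; the sketch is meant only to make concrete what taking data about patients as input and deriving a confidence estimate for a predicted decision could look like.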