  1.
    Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) (...)
  3.
    Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - 2024 - Cambridge Quarterly of Healthcare Ethics 33 (3):360-369.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  4.
    Towards a pragmatist dealing with algorithmic bias in medical machine learning. Georg Starke, Eva De Clercq & Bernice S. Elger - 2021 - Medicine, Health Care and Philosophy 24 (3):341-349.
    Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous (...)
  5.
    “Waking up” the sleeping metaphor of normality in connection to intersex or DSD: a scoping review of medical literature. Eva De Clercq, Georg Starke & Michael Rost - 2022 - History and Philosophy of the Life Sciences 44 (4):1-37.
    The aim of the study is to encourage a critical debate on the use of normality in the medical literature on DSD or intersex. For this purpose, a scoping review was conducted to identify and map the various ways in which “normal” is used in the medical literature on DSD between 2016 and 2020. We identified 75 studies, many of which were case studies highlighting rare cases of DSD; others, mainly retrospective observational studies, focused on improving diagnosis or treatment. The (...)
  6.
    Karl Jaspers and artificial neural nets: on the relation of explaining and understanding artificial intelligence in medicine. Christopher Poppe & Georg Starke - 2022 - Ethics and Information Technology 24 (3):1-10.
    Assistive systems based on Artificial Intelligence (AI) are bound to reshape decision-making in all areas of society. One of the most intricate challenges arising from their implementation in high-stakes environments such as medicine concerns their frequently unsatisfying levels of explainability, especially in the guise of the so-called black-box problem: highly successful models based on deep learning seem to be inherently opaque, resisting comprehensive explanations. This may explain why some scholars claim that research should focus on rendering AI systems understandable, rather (...)
  7.
    Qualitative studies involving users of clinical neurotechnology: a scoping review. Georg Starke, Tugba Basaran Akmazoglu, Annalisa Colucci, Mareike Vermehren, Amanda van Beinum, Maria Buthut, Surjo R. Soekadar, Christoph Bublitz, Jennifer A. Chandler & Marcello Ienca - 2024 - BMC Medical Ethics 25 (1):1-14.
    Background The rise of a new generation of intelligent neuroprostheses, brain-computer interfaces (BCI) and adaptive closed-loop brain stimulation devices hastens the clinical deployment of neurotechnologies to treat neurological and neuropsychiatric disorders. However, it remains unclear how these nascent technologies may impact the subjective experience of their users. To inform this debate, it is crucial to have a solid understanding of how more established current technologies already affect their users. In recent years, researchers have used qualitative research methods to explore the subjective (...)
  8.
    Potentially Perilous Preference Parrots: Why Digital Twins Do Not Respect Patient Autonomy. Georg Starke & Ralf J. Jox - 2024 - American Journal of Bioethics 24 (7):43-45.
    The debate about the chances and dangers of a patient preference predictor (PPP) has been lively ever since Annette Rid and David Wendler proposed this fascinating idea ten years ago. Given the tec...
  9.
    Playing Brains: The Ethical Challenges Posed by Silicon Sentience and Hybrid Intelligence in DishBrain. Stephen R. Milford, David Shaw & Georg Starke - 2023 - Science and Engineering Ethics 29 (6):1-17.
    The convergence of human and artificial intelligence is currently receiving considerable scholarly attention. Much debate about the resulting _Hybrid Minds_ focuses on the integration of artificial intelligence into the human brain through intelligent brain-computer interfaces as they enter clinical use. In this contribution we discuss a complementary development: the integration of a functional in vitro network of human neurons into an _in silico_ computing environment. To do so, we draw on a recent experiment reporting the creation of silico-biological intelligence as (...)
  10.
    Machine learning and its impact on psychiatric nosology: Findings from a qualitative study among German and Swiss experts. Georg Starke, Bernice Simone Elger & Eva De Clercq - 2023 - Philosophy and the Mind Sciences 4.
    The increasing integration of Machine Learning (ML) techniques into clinical care, driven in particular by Deep Learning (DL) using Artificial Neural Nets (ANNs), promises to reshape medical practice on various levels and across multiple medical fields. Much recent literature examines the ethical consequences of employing ML within medical and psychiatric practice but the potential impact on psychiatric diagnostic systems has so far not been well-developed. In this article, we aim to explore the challenges that arise from the recent use of (...)