13 found
  1.
    In AI We Trust Incrementally: A Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition to a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
    15 citations
  2.
    Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence, if one refrains from simply assuming that trust describes human–human interactions. To (...)
    10 citations
  3.
    AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
    6 citations
  4.
    Transparency as design publicity: explaining and justifying inscrutable algorithms. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2020 - Ethics and Information Technology 23 (3):253-263.
    In this paper we argue that transparency of machine learning algorithms, just like explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency, which consists in (...)
    12 citations
  5.
    Ethics of the algorithmic prediction of goal of care preferences: from theory to practice. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):165-174.
    Artificial intelligence (AI) systems are quickly gaining ground in healthcare and clinical decision-making. However, it is still unclear in what way AI can or should support decision-making that is based on incapacitated patients’ values and goals of care, which often requires input from clinicians and loved ones. Although the use of algorithms to predict patients’ most likely preferred treatment has been discussed in the medical ethics literature, no example has been realised in clinical practice. This is due, arguably, to the (...)
    11 citations
  6.
    Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 can identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be (...)
    1 citation
  7.
    In Search of a Mission: Artificial Intelligence in Clinical Ethics. Nikola Biller-Andorno, Andrea Ferrario & Sophie Gloeckler - 2022 - American Journal of Bioethics 22 (7):23-25.
    Artificial intelligence has found its way into many areas of human life, serving a range of purposes. Sometimes AI tools are designed to help humans eliminate high-volume, tedious, routine tasks...
    3 citations
  8.
    Experts or Authorities? The Strange Case of the Presumed Epistemic Superiority of Artificial Intelligence Systems. Andrea Ferrario, Alessandro Facchini & Alberto Termine - manuscript
    The high predictive accuracy of contemporary machine learning-based AI systems has led some scholars to argue that, in certain cases, we should grant them epistemic expertise and authority over humans. This approach suggests that humans would have the epistemic obligation of relying on the predictions of a highly accurate AI system. Contrary to this view, in this work we claim that it is not possible to endow AI systems with a genuine account of epistemic expertise. In fact, relying on accounts (...)
    1 citation
  9.
    Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  10.
    AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’, we aimed to ignite a critical discussion of why and how to design artificial intelligence (AI) systems that assist clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
    1 citation
  11.
    Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
    1 citation
  12.
    Justifying our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - manuscript
    We address an open problem in the epistemology of artificial intelligence (AI), namely, the justification of the epistemic attitudes we have towards the trustworthiness of AI systems. We start from a key consideration: the trustworthiness of an AI is a time-relative property of the system, with two distinct facets. One is the actual trustworthiness of the AI, and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, (...)
  13.
    How much do you trust me? A logico-mathematical analysis of the concept of the intensity of trust. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2023 - Synthese 201 (6):1-30.
    Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including q-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accounts of trust that attempt to identify the special ingredient distinguishing trust from mere reliance, our theory characterizes trust as a quantitative property of certain relations of reliance that (...)
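    For readers unfamiliar with q-exponentials, the following is a standard (Tsallis-type) definition, given here as background only; the abstract does not specify which family of q-exponentials the paper adopts, so the exact form used in the article may differ:

    \[
      e_q(x) \;=\; \bigl[\, 1 + (1 - q)\, x \,\bigr]_{+}^{\frac{1}{1 - q}}, \qquad q \neq 1,
      \qquad \text{with} \qquad \lim_{q \to 1} e_q(x) = e^{x},
    \]

    where \([z]_{+} = \max(z, 0)\). For \(q > 1\) and \(x < 0\), \(e_q(x)\) decays as a power law rather than exponentially, which makes q-exponentials a natural choice for modelling quantities, such as levels of trust plotted against monitoring costs, that fall off more slowly (or more quickly) than a standard exponential.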