  • Philosophy of education in a changing digital environment: an epistemological scope of the problem. Raigul Salimova, Jamilya Nurmanbetova, Maira Kozhamzharova, Mira Manassova & Saltanat Aubakirova - forthcoming - AI and Society:1-12.
    The relevance of this study's topic is supported by the argument that a philosophical understanding of the fundamental concepts of epistemology as they pertain to the educational process is crucial as the educational setting becomes increasingly digitalised. This paper aims to explore the epistemological component of the philosophy of learning in light of the educational process digitalisation. The research comprised a sample of 462 university students from Kazakhstan, with 227 participants assigned to the experimental and 235 to the control groups. (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Towards trustworthy blockchains: normative reflections on blockchain-enabled virtual institutions. Yan Teng - 2021 - Ethics and Information Technology 23 (3):385-397.
    This paper proposes a novel way to understand trust in blockchain technology by analogy with trust placed in institutions. In support of the analysis, a detailed investigation of institutional trust is provided, which is then used as the basis for understanding the nature and ethical limits of blockchain trust. Two interrelated arguments are presented. First, given blockchains’ capacity for being institution-like entities by inviting expectations similar to those invited by traditional institutions, blockchain trust is argued to be best conceptualized as (...)
  • Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust skeptics hold that talking about trust (...)
  • Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; and (2) (...)
  • How do people judge the credibility of algorithmic sources? Donghee Shin - 2022 - AI and Society 37 (1):81-96.
    The exponential growth of algorithms has made establishing a trusted relationship between human and artificial intelligence increasingly important. Algorithm systems such as chatbots can play an important role in assessing a user’s credibility on algorithms. Unless users believe the chatbot’s information is credible, they are not likely to be willing to act on the recommendation. This study examines how literacy and user trust influence perceptions of chatbot information credibility. Results confirm that algorithmic literacy and users’ trust play a pivotal role (...)
  • Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is exercised by AI (...)
  • How much do you trust me? A logico-mathematical analysis of the concept of the intensity of trust. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2023 - Synthese 201 (6):1-30.
    Trust and monitoring are traditionally antithetical concepts. Describing trust as a property of a relationship of reliance, we introduce a theory of trust and monitoring, which uses mathematical models based on two classes of functions, including q-exponentials, and relates the levels of trust to the costs of monitoring. As opposed to several accounts of trust that attempt to identify the special ingredient of reliance and trust relationships, our theory characterizes trust as a quantitative property of certain relations of reliance that (...)
  • Ethical Perceptions of AI in Hiring and Organizational Trust: The Role of Performance Expectancy and Social Influence. Maria Figueroa-Armijos, Brent B. Clark & Serge P. da Motta Veiga - 2023 - Journal of Business Ethics 186 (1):179-197.
    The use of artificial intelligence (AI) in hiring entails vast ethical challenges. As such, using an ethical lens to study this phenomenon is to better understand whether and how AI matters in hiring. In this paper, we examine whether ethical perceptions of using AI in the hiring process influence individuals’ trust in the organizations that use it. Building on the organizational trust model and the unified theory of acceptance and use of technology, we explore whether ethical perceptions are shaped by (...)
  • Trust does not need to be human: it is possible to trust medical AI. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2021 - Journal of Medical Ethics 47 (6):437-438.
    In his recent article ‘Limits of trust in medical AI,’ Hatherley argues that, if we believe that the motivations that are usually recognised as relevant for interpersonal trust have to be applied to interactions between humans and medical artificial intelligence, then these systems do not appear to be the appropriate objects of trust. In this response, we argue that it is possible to discuss trust in medical artificial intelligence, if one refrains from simply assuming that trust describes human–human interactions. To (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • AI support for ethical decision-making around resuscitation: proceed with care. Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  • Expanding Nallur's Landscape of Machine Implemented Ethics. William A. Bauer - 2020 - Science and Engineering Ethics 26 (5):2401-2410.
    What ethical principles should autonomous machines follow? How do we implement these principles, and how do we evaluate these implementations? These are some of the critical questions Vivek Nallur asks in his essay “Landscape of Machine Implemented Ethics (2020).” He provides a broad, insightful survey of answers to these questions, especially focused on the implementation question. In this commentary, I will first critically summarize the main themes and conclusions of Nallur’s essay and then expand upon the landscape that Nallur presents (...)
  • AI as an Epistemic Technology. Ramón Alvarado - 2023 - Science and Engineering Ethics 29 (5):1-30.
    In this paper I argue that Artificial Intelligence and the many data science methods associated with it, such as machine learning and large language models, are first and foremost epistemic technologies. In order to establish this claim, I first argue that epistemic technologies can be conceptually and practically distinguished from other technologies in virtue of what they are designed for, what they do and how they do it. I then proceed to show that unlike other kinds of technology (including other (...)
  • (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI systems? (...)