References
  • In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining activities in (...)
  • Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  • Towards trustworthy blockchains: normative reflections on blockchain-enabled virtual institutions. Yan Teng - 2021 - Ethics and Information Technology 23 (3):385-397.
    This paper proposes a novel way to understand trust in blockchain technology by analogy with trust placed in institutions. In support of the analysis, a detailed investigation of institutional trust is provided, which is then used as the basis for understanding the nature and ethical limits of blockchain trust. Two interrelated arguments are presented. First, given blockchains’ capacity for being institution-like entities by inviting expectations similar to those invited by traditional institutions, blockchain trust is argued to be best conceptualized as (...)
  • Levels of Trust in the Context of Machine Ethics. Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39–51, 2011), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • Computer Ethics as a Field of Applied Ethics. Herman T. Tavani - 2012 - Journal of Information Ethics 21 (2):52-70.
    The present essay includes an overview of key milestones in the development of computer ethics (CE) as a field of applied ethics. It also describes the ongoing debate about the proper scope of CE as a subfield of both applied ethics and computer science. Following a brief description of the cluster of ethical issues that CE scholars and practitioners have generally considered to be the standard or "mainstream" issues comprising the field thus far, the essay speculates about the future direction of (...)
  • Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9. Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  • Developing Automated Deceptions and the Impact on Trust. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2015 - Philosophy and Technology 28 (1):91-105.
    As software developers design artificial agents, they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to (...)
  • Social Media, Trust, and the Epistemology of Prejudice. Karen Frost-Arnold - 2016 - Social Epistemology 30 (5-6):513-531.
    Ignorance of one’s privileges and prejudices is an epistemic problem. While the sources of ignorance of privilege and prejudice are increasingly understood, less clarity exists about how to remedy ignorance. In fact, the various causes of ignorance can seem so powerful, various, and mutually reinforcing that studying the epistemology of ignorance can inspire pessimism about combatting socially constructed ignorance. I argue that this pessimism is unwarranted. The testimony of members of oppressed groups can often help members of privileged groups overcome (...)
  • In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition to a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
  • Trusting the (ro)botic other. Paul B. de Laat - 2015 - ACM SIGCAS Computers and Society 45 (3):255-260.
    How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available though. As debated in the literature, humans may meet bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as (...)
  • Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
  • Trust and ecological rationality in a computing context. Jeff Buechner - 2013 - ACM SIGCAS Computers and Society 43 (1):47-68.
    In this paper, I examine a key issue affecting trust in the context of a computing environment, as it affects human agents and artificial agents. Specifically, the paper focuses on the role that "resource conservation" plays in an analysis of moral trust and epistemic trust involving agents. I will argue that resource conservation is a necessary condition in the definition of a moral trust relation, and that there is a conceptual relationship between a moral trust relation and epistemic trust: that epistemic trust (...)
  • Crossing boundaries: ethics in interdisciplinary and intercultural relations: selected papers from the CEPE 2011 conference. Elizabeth A. Buchanan & Herman T. Tavani - 2013 - ACM SIGCAS Computers and Society 43 (1):6-8.
    The Ninth International Conference on Computer Ethics: Philosophical Enquiry was held in Milwaukee, WI. Four papers originally presented at that conference are included in this issue of Computers and Society. The selected papers examine a wide range of information/computer-ethics-related issues, and taken together, they show great diversity in the field of information/computer ethics. We are continually negotiating with ethics, law, and policy in our technology-driven activities in the interconnected global arena. As we consider the themes within and among the papers (...)