Computer Ethics - Philosophical Enquiry (CEPE) Proceedings (2019)
Abstract
Trust is defined as a belief of a human H (the 'trustor') about the ability of an agent A (the 'trustee') to perform future action(s). We adopt dispositionalism and internalism about trust: H trusts A iff A has certain internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two such requirements: (i) confidence in the decision and (ii) model uncertainty. To trust A, H requires A to be self-assertive about its confidence and able to self-correct its own models. On the Bayesian approach, trust can be applied not only to humans but also to artificial agents (e.g., machine learning algorithms). We explain the advantages of metacognitive trust when compared to mainstream approaches and how it relates to virtue epistemology. The metacognitive ethics of trust is briefly discussed.
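The two metacognitive signals the abstract distinguishes, (i) confidence in a decision and (ii) uncertainty about the model itself, can be illustrated with a common Bayesian-style decomposition of an ensemble's predictive entropy. The sketch below is illustrative only, under the assumption that the artificial agent's "model uncertainty" is read as disagreement among ensemble members (mutual information between prediction and model); the paper's own formal model may differ.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy (in nats) of a categorical distribution."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, 1.0)), axis=axis)

def trust_signals(member_probs):
    """Split an ensemble's predictive uncertainty into the two
    metacognitive signals discussed in the abstract.

    member_probs: array of shape (n_models, n_classes); each row is
    one model's predicted class distribution for the same input.

    Returns (confidence, model_uncertainty):
      confidence        -- probability the ensemble assigns to its
                           chosen class: signal (i)
      model_uncertainty -- mutual information between prediction and
                           model, i.e. member disagreement: signal (ii)
    """
    mean_p = member_probs.mean(axis=0)       # ensemble prediction
    confidence = mean_p.max()                # (i) decision confidence
    total = entropy(mean_p)                  # entropy of the mean
    expected = entropy(member_probs).mean()  # mean per-model entropy
    model_uncertainty = total - expected     # (ii) epistemic residue
    return confidence, model_uncertainty

# Agreeing ensemble: confident, and near-zero model uncertainty.
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
# Disagreeing ensemble: mean is uninformative, disagreement is high.
disagree = np.array([[0.95, 0.05], [0.05, 0.95], [0.50, 0.50]])
```

On this reading, H's demand that A be "self-assertive about confidence" maps to reporting signal (i), while "able to self-correct its own models" presupposes A can detect when signal (ii) is high and revise accordingly.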
Similar books and articles
Developing Artificial Agents Worthy of Trust: "Would You Buy a Used Car From This Artificial Agent?" [REVIEW]. F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92-109.
Logic, Self-Awareness and Self-Improvement: The Metacognitive Loop and the Problem of Brittleness. Michael Anderson - manuscript.
Artificial Moral Agents: Creative, Autonomous, Social. An Approach Based on Evolutionary Computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Frontiers in Artificial Intelligence and Applications.
A Metacognitive Model of the Feeling of Agency Over Bodily Actions. Glenn Carruthers - forthcoming - Psychology of Consciousness: Theory, Research and Practice.
Bio-Agency and the Possibility of Artificial Agents. Anne Sophie Meincke - 2018 - In Alexander Christian, David Hommen, Nina Retzlaff & Gerhard Schurz (eds.), Philosophy of Science - Between the Natural Sciences, the Social Sciences, and the Humanities. Selected Papers from the 2016 conference of the German Society of Philosophy of Science. Dordrecht, Netherlands: pp. 65-93.
Trust and Multi-Agent Systems: Applying the Diffuse, Default Model of Trust to Experiments Involving Artificial Agents [REVIEW]. Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
Manufacturing Morality: A General Theory of Moral Agency Grounding Computational Implementations: The ACTWith Model. Jeffrey White - 2013 - In Floares (ed.), Computational Intelligence. Nova Publications. pp. 1-65.
Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent? [REVIEW]. Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
Trust and Will. Edward Hinchman - 2020 - In Judith Simon (ed.), Routledge Handbook on Trust and Philosophy. New York: Routledge.
What Is Epistemic Public Trust in Science? Gürol Irzik & Faik Kurtulmus - 2019 - British Journal for the Philosophy of Science 70 (4):1145-1166.