Explanation and trust: what to tell the user in security and AI?
Wolter Pieters

Ethics and Information Technology 13 (1):53-64 (2011)

Abstract

There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of computing science. The analysis draws on a literature study and conceptual analysis, using elements from systems theory as well as actor-network theory. I apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, I discuss the consequences of the analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.

Links

PhilArchive





References found in this work

Reassembling the Social: An Introduction to Actor-Network-Theory. Bruno Latour - 2005 - Oxford, England and New York, NY, USA: Oxford University Press.
Trust in technological systems. Philip J. Nickel - 2013 - In M. J. de Vries, S. O. Hansson & A. W. M. Meijers (eds.), Norms in Technology: Philosophy of Engineering and Technology, Vol. 9. Springer.
Vico and Herder: Two Studies in the History of Ideas. Isaiah Berlin - 1976 - Philosophy and Rhetoric 10 (4):276-280.

View all 17 references