Transparency and the Black Box Problem: Why We Do Not Trust AI

Philosophy and Technology 34 (4):1607-1622 (2021)

Abstract

As routine decisions become increasingly automated, and as the information architectures operating this automation grow more intricate and complex, concerns about the trustworthiness of these systems are mounting. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic approach built on deep neural networks whose workings remain largely opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without an understanding of how AI reaches its conclusions, it is an open question to what extent we can trust these systems. The question of trust becomes more urgent as we delegate more and more decision-making to AI and increasingly rely on it to safeguard significant human goods, such as security, healthcare, and safety. Models that “open the black box” by making non-linear and complex decision processes understandable to human observers are promising solutions to the black box problem, but they are limited, at least in their current state, in their ability to make these processes less opaque to most observers. A philosophical analysis of trust shows why transparency is a necessary condition for trust and, ultimately, for judging AI to be trustworthy. A more fruitful route to establishing trust in AI is to acknowledge that AI is situated within socio-technical systems that mediate trust; by increasing the trustworthiness of these systems, we thereby increase trust in AI.

Links

PhilArchive




Similar books and articles

The opportunities and challenges of blockchain in the fight against government corruption. Nikita Aggarwal & Luciano Floridi - 2018 - 19th General Activity Report (2018) of the Council of Europe Group of States Against Corruption (GRECO).
Transparent AI: reliabilist and proud. Abhishek Mishra - forthcoming - Journal of Medical Ethics.
The entanglement of trust and knowledge on the web. Judith Simon - 2010 - Ethics and Information Technology 12 (4):343-355.
A Taxonomy of Transparency in Science. Kevin C. Elliott - 2022 - Canadian Journal of Philosophy 52 (3):342-355.
Transparency rights, technology, and trust. John Elia - 2009 - Ethics and Information Technology 11 (2):145-153.
Transparency is Surveillance. C. Thi Nguyen - 2021 - Philosophy and Phenomenological Research 105 (2):331-361.


Author's Profile

Warren von Eschenbach
University of North Texas System
