Two Dimensions of Opacity and the Deep Learning Predicament

Minds and Machines 32 (1):43-75 (2021)

Abstract

Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
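
To illustrate the point about predictive success without explanatory insight, the following minimal sketch (not from the paper; it assumes scikit-learn is installed) trains a small neural network that predicts a synthetic pattern accurately, while its automatically fitted parameters offer no concepts for an underlying mechanism:

    # Illustrative sketch only (not from the paper): a small neural network
    # that predicts well, while its learned parameters resist interpretation.
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic "phenomenon": two interleaved half-moon point clouds.
    X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Automated model generation: the optimizer fixes thousands of weights.
    clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
    clf.fit(X_train, y_train)

    # Predictive success ...
    print("test accuracy:", clf.score(X_test, y_test))

    # ... but the fitted weights are just arrays of numbers; nothing in them
    # corresponds to a concept describing the data-generating mechanism.
    for i, W in enumerate(clf.coefs_):
        print(f"layer {i} weight matrix shape: {W.shape}")

The sketch is only meant to make the abstract's contrast concrete: the model is instrumentally useful for prediction, yet inspecting its generated parameters does not by itself yield the concepts needed for understanding.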

Author's Profile

Florian J. Boge
Bergische Universität Wuppertal
