Three Strategies for Salvaging Epistemic Value in Deep Neural Network Modeling

Abstract

Some how-possibly explanations have epistemic value because they are epistemically possible; we cannot rule out their truth. One paradoxical implication of that proposal is that epistemic value may be obtained from mere ignorance: the less we know, the more is epistemically possible. This chapter examines a particular class of problematic epistemically possible how-possibly explanations, viz. *epistemically opaque* how-possibly explanations. These are how-possibly explanations justified by an epistemically opaque process. How could epistemically opaque how-possibly explanations have epistemic value if they result from a process about which we lack knowledge or understanding? This chapter proposes three strategies to salvage epistemic value from epistemic opacity, namely salvaging value from 1) functional transparency, 2) modal operator interpretation, and 3) pursuitworthiness. It illustrates these strategies with cases from deep neural network modeling.

Links

PhilArchive


Analytics

Added to PP
2023-04-24

Author's Profile

Philippe Verreault-Julien
Eindhoven University of Technology
