Inductive Risk, Understanding, and Opaque Machine Learning Models

Philosophy of Science 89 (5):1065-1074 (2022)

Abstract

Under what conditions does machine learning (ML) model opacity inhibit the possibility of explaining and understanding phenomena? In this article, I argue that nonepistemic values give shape to the ML opacity problem even if we keep researcher interests fixed. Treating ML models as an instance of doing model-based science to explain and understand phenomena reveals that there is (i) an external opacity problem, where the presence of inductive risk imposes higher standards on externally validating models, and (ii) an internal opacity problem, where greater inductive risk demands a higher level of transparency regarding the inferences the model makes.

Similar books and articles

Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Inductive logic, verisimilitude, and machine learning.Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science. London: College Publications. pp. 295-314.
Are Algorithms Value-Free?Gabbrielle M. Johnson - 2023 - Journal of Moral Philosophy 21 (1-2):1-35.
Concept Representation Analysis in the Context of Human-Machine Interactions.Farshad Badie - 2016 - In 14th International Conference on e-Society. pp. 55-61.
Human Semi-Supervised Learning.Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu - 2013 - Topics in Cognitive Science 5 (1):132-172.

Analytics

Added to PP
2022-04-24

Downloads
523 (#31,858)

6 months
168 (#14,776)


Author's Profile

Emily Sullivan
Utrecht University

Citations of this work

Do ML models represent their targets?Emily Sullivan - forthcoming - Philosophy of Science.
On the Opacity of Deep Neural Networks.Anders Søgaard - forthcoming - Canadian Journal of Philosophy:1-16.
Predicting and explaining with machine learning models: Social science as a touchstone.Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
Epistemic Value of Digital Simulacra for Patients.Eleanor Gilmore-Szott - 2023 - American Journal of Bioethics 23 (9):63-66.
Health Digital Twins, Legal Liability, and Medical Practice.Andreas Kuersten - 2023 - American Journal of Bioethics 23 (9):66-69.


References found in this work

Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Transparency in Complex Computational Systems.Kathleen A. Creel - 2020 - Philosophy of Science 87 (4):568-589.
The Scientist Qua Scientist Makes Value Judgments.Richard Rudner - 1953 - Philosophy of Science 20 (1):1-6.
Inductive risk and values in science.Heather Douglas - 2000 - Philosophy of Science 67 (4):559-579.
