Sources of Understanding in Supervised Machine Learning Models

Philosophy and Technology 35 (2):1-19 (2022)

Abstract

In recent decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Despite their complexity, these models have shown outstanding performance on a range of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether such non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. More popular, however, is the idea that understanding can come from a careful analysis of the dataset or from the model’s theoretical basis. In this paper, I examine the possible ways of obtaining understanding from such non-interpretable models. Two main strategies for providing understanding are analyzed. The first involves understanding without interpretability, either through external evidence for the model’s inner functioning or through analyzing the data. The second is based on the artificial production of interpretable structures, in three main forms: post hoc models, hybrid models, and quasi-interpretable structures. Finally, I consider some of the conceptual difficulties in the attempt to create explanations for these models, and their implications for understanding.

Similar books and articles

Human Semi-Supervised Learning.Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu - 2013 - Topics in Cognitive Science 5 (1):132-172.
Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
