Mechanistic Models and the Explanatory Limits of Machine Learning


We argue that mechanistic models elaborated by machine learning cannot be explanatory. We do so by examining the relations among mechanistic models, explanation, and the intelligibility of models. We show that biologists' ability to understand the models they work with severely constrains their capacity to turn those models into explanatory models: the more complex a mechanistic model is, the less explanatory it will be. Since machine learning improves its performance by adding more components to a model, it generates models that are not intelligible, and hence not explanatory.


