  • Against Interpretability: A Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread agreement about the existence (...)
  • The molecular vista: current perspectives on molecules and life in the twentieth century.Mathias Grote, Lisa Onaga, Angela N. H. Creager, Soraya de Chadarevian, Daniel Liu, Gina Surita & Sarah E. Tracy - 2021 - History and Philosophy of the Life Sciences 43 (1):1-18.
    This essay considers how scholarly approaches to the development of molecular biology have too often narrowed the historical aperture to genes, overlooking the ways in which other objects and processes contributed to the molecularization of life. From structural and dynamic studies of biomolecules to cellular membranes and organelles to metabolism and nutrition, new work by historians, philosophers, and STS scholars of the life sciences has revitalized older issues, such as the relationship of life to matter, or of physicochemical inquiries to (...)
  • Mechanistic Models and the Explanatory Limits of Machine Learning.Emanuele Ratti & Ezequiel López-Rubio - unknown
    We argue that mechanistic models elaborated by machine learning cannot be explanatory, by discussing the relation between mechanistic models, explanation, and the notion of intelligibility of models. We show that the ability of biologists to understand the model they work with severely constrains their capacity to turn the model into an explanatory model. The more complex a mechanistic model is, the less explanatory it will be. Since machine learning improves its performance when more components are added, it generates (...)