Citations
  1. Descriptive multiscale modeling in data-driven neuroscience. Philipp Haueis - 2022 - Synthese 200 (2):1-26.
    Multiscale modeling techniques have attracted increasing attention from philosophers of science, but the resulting discussions have almost exclusively focused on issues surrounding explanation (e.g., reduction and emergence). In this paper, I argue that besides explanation, multiscale techniques can serve important exploratory functions when scientists model systems whose organization at different scales is ill-understood. My account distinguishes explanatory and descriptive multiscale modeling based on which epistemic goal scientists aim to achieve when using multiscale techniques. In explanatory multiscale modeling, scientists use multiscale (...)
  2. The Positive Argument Against Scientific Realism. Florian J. Boge - 2023 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 54 (4):535-566.
    Putnam called what is now known as the no miracles argument "[t]he positive argument for realism". In opposition to it, he offered an argument that by his own standards counts as negative. But are there no positive arguments against scientific realism? I believe there is such an argument that has figured in the background of much of the realism debate but, to my knowledge, has nowhere been stated and defended explicitly. This is an argument from the success of quantum physics to (...)
  3. Two Dimensions of Opacity and the Deep Learning Predicament. Florian J. Boge - 2021 - Minds and Machines 32 (1):43-75.
    Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models (...)