Deep neural networks have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’, I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between discovery and explanation: When unsupervised models are successfully used in exploratory contexts, scientists face a whole new challenge in forming the concepts required for understanding underlying mechanisms.
In this paper, we pursue three general aims: (I) We will define a notion of fundamental opacity and ask whether it can be found in High Energy Physics (HEP), given the involvement of machine learning (ML) and computer simulations (CS) therein. (II) We identify two kinds of non-fundamental, contingent opacity associated with CS and ML in HEP, respectively, and ask whether, and if so how, they may be overcome. (III) We address the question of whether any kind of opacity, contingent or fundamental, is unique to ML or CS, or whether they stand in continuity with kinds of opacity associated with other scientific research.
The question of where, between theory and experiment, computer simulations (CSs) are located on the methodological map is one of the central questions in the epistemology of simulation (cf. Saam 2017). The two extremes on the map have them either be a kind of experiment in their own right (e.g. Barberousse et al. 2009; Morgan 2002, 2003, 2005; Morrison 2009, 2015; Massimi and Bhimji 2015; Parker 2009) or just an argument executed with the aid of a computer (e.g. Beisbart 2012; Beisbart and Norton 2012). There exist multiple versions of the first kind of position, whereas the latter is rather unified. I will argue that, while many claims about the ‘experimental’ status of CSs seem unjustified, there is a variant of the first position that seems preferable. In particular, I will argue that while CSs respect the logic of (deductively valid) arguments, they agree with neither their pragmatics nor their epistemology. I will then lay out in what sense CSs can fruitfully be seen as experiments, and what features nonetheless set them apart from traditional experiments. I conclude that they should be seen as surrogate experiments, i.e., experiments consciously executed on the wrong kind of system, but with an exploitable connection to the system of interest. Finally, I contrast my view with that of Beisbart (2018), according to which CSs are surrogates for experiments, arguing that this introduces an arbitrary split between CSs and other kinds of simulations.
Despite remarkable efforts, it remains notoriously difficult to equip quantum theory with a coherent ontology. Hence, Healey (2017, 12) has recently suggested that “quantum theory has no physical ontology and states no facts about physical objects or events”, and Fuchs et al. (2014, 752) similarly hold that “quantum mechanics itself does not deal directly with the objective world”. While intriguing, these positions either raise the question of how talk of ‘physical reality’ can even remain meaningful, or they must ultimately embrace a hidden-variables view, in tension with their original project. I here offer a neo-Kantian alternative. In particular, I will show how constitutive elements in the sense of Reichenbach (1920) and Friedman (1999, 2001) can be identified within quantum theory, through considerations of symmetries that allow the constitution of a ‘quantum reality’, without invoking any notion of a radically mind-independent reality. The resulting conception will inherit elements from pragmatist and ‘QBist’ approaches, but also differ from them in crucial respects. Furthermore, going beyond the Friedmanian program, I will show how non-fundamental and approximate symmetries can be relevant for identifying constitutive principles.
This book explores the prospects of rival ontological and epistemic interpretations of quantum mechanics (QM). It concludes with a suggestion for how to interpret QM from an epistemological point of view and with a Kantian touch. It thus refines, extends, and combines existing approaches in a similar direction.

The author first looks at current, hotly debated ontological interpretations. These include hidden-variables approaches, Bohmian mechanics, collapse interpretations, and the many-worlds interpretation. He demonstrates why none of these ontological interpretations can claim to be the clear winner amongst its rivals. Next, coverage explores the possibility of interpreting QM in terms of knowledge but without the assumption of hidden variables. It examines QBism as well as Healey’s pragmatist view. The author finds both interpretations or programs wanting in certain respects. As a result, he then goes on to advance a genuine proposal as to how to interpret QM from the perspective of an internal realism in the sense of Putnam and Kant.

The book also includes two philosophical interludes. One details the notions of probability and realism. The other highlights the connections between the notions of locality, causality, and reality in the context of violations of Bell-type inequalities.
Large-scale experiments at CERN’s Large Hadron Collider (LHC) rely heavily on computer simulations (CSs), a fact that has recently caught philosophers’ attention. CSs obviously require appropriate modeling, and it is a common assumption among philosophers that the relevant models can be ordered into hierarchical structures. Focusing on LHC’s ATLAS experiment, we will establish three central results here: With some distinct modifications, individual components of ATLAS’ overall simulation infrastructure can be ordered into hierarchical structures. Hence, to a good degree of approximation, hierarchical accounts remain valid, at least as descriptive accounts of initial modeling steps. However, in order to perform the epistemic function Winsberg (1999) assigns to models in simulation—to generate knowledge through a sequence of skillful but non-deductive transformations—ATLAS’ simulation models have to be considered part of a network rather than a hierarchy, in turn making the associated simulation modeling messy rather than motley. Deriving knowledge-claims from this ‘mess’ requires two sources of justification: holistic validation (Lenhard and Winsberg 2010; see also Carrier and Nordmann (eds.), Science in the Context of Application, Springer, Berlin 2011, pp. 115–130), and model coherence. As it turns out, the degree of model coherence sets HEP apart from other messy, simulation-intensive disciplines such as climate science, and the reasons for this are to be sought in the historical, empirical, and theoretical foundations of the respective discipline.
The workshop “Machine Learning: Prediction Without Explanation?” brought together philosophers of science and scholars from various fields who study and employ Machine Learning (ML) techniques, in order to discuss the changing face of science in light of ML’s constantly growing use. One major focus of the workshop was the impact of ML on the concept and value of scientific explanation. One may speculate whether ML’s increased use in science exemplifies a paradigmatic turn towards mere pattern recognition and prediction, and away from science’s traditional aim of explanation. In contrast, certain conceptions of explanation, such as statistical explanation, could turn out to fit well with the achievements of present-day ML and concede an explanatory value to these achievements after all. How to explain ML’s successes themselves remains an open question, and this question was the focus of several talks at the workshop. Based on the topics raised, we will discuss the talks’ contents in more detail as organized into (i) practitioners’ perspectives, (ii) explanations from ML, (iii) explanations of ML, (iv) societal implications, and (v) global and historical perspectives.
A recent no-go theorem (Frauchiger and Renner 2018) establishes a contradiction from a specific application of quantum theory to a multi-agent setting. The proof of this theorem relies heavily on notions such as ‘knows’ or ‘is certain that’. This has stimulated an analysis of the theorem by Nurgalieva and del Rio (2018), in which they claim that it shows the “[i]nadequacy of modal logic in quantum settings” (ibid.). In this paper, we will offer a significantly extended and refined reconstruction of the theorem in multi-agent modal logic. We will then show that a thorough reconstruction of the proof as given by Frauchiger and Renner requires the reflexivity of access relations (system T). However, a stronger theorem is possible that already follows in serial frames, and hence also holds in systems of doxastic logic (such as KD45). After proving this, we will discuss the general implications for different interpretations of quantum probabilities, as well as several options for dealing with the result.
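For reference, the frame conditions at issue can be stated compactly; the following formulas are standard modal-logic facts, supplied here for orientation rather than taken from the abstract:

\[
\textbf{(T)}\quad \Box\varphi \to \varphi \quad \text{(reflexive frames: } \forall w\, wRw\text{)}, \qquad
\textbf{(D)}\quad \Box\varphi \to \Diamond\varphi \quad \text{(serial frames: } \forall w\, \exists v\, wRv\text{)}.
\]

Since the belief logic KD45 validates (D) but not (T), a version of the theorem that already follows in serial frames carries over to doxastic settings.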
Computer simulations are involved in numerous branches of modern science, and science would not be the same without them. Yet the question of how they can explain real-world processes remains an issue of considerable debate. In this context, a range of authors have highlighted the inferences back to the world that computer simulations allow us to draw. I will first characterize the precise relation between the computer and the target of a simulation that allows us to draw such inferences. I will then argue that, in a range of scientifically interesting cases, these inferences are abductions of a particular kind, and defend this claim by appeal to two case studies.
Tim Maudlin has claimed that EPR’s Reality Criterion is analytically true. We argue that it is not. Moreover, one may be a subjectivist about quantum probabilities without giving up on objective physical reality. Thus, would-be detractors must reject QBism and other epistemic approaches to quantum theory on other grounds.
Howson famously argues that the no-miracles argument, stating that the success of science indicates the approximate truth of scientific theories, is a base rate fallacy: it neglects the possibility of an overall low rate of true scientific theories. Recently, a number of authors have suggested that the corresponding probabilistic reconstruction is unjustified, as it concerns only the success of one isolated theory. Dawid and Hartmann, in particular, suggest using the frequency of success in some field of research F to infer a probability of truth for a new theory from F. I here cast doubt on the justification of this and similar moves and suggest a way to directly bound the probability of truth. As I will demonstrate, my bound can become incompatible with the assumption of specific testing and Dawid and Hartmann’s estimate for success.
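To make the base-rate point concrete (notation mine, added for illustration): writing \(S\) for predictive success and \(T\) for truth, Bayes’ theorem gives

\[
P(T \mid S) \;=\; \frac{P(S \mid T)\, P(T)}{P(S \mid T)\, P(T) + P(S \mid \neg T)\, P(\neg T)},
\]

so that even with \(P(S \mid T) = 0.95\) and \(P(S \mid \neg T) = 0.05\), a low base rate \(P(T) = 0.01\) yields only \(P(T \mid S) \approx 0.16\): success alone does not make truth probable.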
In this paper I investigate whether the phenomenon of quantum decoherence, the vanishing of interference and detectable entanglement on quantum systems in virtue of interactions with the environment, can be understood as the manifestation of a disposition. I will highlight the advantages of this approach as a realist interpretation of the quantum formalism, and demonstrate how such an approach can benefit from advances in the metaphysics of dispositions. I will also confront some commonalities with, and differences from, the many-worlds interpretation, and address the difficulties induced by quantum non-locality. I conclude that there are ways to deal with these issues, and that the proposal hence is an avenue worth pursuing.
Computer simulations are nowadays often directly involved in the generation of experimental results. Given this dependency of experiments on computer simulations, that of simulations on models, and that of the models on free parameters, how do researchers establish trust in their experimental results? Using high-energy physics (HEP) as a case study, I will identify three different types of robustness that I call conceptual, methodological, and parametric robustness, and show how they can sanction this trust. However, as I will also show, simulation models in HEP themselves fail to exhibit a type of robustness I call inverse parametric robustness. This combination of robustness and failures thereof is best understood by distinguishing different epistemic capacities of simulations and different senses of trust: Trusting simulations in their capacity to facilitate credible experimental results can mean accepting them as means for generating belief in these results, while this need not imply believing the models themselves in their capacity to represent an underlying reality.
In ‘Reichenbach's cubical universe and the problem of the external world’, Elliott Sober attempts a refutation of solipsism à la Reichenbach. I here contrast Sober's line of argument with observati...
Two powerful arguments have famously dominated the realism debate in philosophy of science: the No Miracles Argument (NMA) and the Pessimistic Meta-Induction (PMI). A standard response to the PMI is selective scientific realism (SSR), wherein only the working posits of a theory are considered worthy of doxastic commitment. Building on the recent debate over the NMA and the connections between the NMA and the PMI, I here consider a stronger inductive argument that poses a direct challenge for SSR: Because it is sometimes exactly the working posits which contradict each other, i.e., exactly that which is directly responsible for empirical success, SSR cannot deliver a general explanation of scientific success.
Quantum mechanics notoriously faces the measurement problem: the problem that, if read thoroughly, it implies the nonexistence of definite outcomes in measurement procedures. A plausible reaction to this and to related problems is to regard a system’s quantum state |ψ⟩ merely as an indication of our lack of knowledge about the system, i.e., to interpret it epistemically. However, there are radically different ways to spell out such an epistemic view of the quantum state. We here investigate recent developments in the branch that introduces hidden variables λ in addition to the quantum state |ψ⟩ and has its roots in Einstein’s views. In particular, we confront purported achievements of a concrete model that has been considered to serve as evidence for an epistemic view of the envisioned kind, as well as specific no-go results and their import. It will be argued that while an epistemic account of the particular kind is not straightforwardly ruled out by the no-go results, they demonstrate that the evidential character of the model(s) discussed rests on a rather shaky foundation, and that they make some achievements widely recognized in the literature appear worthy of doubt.
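For orientation, such hidden-variables models are usually formulated in the ontological-models framework of Harrigan and Spekkens (a standard setting, not explicitly named in the abstract): preparing \(|\psi\rangle\) samples an ontic state \(\lambda\) from a distribution \(\mu_\psi\), and the Born probabilities are recovered as

\[
\Pr(k \mid \psi, M) \;=\; \int \xi_M(k \mid \lambda)\, \mu_\psi(\lambda)\, \mathrm{d}\lambda ,
\]

where a model counts as ψ-epistemic just in case \(\mu_\psi\) and \(\mu_\phi\) overlap for some pair of distinct quantum states; no-go theorems such as that of Pusey, Barrett, and Rudolph constrain precisely this overlap.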