Recent work on the hole argument in general relativity by Weatherall has drawn attention to the neglected concept of models’ representational capacities. I argue for several theses about the structure of these capacities, including that they should be understood not as many-to-one relations from models to the world, but in general as many-to-many relations constrained by the models’ isomorphisms. I then compare these ideas with a recent argument by Belot for the claim that some isometries “generate new possibilities” in general relativity. Philosophical orthodoxy, by contrast, denies this. Properly understanding the role of representational capacities, I argue, reveals how Belot’s rejection of orthodoxy does not go far enough, and makes better sense of our practices in theorizing about spacetime.
Stephen Hawking, among others, has proposed that the topological stability of a property of space-time is a necessary condition for it to be physically significant. What counts as stable, however, depends crucially on the choice of topology. Some physicists have thus suggested that one should find a canonical topology, a single ‘right’ topology for every inquiry. While certain such choices might be initially motivated, some little-discussed examples of Robert Geroch and some propositions of my own show that the main candidates—and each possible choice, to some extent—face the horns of a no-go result. I suggest that instead of trying to decide what the ‘right’ topology is for all problems, one should let the details of particular types of problems guide the choice of an appropriate topology.
Intertheoretic reduction in physics aspires to be both explanatory and perfectly general: it endeavors to explain why an older, simpler theory continues to be as successful as it is in terms of a newer, more sophisticated theory, and it aims to relate or otherwise account for as many features of the two theories as possible. Despite often being introduced as straightforward cases of intertheoretic reduction, candidate accounts of the reduction of general relativity to Newtonian gravitation have either been insufficiently general or rigorous, or have not clearly been able to explain the empirical success of Newtonian gravitation. Building on work by Ehlers and others, I propose a different account of the reduction relation that is perfectly general and meets the explanatory demand one would make of it. In doing so, I highlight the role that a topology on the collection of all spacetimes plays in defining the relation, and how the selection of the topology corresponds with broader or narrower classes of observables that one demands be well-approximated in the limit.
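To make the limiting relation concrete, here is a minimal sketch in the spirit of Ehlers’s frame theory, hedged as one standard presentation rather than the paper’s full apparatus: models carry a causality constant \(\lambda = 1/c^2\), with the temporal metric \(t_{ab}\) and inverse spatial metric \(s^{ab}\) satisfying

\[ t_{ab}\, s^{bc} = -\lambda\, \delta_a{}^{c} . \]

For \(\lambda > 0\) one recovers a relativistic spacetime (via \(s^{ab} = g^{ab}\), \(t_{ab} = -\lambda g_{ab}\)); at \(\lambda = 0\) the condition becomes the orthogonality \(t_{ab} s^{bc} = 0\) of Newton-Cartan theory. Reduction is then convergence of a one-parameter family of models as \(\lambda \to 0\) in a topology on the collection of spacetimes, and finer topologies correspond to demanding that larger classes of observables be well-approximated in the limit.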
Traditionally, logic has been the dominant formal method within philosophy. Are logical methods still dominant today, or have the types of formal methods used in philosophy changed in recent times? To address this question, we coded a sample of philosophy papers from the late 2000s and from the late 2010s for the formal methods they used. The results indicate that the proportion of papers using logical methods remained more or less constant over that time period but the proportion of papers using probabilistic methods was approximately three times higher in the late 2010s than it was in the late 2000s. Further analyses explored this change by looking more closely at specific methods, specific levels of technical engagement, and specific subdisciplines within philosophy. These analyses indicate that the increasing proportion of papers using probabilistic methods was pervasive, not confined to particular probabilistic methods, levels of sophistication, or subdisciplines.
Can quantum theory provide examples of metaphysical indeterminacy, indeterminacy that obtains in the world itself, independently of how one represents the world in language or thought? We provide a positive answer assuming just one constraint of orthodox quantum theory: the eigenstate-eigenvalue link. Our account adds a modal condition to preclude spurious indeterminacy in the presence of superselection sectors. No other extant account of metaphysical indeterminacy in quantum theory meets these demands.
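For reference, the eigenstate-eigenvalue link at issue can be stated compactly (a standard formulation, not a quotation from the paper):

\[ \text{observable } A \text{ has a determinate value } a \text{ in state } |\psi\rangle \iff A|\psi\rangle = a|\psi\rangle . \]

Metaphysical indeterminacy then threatens exactly when the system’s state is not an eigenstate of the observable in question.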
Although computation and the science of physical systems would appear to be unrelated, there are a number of ways in which computational and physical concepts can be brought together in ways that illuminate both. This volume examines fundamental questions which connect scholars from both disciplines: is the universe a computer? Can a universal computing machine simulate every physical process? What is the source of the computational power of quantum computers? Are computational approaches to solving physical problems and paradoxes always fruitful? Contributors from multiple perspectives reflecting the diversity of thought regarding these interconnections address many of the most important developments and debates within this exciting area of research. Both a reference to the state of the art and a valuable and accessible entry to interdisciplinary work, the volume will interest researchers and students working in physics, computer science, and philosophy of science and mathematics.
How can inferences from models to the phenomena they represent be justified when those models represent only imperfectly? Pierre Duhem considered just this problem, arguing that inferences from mathematical models of phenomena to real physical applications must also be demonstrated to be approximately correct when the assumptions of the model are only approximately true. Despite being little discussed among philosophers, this challenge was taken up by mathematicians and physicists both contemporaneous with and subsequent to Duhem, yielding a novel and rich mathematical theory of stability with epistemological consequences.
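To indicate the kind of result that theory delivers, here is one of its central notions, offered purely as an illustration (the theory’s scope is broader): a solution \(x^*\) of a differential equation is Lyapunov stable just in case

\[ \forall \varepsilon > 0\ \exists \delta > 0:\ \|x(0) - x^*(0)\| < \delta \implies \|x(t) - x^*(t)\| < \varepsilon \ \text{ for all } t \ge 0 , \]

so that conclusions drawn from initial data that are only approximately correct remain approximately correct for all later times, which is precisely the property Duhem’s challenge demands.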
We offer a framework for organizing the literature regarding the debates revolving around infinite idealizations in science, and a short summary of the contributions to this special issue.
The clock hypothesis of relativity theory equates the proper time experienced by a point particle along a timelike curve with the length of that curve as determined by the metric. Is it possible to prove that particular types of clocks satisfy the clock hypothesis, and thus genuinely measure proper time, at least approximately? Because most real clocks would be enormously complicated to study in this connection, focusing attention on an idealized light clock is attractive. The present paper extends and generalizes partial results along these lines with a theorem showing that, for any timelike curve in any spacetime, there is a light clock that measures the curve’s length as accurately and regularly as one wishes.
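For concreteness, the quantity the clock hypothesis concerns is the metric length of a timelike curve \(\gamma\) (standard formulation, signature \((-,+,+,+)\)):

\[ \tau[\gamma] = \int \sqrt{-g_{ab}\,\dot\gamma^a \dot\gamma^b}\; dt , \]

and the theorem asserts that a suitable light clock ticks off \(\tau[\gamma]\) as accurately and regularly as one wishes.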
The replicability crisis refers to the apparent failures to replicate both important and typical positive experimental claims in psychological science and biomedicine, failures which have gained increasing attention in the past decade. In order to provide evidence that there is a replicability crisis in the first place, scientists have developed various measures of replication that help quantify or “count” whether one study replicates another. In this nontechnical essay, I critically examine five types of replication measures used in the landmark article “Estimating the reproducibility of psychological science” based on the following techniques: subjective assessment, null hypothesis significance testing, comparing effect sizes, comparing the original effect size with the replication confidence interval, and meta-analysis. The first four, I argue, remain unsatisfactory for a variety of conceptual or formal reasons, even taking into account various improvements. By contrast, at least one version of the meta-analytic measure does not suffer from these problems. It differs from the others in rejecting dichotomous conclusions, the assumption that one study replicates another or not simpliciter. I defend it from other recent criticisms, concluding however that it is not a panacea for all the multifarious problems that the crisis has highlighted.
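As a minimal sketch of how two of these measures work, under simplifying assumptions (normal approximation, known standard errors), consider the following; the function names and the commented numbers are illustrative only, not data or code from the article:

    import math

    def ci_replication(orig_effect, rep_effect, rep_se, z=1.96):
        """Dichotomous measure: does the original effect size fall inside
        the replication's 95% confidence interval?"""
        lo, hi = rep_effect - z * rep_se, rep_effect + z * rep_se
        return lo <= orig_effect <= hi

    def fixed_effect_meta(effects, ses):
        """Meta-analytic measure: inverse-variance (fixed-effect) pooled
        estimate of the underlying effect, with its standard error."""
        weights = [1.0 / se ** 2 for se in ses]
        est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        return est, math.sqrt(1.0 / sum(weights))

    # Hypothetical inputs, for illustration only:
    # ci_replication(0.40, 0.15, 0.10)              -> False (a yes/no verdict)
    # fixed_effect_meta([0.40, 0.15], [0.12, 0.10]) -> pooled estimate and SE

The contrast the essay draws is visible in the return types: the confidence-interval measure issues a dichotomous verdict, whereas the meta-analytic measure returns a pooled estimate with its uncertainty, making no claim that one study replicates the other simpliciter.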
If the force on a particle fails to satisfy a Lipschitz condition at a point, it relaxes one of the conditions necessary for a locally unique solution to the particle’s equation of motion. I examine the most discussed example of this failure of determinism in classical mechanics—that of Norton’s dome—and the range of current objections against it. Finding there are many different conceptions of classical mechanics appropriate and useful for different purposes, I argue that no single conception is preferred. Instead of arguing for or against determinism, I stress the wide variety of pragmatic considerations that, in a specific context, may lead one usefully and legitimately to adopt one conception over another in which determinism may or may not hold.
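For readers unfamiliar with the example: on Norton’s dome the radial equation of motion (in suitable units) is

\[ \ddot r = \sqrt{r} , \]

whose right-hand side fails to be Lipschitz at \(r = 0\). With initial data \(r(0) = \dot r(0) = 0\) it admits, besides the trivial solution \(r(t) \equiv 0\), the one-parameter family

\[ r(t) = \begin{cases} 0, & t \le T \\ (t - T)^4 / 144, & t > T \end{cases} \]

for any \(T \ge 0\): the particle may remain at rest at the apex forever, or spontaneously begin to slide at an arbitrary time.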
We consider various curious features of general relativity, and relativistic field theory, in two spacetime dimensions. In particular, we discuss: the vanishing of the Einstein tensor; the failure of an initial-value formulation for vacuum spacetimes; the status of singularity theorems; the non-existence of a Newtonian limit; the status of the cosmological constant; and the character of matter fields, including perfect fluids and electromagnetic fields. We conclude with a discussion of what constrains our understanding of physics in different dimensions.
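The first of these features has a one-line derivation worth recording: in two dimensions the Riemann tensor has a single independent component, so that \(R_{ab} = \tfrac{1}{2} R\, g_{ab}\), whence

\[ G_{ab} = R_{ab} - \tfrac{1}{2} R\, g_{ab} = 0 \]

identically for every metric, and the vacuum Einstein equation imposes no constraint at all.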
The concept of emergence is commonly invoked in modern physics but rarely defined. Building on recent influential work by Jeremy Butterfield, I provide precise definitions of emergence concepts as they pertain to properties represented in models, applying them to some basic examples from space-time and thermostatistical physics. The chief formal innovation I employ, similarity structure, consists in a structured set of similarity relations among those models under analysis—and their properties—and is a generalization of topological structure. Although motivated from physics, this similarity-structure-based account of emergence applies to any science that represents its possibilia with models.
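As a minimal illustration only (not the paper’s official definition), a similarity structure on a set \(X\) of models may be pictured as a family \(\{\sim_i\}_{i \in I}\) of reflexive, symmetric binary relations on \(X\), with \(m \sim_i m'\) read as “\(m\) and \(m'\) are similar in respect \(i\).” A metric space furnishes a special case by setting \(m \sim_\varepsilon m'\) iff \(d(m, m') < \varepsilon\) for each \(\varepsilon > 0\), which indicates the sense in which such structures generalize topological and metric structure.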
If one is interested in reasoning counterfactually within a physical theory, one cannot adequately use the standard possible world semantics. As developed by Lewis and others, this semantics depends on entertaining possible worlds with miracles, worlds in which laws of nature, as described by physical theory, are violated. Van Fraassen suggested instead to use the models of a theory as worlds, but gave up on determining the needed comparative similarity relation for the semantics objectively. I present a third way, in which this similarity relation is determined from properties of the models contextually relevant to the truth of the counterfactual under evaluation. After illustrating this with a simple example from thermodynamics, I draw some implications for future work, including a renewed possibility for a viable deflationary account of laws of nature.
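For orientation, the Lewisian truth condition being re-implemented over a theory’s models is (standard formulation, without the limit assumption): a counterfactual \(A \mathbin{\Box\!\!\rightarrow} C\) is true at a model \(w\) iff either no admissible model makes \(A\) true, or

\[ \exists w' \big[\, w' \Vdash A \wedge C \ \text{ and } \ \forall w''\, ( w'' \Vdash A \wedge \neg C \ \Rightarrow\ w' <_w w'' ) \,\big] , \]

where \(<_w\) is the comparative similarity ordering. On the proposal sketched above, \(<_w\) is fixed by the contextually relevant properties of the models themselves, so no model with miracles need ever be entertained.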
One implication of Bell’s theorem is that there cannot in general be hidden variable models for quantum mechanics that both are noncontextual and retain the structure of a classical probability space. Thus, some hidden variable programs aim to retain noncontextuality at the cost of using a generalization of the Kolmogorov probability axioms. We generalize a theorem of Feintzeig to show that such programs are committed to the existence of a finite null cover for some quantum mechanical experiments, i.e., a finite collection of probability zero events whose disjunction exhausts the space of experimental possibilities.
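In symbols, transcribing the abstract’s gloss: a finite null cover is a collection \(\{E_1, \dots, E_n\}\) of events with \(\mu(E_i) = 0\) for each \(i\) and \(\bigcup_{i=1}^{n} E_i = \Omega\). No classical (even merely finitely additive) probability measure admits one, since finite subadditivity would force

\[ 1 = \mu(\Omega) \le \sum_{i=1}^{n} \mu(E_i) = 0 , \]

which is why the hidden variable programs at issue must generalize the Kolmogorov axioms.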
Merely approximate symmetry is mundane enough in physics that one rarely finds any explication of it. Among philosophers it has also received scant attention compared to exact symmetries. Herein I invite further consideration of this concept that is so essential to the practice of physics and interpretation of physical theory. After motivating why it deserves such scrutiny, I propose a minimal definition of approximate symmetry—that is, one that presupposes as little structure on a physical theory to which it is applied as seems needed. Then I apply this definition to three topics: first, accounting for or explaining the symmetries of a theory emeritus in intertheoretic reduction; second, explicating and evaluating the Curie-Post principle; and third, a new account of accidental symmetry.
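Merely to fix ideas (an illustrative schema, not the paper’s proposed definition): given a set \(K\) of kinematically possible models, a set \(S \subseteq K\) of solutions, and a metric or similarity relation \(d\) on \(K\), call a map \(T: K \to K\) an \(\varepsilon\)-approximate symmetry when

\[ \sup_{m \in S} d\big(T(m),\, S\big) \le \varepsilon , \]

i.e., \(T\) carries solutions to models within \(\varepsilon\) of solutions, exact symmetry being the degenerate case \(\varepsilon = 0\). Note how little structure such a schema presupposes: only a solution set and a standard of similarity.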
The replication or reproducibility crisis in psychological science has renewed attention to philosophical aspects of its methodology. I provide herein a new, functional account of the role of replication in a scientific discipline: to undercut the underdetermination of scientific hypotheses from data, typically by hypotheses that connect data with phenomena. These include hypotheses that concern sampling error, experimental control, and operationalization. How a scientific hypothesis could be underdetermined in one of these ways depends on a scientific discipline’s epistemic goals, theoretical development, material constraints, institutional context, and their interconnections. I illustrate how these apply to the case of psychological science. I then contrast this “bottom-up” account with “top-down” accounts, which assume that the role of replication in a particular science, such as psychology, must follow from a uniform role that it plays in science generally. Aside from avoiding unaddressed problems with top-down accounts, my bottom-up account also better explains the variability of importance of replication of various types across different scientific disciplines.
I provide a formally precise account of diachronic emergence of properties as described within scientific theories, extending a recent account of synchronic emergence using similarity structure on the theories’ models. This similarity structure approach to emergent properties unifies the synchronic and diachronic types by revealing that they only differ in how they delineate the domains of application of theories. This allows it to apply also to cases where the synchronic/diachronic distinction is unclear, such as spacetime emergence from theories of quantum gravity. In addition, I discuss two further case studies—finite periodicity in van der Pol oscillators and two-dimensional quasiparticles in the fractional quantum Hall effect—to facilitate comparison of this approach to others in the literature on concepts of emergence applicable to the sciences. My discussion of the fractional quantum Hall effect in particular may be of independent interest to philosophers of physics concerned with its interpretation.
Both early analytic philosophy and the branch of mathematics now known as topology were gestated and born in the early part of the 20th century. It is not well recognized that there was early interaction between the communities practicing and developing these fields. We trace the history of how topological ideas entered into analytic philosophy through two migrations, an earlier one conceiving of topology geometrically and a later one conceiving of topology algebraically. This allows us to reassess the influence and significance of topological methods for philosophy, including the possible fruitfulness of a third conception of topology as a structure determining similarity.
We implement a recent characterization of metaphysical indeterminacy in the context of orthodox quantum theory, developing the syntax and semantics of two propositional logics equipped with determinacy and indeterminacy operators. These logics, which extend a novel semantics for standard quantum logic that accounts for Hilbert spaces with superselection sectors, preserve different desirable features of quantum logic and logics of indeterminacy. In addition to comparing the relative advantages of the two, we also explain how each logic answers Williamson’s challenge to any substantive account of determinacy: For any proposition p, what could the difference between “p” and “it’s determinate that p” ever amount to?
The role or function of experimental and observational replication within empirical science has implications for how replication should be measured. Broadly, there seems to be consensus that replication’s central goal is to confirm or vouchsafe the reliability of scientific findings. I argue that if this consensus is correct, then most of the measures of replication used in the scientific literature are actually poor indicators of this reliability or confirmation. Only meta-analytic measures of replication align functionally with the goals of replication. I conclude by addressing some objections to meta-analysis.
This review concerns the notions of physical possibility and necessity as they are informed by contemporary physical theories and the reconstructive explications of past physical theories according to present standards. Its primary goal is twofold: first, to motivate and introduce a range of accessible issues of philosophical relevance around these notions; and second, to provide extensive references to the research literature on them. Although I will have occasion to comment on the direction and shape of this literature, pointing out certain lacunae in argument or scholarly attention, I intend to advance no overriding thesis or point of view, aside from the selection of issues I deem most interesting.
A “stopping rule” in a sequential experiment is a rule or procedure for deciding when that experiment should end. Accordingly, the “stopping rule principle” (SRP) states that, in a sequential experiment, the evidential relationship between the final data and a hypothesis under consideration does not depend on the experiment’s stopping rule: the same data should yield the same evidence, regardless of which stopping rule was used. In this essay, I reconstruct and rebut five independent arguments for the SRP. Reminding oneself that the stopping rule is a part of an experiment’s design and is no more mysterious than many other design aspects helps elucidate why some of these arguments for the SRP are unsound.
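The textbook illustration of what the SRP asserts (included for orientation; it is not original to this essay): nine successes in twelve Bernoulli trials yield proportional likelihood functions whether the experimenter fixed the number of trials in advance or resolved to stop at the third failure:

\[ L_{\mathrm{bin}}(p) = \binom{12}{9} p^{9} (1-p)^{3} , \qquad L_{\mathrm{nb}}(p) = \binom{11}{2} p^{9} (1-p)^{3} . \]

Since these differ only by a constant factor, any measure of evidence that depends on the data solely through the likelihood function treats the two experiments identically, exactly as the SRP demands.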
Based on three common interpretive commitments in general relativity, I raise a conceptual problem for the usual identification, in that theory, of timelike curves as those that represent the possible histories of particles in spacetime. This problem affords at least three different solutions, depending on different representational and ontological assumptions one makes about the nature of particles, fields, and their modal structure. While I advocate for a cautious pluralism regarding these options, I also suggest that re-interpreting particles as field processes offers the most promising route for natural integration with the physics of material phenomena, including quantum theory.
Much has been written as of late on the status of the physical Church-Turing thesis and the relation between physics and computer science in general. The following discussion will focus on one such article [5]. The purpose of these notes is not so much to argue for a particular thesis as it is to solicit a dialog that will help clarify our own thoughts.
The likelihood principle (LP) is typically understood as a constraint on any measure of evidence arising from a statistical experiment. It is not sufficiently often noted, however, that the LP assumes that the probability model giving rise to a particular concrete data set must be statistically adequate—it must “fit” the data sufficiently well. In practice, though, scientists must make modeling assumptions whose adequacy can nevertheless be verified using statistical tests. My present concern is to consider whether the LP applies to these techniques of model verification. If one does view model verification as part of the inferential procedures that the LP intends to constrain, then there are certain crucial tests of model verification that no known method satisfying the LP can perform. But if one does not, the degree to which these assumptions have been verified is bracketed from the evidential evaluation under the LP. Although I conclude from this that the LP cannot be a universal constraint on any measure of evidence, proponents of the LP may hold out for a restricted version thereof, either as a kind of “ideal” or as defining one among many different forms of evidence.