Theories of explanation need to account for a puzzling feature of our explanatory practices: the fact that we prefer explanations that are relatively abstract, but only moderately so. Contra Franklin-Hall, I argue that the interventionist account of explanation provides a natural and elegant explanation of this fact. By striking the right balance between specificity and generality, moderately abstract explanations optimally subserve what interventionists regard as the goal of explanation, namely identifying possible interventions that would have changed the explanandum.
Much recent work on explanation in the interventionist tradition emphasizes the explanatory value of stable causal generalizations—i.e., causal generalizations that remain true in a wide range of background circumstances. We argue that two separate explanatory virtues are lumped together under the heading of 'stability'. We call these two virtues breadth and guidance, respectively. In our view, these two virtues are importantly distinct, but this fact is neglected, or at least under-appreciated, in the literature on stability. We argue that an adequate theory of explanatory goodness should recognize breadth and guidance as distinct virtues: breadth and guidance track different ideals of explanation, satisfy different cognitive and pragmatic ends, and play different theoretical roles in helping us understand the explanatory value of mechanisms. Keeping track of the distinction between these two forms of stability thus yields a more accurate and perspicuous picture of the role that stability considerations play in explanation.
Occam's razor—the idea that, all else being equal, we should pick the simpler hypothesis—plays a prominent role in ordinary and scientific inference. But why are simpler hypotheses better? One attractive answer, known as the Bayesian Occam's razor (BOR), is that more complex hypotheses tend to be more flexible—they can accommodate a wider range of possible data—and that this flexibility is automatically penalized by Bayesian inference. In two experiments, we provide evidence that people's intuitive probabilistic and explanatory judgments follow the prescriptions of BOR. In particular, people's judgments are consistent with the two most distinctive characteristics of BOR: they penalize hypotheses as a function not only of their number of free parameters but also of the size of their parameter space, and they penalize those hypotheses even when their parameters can be “tuned” to fit the data better than comparatively simpler hypotheses.
We report three experiments investigating whether people’s judgments about causal relationships are sensitive to the robustness or stability of such relationships across a range of background circumstances. In Experiment 1, we demonstrate that people are more willing to endorse causal and explanatory claims based on stable (as opposed to unstable) relationships, even when the overall causal strength of the relationship is held constant. In Experiment 2, we show that this effect is not driven by a causal generalization’s actual scope of application. In Experiment 3, we offer evidence that stable causal relationships may be seen as better guides to action. Collectively, these experiments document a previously underappreciated factor that shapes people’s causal reasoning: the stability of the causal relationship.
More than a century ago, Russell launched a forceful attack on causation, arguing not only that modern physics has no need for causal notions but also that our belief in causation is a relic of a pre-scientific view of the world. He thereby initiated a debate about the relations between physics and causation that remains very much alive today. While virtually everybody nowadays rejects Russell's causal eliminativism, many philosophers have been convinced by Russell that the fundamental physical structure of our world doesn't contain causal relations. This raises the question of how to reconcile the central role of causal concepts in the special sciences and in common sense with the putative absence of causation in fundamental physics.
Intuitions play an important role in the debate on the causal status of high‐level properties. For instance, Kim has claimed that his “exclusion argument” relies on “a perfectly intuitive … understanding of the causal relation.” We report the results of three experiments examining whether laypeople really have the relevant intuitions. We find little support for Kim's view and the principles on which it relies. Instead, we find that laypeople are willing to count both a multiply realized property and its realizers as causes, and regard the systematic overdetermination implied by this view as unproblematic.
In recent years, an active research program has emerged that aims to develop a Humean best-system account (BSA) of laws of nature that improves on Lewis’s canonical articulation of the view. Its guiding idea is that the laws are cognitive tools tailored to the specific needs and limitations of creatures like us. While current versions of this “pragmatic Humean” research program fare much better than Lewis’s account along many dimensions, I will argue that they have trouble making sense of certain key features of the practice of fundamental physics. Indeed, these features seem to go against the very idea that laws are useful for agents like us. In my view, Humeans can address these issues by paying more attention to the explanatory role of laws. Following this idea, I will propose an account on which what makes a systematization the best is a kind of explanatory power, understood along the lines of the unificationist theory of explanation. The resulting view, I will argue, can make sense of those features of laws that other pragmatic accounts of laws have trouble explaining.
Proponents of inference to the best explanation (IBE) claim that the ability of a hypothesis to explain a range of phenomena in a unifying way contributes to the hypothesis’s credibility in light of these phenomena. I propose a Bayesian justification of this claim that reveals a hitherto unnoticed role for explanatory unification in evaluating the plausibility of a hypothesis: considerations of explanatory unification enter into the determination of a hypothesis’s prior by affecting its ‘explanatory coherence’, that is, the extent to which the hypothesis offers mutually cohesive explanations of various phenomena.
In recent years the notion of biological specificity has attracted significant philosophical attention. This paper focuses on host specificity, a kind of biological specificity that has not yet been discussed by philosophers, and which concerns the extent to which a species is selective in the range of other species it exploits for feeding and/or reproduction. Host specificity is an important notion in ecology, where it plays a variety of theoretical roles. Here I focus on the role of host specificity in biological control, a field of applied ecology that deals with the suppression of pests through the use of living organisms. Examining host specificity and its role in biological control yields several valuable contributions to our understanding of biological specificity. In particular, I argue that host specificity cannot be fully understood in terms of Woodward’s well-known account of causal specificity. To adequately account for host specificity, we need a notion of causal specificity that takes into consideration the extent to which a variable’s effects are similar to one another – a dimension not captured in Woodward’s account. In addition, the literature on host specificity in biological control highlights certain aspects in which causally specific relationships can be practically valuable that have not yet been addressed in philosophical discussions of specificity. That literature also reveals that in certain contexts specificity can hinder rather than foster effective control, thus leading to a nuanced assessment of the practical value of specific causes.
The epidemiologist Bradford Hill famously argued that in epidemiology, specificity of association (roughly, the fact that an environmental or behavioral risk factor is associated with just one or at most a few medical outcomes) is strong evidence of causation. Prominent epidemiologists have dismissed Hill’s claim on the ground that it relies on a dubious ‘one-cause, one-effect’ model of disease causation. The paper examines this methodological controversy and argues that specificity considerations do have a useful role to play in causal inference in epidemiology. More precisely, I argue that specificity considerations help solve a pervasive inferential problem in contemporary epidemiology: the problem of determining whether an exposure-outcome correlation might be due to confounding by a social factor. This examination of specificity has interesting consequences for our understanding of the methodology of epidemiology. It highlights how the methodology of epidemiology relies on local tools designed to address specific inference problems peculiar to the discipline, and shows that observational causal inference in epidemiology can proceed with little prior knowledge of the causal structure of the phenomenon investigated. I also argue that specificity of association cannot (despite claims to the contrary) be entirely explained in terms of Woodward’s well-known concept of “one-to-one” causal specificity. This is because specificity as understood by epidemiologists depends on whether an exposure (or outcome) is associated with a ‘heterogeneous’ set of variables. This dimension of heterogeneity is not captured in Woodward’s notion, but is crucial for understanding the evidential import of specificity of association.
This paper argues that the interventionist account of causation faces a dilemma concerning macroscopic causation – i.e., causation by composite objects. Interventionism must either require interventions on a composite object to hold the behavior of its parts fixed, or allow such interventions to vary the behavior of those parts. The first option runs the risk of making wholes causally excluded by their parts, while the second runs the risk of mistakenly ascribing to wholes causal abilities that belong to their parts only. Using as a starting point Baumgartner’s well-known argument that interventionism leads to causal exclusion of multiply realized properties, I first show that a similar interventionist exclusion argument can be mounted against the causal efficacy of composite objects. I then show that Woodward’s (2015) updated interventionist account (explicitly designed to address exclusion worries) avoids this problem but runs into an opposite issue of over-inclusion: it grants to composites causal abilities that belong to their parts only. Finally, I examine two other interventionist accounts designed to address Baumgartner’s argument, and show that when applied to composites, they too fall on one horn (exclusion) or the other (over-inclusion) of the dilemma. I conclude that the dilemma constitutes an open and difficult issue for interventionism.
This paper argues that the knowledge asymmetry (the fact that we know more about the past than the future) can be explained as a consequence of the causal Markov condition. The causal Markov condition implies that causes of a common effect are generally statistically independent, whereas effects of a common cause are generally correlated. I show that together with certain facts about the physics of our world, the statistical independence of causes severely limits our ability to predict the future, whereas correlations between joint effects make it so that no such limitation holds in the reverse temporal direction. Insofar as the fact that our world conforms to the causal Markov condition can itself be explained in terms of the initial conditions of the universe, my view is compatible with Albert’s well-known account of the origins of temporal asymmetries, but also provides a more illuminating way to derive the knowledge asymmetry from those initial conditions.
Moral responsibility is an issue at the heart of the free-will debate. How we can have moral responsibility in a deterministic world is an interesting and puzzling question. Compatibilist arguments have left open the possibility that the ability to do otherwise is not required for moral responsibility. The challenge, then, is to identify what our attributions of moral responsibility are tracking. To do this, we need criteria that can adequately differentiate cases in which the agent is responsible from cases in which the agent is not. I argue that an agent is responsible for the consequences of an action if they stem, in an appropriate way, from the agent's deep values and desires. These deep values and desires make up the Deep Self. Parts of the Deep Self, first, tend to be enduring; second, desires within it tend to be general; third, they tend to be reflectively endorsed by the agent; fourth, these traits are often central to the agent's self-conception; and fifth, they are not generally in extreme conflict with other deep traits. Empirical work is drawn upon to help develop a suitable account of what deserves to be called a part of the Deep Self. I also strengthen and extend this view by considering issues of poor judgement and weakness of will, and when and how we can be considered responsible for them.
I raise two issues for Machery's discussion and interpretation of the theory-theory. First, I raise an objection against Machery's claim that theory-theorists take theories to be default bodies of knowledge. Second, I argue that theory-theorists' experimental results do not support Machery's contention that default bodies of knowledge include theories used in their own proprietary kind of categorization process.