I argue that causation is a contrastive relation: c-rather-than-C* causes e-rather-than-E*, where C* and E* are contrast classes associated respectively with actual events c and e. I explain why this is an improvement on the traditional binary view, and develop a detailed definition. It turns out that causation is only well defined in ‘uniform’ cases, where either all or none of the members of C* are related appropriately to members of E*.
We make the case that the Prisoner’s Dilemma, notwithstanding its fame and the quantity of intellectual resources devoted to it, has largely failed to explain any phenomena of social scientific or biological interest. At the heart of the paper we examine in detail a famous purported example of the Prisoner’s Dilemma’s empirical success, namely Axelrod’s analysis of WWI trench warfare, and argue that this success is greatly overstated. Further, we explain why this negative verdict is likely true generally and not just in our case study. We also address some possible defenses of the Prisoner’s Dilemma.
I present a new definition of verisimilitude, framed in terms of causes. Roughly speaking, according to it a scientific model is approximately true if it captures accurately the strengths of the causes present in any given situation. Against much of the literature, I argue that any satisfactory account of verisimilitude must inevitably restrict its judgments to context-specific models rather than general theories. We may still endorse—and only need—a relativized notion of scientific progress, understood now not as global advance but rather as the mastering of particular problems. This also sheds new light on longstanding difficulties surrounding language-dependence and models committed to false ontologies.
The 1994 US spectrum auction is now a paradigmatic case of the successful use of microeconomic theory for policy-making. We use a detailed analysis of it to review standard accounts in philosophy of science of how idealized models are connected to messy reality. We show that in order to understand what made the design of the spectrum auction successful, a new such account is required, and we present it here. Of especial interest is the light this sheds on the issue of progress in economics. In particular, it enables us to get clear on exactly what has been progressing, and on exactly what theory has – and has not – contributed to that. This in turn has important implications for just what it is about economic theory that we should value.
We propose a novel account of the distinction between innate and acquired biological traits: biological traits are innate to the degree that they are caused by factors intrinsic to the organism at the time of its origin; they are acquired to the degree that they are caused by factors extrinsic to the organism. This account borrows from recent work on causation in order to make rigorous the notion of quantitative contributions to traits by different factors in development. We avoid the pitfalls of previous accounts and argue that the distinction between innate and acquired traits is scientifically useful. We therefore address not only previous accounts of innateness but also skeptics about any account. The two are linked, in that a better account of innateness also enables us better to address the skeptics.
Has the rise of data-intensive science, or ‘big data’, revolutionized our ability to predict? Does it imply a new priority for prediction over causal understanding, and a diminished role for theory and human experts? I examine four important cases where prediction is desirable: political elections, the weather, GDP, and the results of interventions suggested by economic experiments. These cases suggest caution. Although big data methods are indeed very useful sometimes, in this paper’s cases they improve predictions either limitedly or not at all, and their prospects of doing so in the future are limited too.
Much recent work in neuroscience aims to shed light on whether we have free will. Can it? Can any science? To answer, we need to disentangle different notions of free will, and clarify what we mean by ‘empirical’ and ‘testable’. That done, my main conclusion, duly interpreted, is that free will is not a testable hypothesis. In particular, it is neither verifiable nor falsifiable by empirical evidence. The arguments for this are not a priori but rather are based on a posteriori consideration of the relevant neuroscientific investigations, as well as on standard philosophy of science work on the notion of testability.
Denis Walsh has written a striking new defense in this journal of the statisticalist (i.e., noncausalist) position regarding the forces of evolution. I defend the causalist view against his new objections. I argue that the heart of the issue lies in the nature of nonadditive causation. Detailed consideration of that turns out to defuse Walsh’s ‘description‐dependence’ critique of causalism. Nevertheless, the critique does suggest a basis for reconciliation between the two competing views.
Julian Reiss correctly identified a trilemma about economic models: we cannot jointly maintain that they are false, that they nevertheless explain, and that only true accounts explain. In this reply we give reasons to reject the second premise, that economic models explain. Intuitions to the contrary should be distrusted.
I propose an analysis of harm in terms of causation: harm is when a subject is caused to be worse off. The pay-off from this lies in the details. In particular, importing influential recent work from the causation literature yields a contrastive-counterfactual account. This enables us to incorporate harm's multiple senses into a unified scheme, and to provide that scheme with theoretical ballast. It also enables us to respond effectively to previous criticisms of counterfactual accounts, as well as to sharpen criticisms of rival views.
Partial explanations are everywhere. That is, explanations citing causes that explain some but not all of an effect are ubiquitous across science, and these in turn rely on the notion of degree of explanation. I argue that current accounts are seriously deficient. In particular, they do not incorporate adequately the way in which a cause’s explanatory importance varies with choice of explanandum. Using influential recent contrastive theories, I develop quantitative definitions that remedy this lacuna, and relate it to existing measures of degree of causation. Among other things, this reveals the precise role here of chance, as well as bearing on the relation between causal explanation and causation itself.
Can purely predictive models be useful in investigating causal systems? I argue ‘yes’. Moreover, in many cases not only are they useful, they are essential. The alternative is to stick to models or mechanisms drawn from well-understood theory. But a necessary condition for explanation is empirical success, and in many cases in social and field sciences such success can only be achieved by purely predictive models, not by ones drawn from theory. Alas, the attempt to use theory to achieve explanation or insight without empirical success therefore fails, leaving us with the worst of both worlds – neither prediction nor explanation. Best go with empirical success by any means necessary. I support these methodological claims via case studies of two impressive feats of predictive modelling: opinion polling of political elections, and weather forecasting.
The statistical technique of analysis of variance is often used by biologists as a measure of causal factors’ relative strength or importance. I argue that it is a tool ill suited to this purpose, on several grounds. I suggest a superior alternative, and outline some implications. I finish with a diagnosis of the source of error – an unwitting inheritance of bad philosophy that now requires the remedy of better philosophy.
The causal impacts of genes and environment on any one biological trait are inextricably entangled, and consequently it is widely accepted that it makes no sense in singleton cases to privilege either factor for particular credit. On the other hand, at a population level it may well be the case that one of the factors is responsible for more variation than the other. Standard methodological practice in biology uses the statistical technique of analysis of variance to measure this latter kind of causal efficacy. In this paper, I argue that: 1) analysis of variance is in fact badly suited to this role; and …
Election prediction by means of opinion polling is a rare empirical success story for social science. I examine the details of a prominent case, drawing two lessons of more general interest. First, methodology over metaphysics: traditional metaphysical criteria were not a useful guide to whether successful prediction would be possible; instead, the crucial thing was selecting an effective methodology. Second, which methodology? Success required sophisticated use of case-specific evidence from opinion polling. The pursuit of explanations via general theory or causal mechanisms, by contrast, turned out to be precisely the wrong path – contrary to much recent philosophy of social science.
Comparisons of causal efficacy are ubiquitous in the practice of science and indeed everyday life. I focus on just one aspect of this task – one to my knowledge nowhere yet addressed satisfactorily – namely, comparing the efficacies of two causes that work in apparently incommensurable ways. Contrary to common opinion I argue that, to be comparable, it is neither necessary nor sufficient that two causes also be commensurable.
What kind of epidemiological modeling works well? This is determined by the nature of the target: the relevant causal relations are unstable across contexts. I look at two influential examples of modeling from the Covid pandemic. The first is the paper from Imperial College London, which, in March 2020, was influential in persuading the UK government to impose a lockdown. Because it assumes stability, this first example of modeling fails. A different modeling strategy is required, one less ambitious but more effective. This is illustrated by a second paper from Imperial College London, which, in December 2020, first estimated the transmissibility of the Alpha variant.
Explanatory weightings, whereby some causes are deemed more important than others, are ubiquitous in historical studies. Drawing from influential recent work on causation, I develop a definition of causal-explanatory strength. This makes clear exactly which aspects of explanatory weighting are subjective and which objective. It also sheds new light on several traditional issues, showing for instance that: underlying causes need not be more important than proximate ones; several different causes can each be responsible for most of an effect; small causes need not be less important than big ones; and non-additive interactive effects between causes present no particular difficulty. Key words: causation, explanation, history, interaction, proximate, underlying.
A definition of causation as probability-raising is threatened by two kinds of counterexample: first, when a cause lowers the probability of its effect; and second, when the probability of an effect is raised by a non-cause. In this paper, I present an account that deals successfully with problem cases of both these kinds. In doing so, I also explore some novel implications of incorporating into the metaphysical investigation considerations of causal psychology.
Pre-emption cases have been taken by almost everyone to imply the unviability of the simple counterfactual theory of causation. Yet there is ample motivation from scientific practice to endorse a simple version of the theory if we can. There is a way in which a simple counterfactual theory, at least if understood contrastively, can be supported even while acknowledging that intuition goes firmly against it in pre-emption cases—or rather, only in some of those cases. For I present several new pre-emption cases in which causal intuition does not go against the counterfactual theory, a fact that has been verified experimentally. I suggest an account of framing effects that can square the circle. Crucially, this account offers hope of theoretical salvation—but only to the counterfactual theory of causation, not to others. Again, there is (admittedly only preliminary) experimental support for this account.
This paper responds to Kenneth Waters’s account of actual difference making. Among other things, I argue that although Waters is right that researchers may sometimes be justified in focusing on genes rather than other causes of phenotypic traits, he is wrong that the apparatus of actual difference makers overcomes the traditional causal parity thesis.
Standard statistical measures of strength of association, although pioneered by Pearson deliberately to be acausal, nowadays are routinely used to measure causal efficacy. But their acausal origins have left them ill suited to this latter purpose. I distinguish between two different conceptions of causal efficacy, and argue that: 1) both conceptions can be useful; 2) the statistical measures only attempt to capture the first of them; 3) they are not fully successful even at this; and 4) an alternative definition more squarely based on causal thinking not only captures the second conception, it can also capture the first one better too.
Reflexivity is, roughly, when studying or theorising about a target itself influences that target. Fragility is, roughly, when causal or other relations are hard to predict, holding only intermittently or fleetingly. Which is more important, methodologically? By going systematically through cases that do and do not feature each of them, I conclude that it is fragility that matters, not reflexivity. In this light, I interpret and extend the claims made about reflexivity in a recent paper by Jessica Laimann. I finish by assessing the benefits and costs of a focus on reflexivity.
Comparing different causes’ importance, and apportioning responsibility between them, requires making good sense of the notion of partial explanation, that is, of degree of explanation. How much is this subjective, how much objective? If the causes in question are probabilistic, how much is the outcome due to them and how much to simple chance? I formulate the notion of degree of causation, or effect size, relating it to influential recent work in the literature on causation. I examine to what extent mainstream social science methods – both quantitative and qualitative – succeed in establishing effect sizes so understood. The answer turns out to be, roughly: only to some extent. Next, the standard understanding of effect size, even though widespread, still has several underappreciated consequences. I detail some of those. Finally, I discuss the separate issue of explanandum-dependence, which is essential to assessing any cause’s explanatory importance and yet which has been comparatively neglected.
I use a contrastive theory of causal explanation to analyze the notion of a genetic trait. The resulting definition is relational, an implication of which is that no trait is genetic always and everywhere. Rather, every trait may be either genetic or non-genetic, depending on explanatory context. I also outline some other advantages of connecting the debate to the wider causation literature, including how that yields us an account of the distinction between genetic traits and genetic dispositions.
Should we insist on prediction, i.e. on correctly forecasting the future? Or can we rest content with accommodation, i.e. empirical success only with respect to the past? I apply general considerations about this issue to the case of economics. In particular, I examine various ways in which mere accommodation can be sufficient, in order to see whether those ways apply to economics. Two conclusions result. First, an entanglement thesis: the need for prediction is entangled with the methodological role of orthodox economic theory. Second, a conditional predictivism: if we are not committed to orthodox economic theory, then we should demand prediction rather than accommodation – against most current practice.
To succeed, political science usually requires either prediction or contextual historical work. Both of these methods favor explanations that are narrow-scope, applying to only one or a few cases. Because of the difficulty of prediction, the main focus of political science should often be contextual historical work. These epistemological conclusions follow from the ubiquity of causal fragility, under-determination, and noise. They tell against several practices that are widespread in the discipline: wide-scope retrospective testing, such as much large-n statistical work; lack of emphasis on prediction; and resources devoted to ‘pure theory’ divorced from frequent empirical application. I illustrate, via Donatella della Porta’s work on political violence, the important role that is still left for theory. I conclude by assessing the scope for political science to offer policy advice.
This is a chapter written for a popular audience, in which I use poker as a convenient illustration of probability, determinism and counterfactuals. More originally, I also discuss the roles of rationality versus psychological hunches, and explain why even in principle game theory cannot provide us the panacea of a perfect winning strategy. (N.B. The document I have uploaded here is slightly longer than the abbreviated version that appears in the book, and also differs in a few other minor details.)
I present a new case in which the Doomsday Argument runs afoul of epistemic intuition much more strongly than before. This leads to a dilemma: in the new case either DA is committed to unacceptable counterintuitiveness and belief in miracles, or else it is irrelevant. I then explore under what conditions DA can escape this dilemma. The discussion turns on several issues that have not been much emphasised in previous work on DA: a concern that I label trumping; the degree of uncertainty about relevant probability estimates; and the exact sequence in which we integrate DA and empirical concerns. I conclude that only given a particular configuration of these factors might DA still be of interest.
In this book chapter written for a popular audience, I discuss classic issues surrounding luck, determinism and probability in the context of the penalty shoot-outs used in football’s World Cup. Can it ever make objective sense to blame an outcome on bad luck? I go on to discuss whether we can legitimately pin the blame on any one factor at all, such as a referee. This takes us into issues surrounding the apportioning of causal responsibility.
It is often claimed that only experiments can support strong causal inferences and therefore they should be privileged in the behavioral sciences. We disagree. Overvaluing experiments results in their overuse both by researchers and decision-makers, and in an underappreciation of their shortcomings. Neglecting other methods often follows. Experiments can suggest whether X causes Y in a specific experimental setting; however, they often fail to elucidate either the mechanisms responsible for an effect, or the strength of an effect in everyday natural settings. In this paper, we consider two overarching issues. First, experiments have important limitations. We highlight problems with external, construct, statistical-conclusion, and internal validity; with replicability; and with conceptual issues associated with simple X-causes-Y thinking. Second, quasi-experimental and non-experimental methods are absolutely essential. As well as themselves estimating causal effects, these other methods can provide information and understanding that goes beyond that provided by experiments. A research program progresses best when experiments are not treated as privileged but instead are combined with these other methods.
Jonathan Schaffer (2004) proposes an ingenious amendment to David Lewis's semantics for counterfactuals. This amendment explicitly invokes the notion of causal independence, thus giving up Lewis's ambitions for a reductive counterfactual account of causation. But in return, it rescues Lewis's semantics from extant counterexamples. I present a new counterexample that defeats even Schaffer's amendment. Further, I argue that a better approach would be to follow the causal modelling literature and evaluate counterfactuals via an explicit postulated causal structure. This alternative approach easily resolves the new counterexample, as well as all the previous ones. Up to now, its perceived drawback relative to Lewis's scheme has been its non-reductiveness. But since the same drawback applies equally to Schaffer's amended scheme, this becomes no longer a point of comparative disadvantage.
Although a huge range of definitions has accumulated in the philosophy, biology and psychology literatures, no consensus has been reached on exactly what innateness amounts to. This has helped fuel an increasing skepticism, one that views the concept as anachronistic and actually harmful to science. Yet it remains central to many life sciences, and to several public policy issues too. So it is correspondingly urgent that its philosophical underpinnings be properly cleaned up. In this paper, I present a new approach that endorses a role in science for innateness while also accommodating many of the skeptical concerns. The key to squaring the circle is to import influential recent work on causal explanation. My thesis is that ascriptions of innateness are best seen as explanatory claims. The account that results has three main original features: 1) Innateness is a pragmatic, relational concept. Every trait may be either innate or non-innate, depending on explanatory context. 2) There is an important distinction between innate traits and innate dispositions. 3) Innateness is useful to science as a higher-level predicate that licenses interventions. It is thereby also clarified what ascriptions of innateness do not tell us.