Objective Bayesianism is a methodological theory that is currently applied in statistics, philosophy, artificial intelligence, physics and other sciences. This book develops the formal and philosophical foundations of the theory, at a level accessible to a graduate student with some familiarity with mathematical notation.
After a decade of intense debate about mechanisms, there is still no consensus characterization. In this paper we argue for a characterization that applies widely to mechanisms across the sciences. We examine the major current contenders for characterizations of mechanisms and defend our points of disagreement with each. Ultimately, we indicate that the major contenders can all sign up to our characterization.
We argue that the health sciences make causal claims on the basis of evidence both of physical mechanisms and of probabilistic dependencies. Consequently, an analysis of causality solely in terms of physical mechanisms or solely in terms of probabilistic relationships does not do justice to the causal claims of these sciences. Yet there seems to be a single relation of cause in these sciences - pluralism about causality will not do either. Instead, we maintain, the health sciences require a theory of causality that unifies its mechanistic and probabilistic aspects. We argue that the epistemic theory of causality provides the required unification.
Evidence-based medicine (EBM) makes use of explicit procedures for grading evidence for causal claims. Normally, these procedures categorise evidence of correlation produced by statistical trials as better evidence for a causal claim than evidence of mechanisms produced by other methods. We argue, in contrast, that evidence of mechanisms needs to be viewed as complementary to, rather than inferior to, evidence of correlation. In this paper we first set out the case for treating evidence of mechanisms alongside evidence of correlation in explicit protocols for evaluating evidence. Next we provide case studies which exemplify the ways in which evidence of mechanisms complements evidence of correlation in practice. Finally, we put forward some general considerations as to how the two sorts of evidence can be more closely integrated by EBM.
Bayesian nets are widely used in artificial intelligence as a calculus for causal reasoning, enabling machines to make predictions, perform diagnoses, take decisions and even to discover causal relationships. This book, aimed at researchers and graduate students in computer science, mathematics and philosophy, brings together two important research topics: how to automate reasoning in artificial intelligence, and the nature of causality and probability in philosophy.
Evidential Pluralism maintains that in order to establish a causal claim one normally needs to establish the existence of an appropriate conditional correlation and the existence of an appropriate mechanism complex, so when assessing a causal claim one ought to consider both association studies and mechanistic studies. Hitherto, Evidential Pluralism has been applied to medicine, leading to the EBM+ programme, which recommends that evidence-based medicine should systematically evaluate mechanistic studies alongside clinical studies. This paper argues that Evidential Pluralism can also be fruitfully applied to the social sciences. In particular, Evidential Pluralism provides (i) a new approach to evidence-based policy; (ii) an account of the evidential relationships in more theoretical research; and (iii) new philosophical motivation for mixed methods research. The application of Evidential Pluralism to the social sciences is also defended against two objections.
The use of evidence in medicine is something we should continuously seek to improve. This book seeks to develop our understanding of evidence of mechanism in evaluating evidence in medicine, public health, and social care; and also offers tools to help implement improved assessment of evidence of mechanism in practice. In this way, the book offers a bridge between more theoretical and conceptual insights and worries about evidence of mechanism and practical means to fit the results into evidence assessment procedures.
Russo and Williamson put forward the following thesis: in order to establish a causal claim in medicine, one normally needs to establish both that the putative cause and putative effect are appropriately correlated and that there is some underlying mechanism that can account for this correlation. I argue that, although the Russo-Williamson thesis conflicts with the tenets of present-day evidence-based medicine, it offers a better causal epistemology than that provided by present-day EBM because it better explains two key aspects of causal discovery. First, the thesis better explains the role of clinical studies in establishing causal claims. Second, it yields a better account of extrapolation.
We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism.
Mechanistic philosophy of science views a large part of scientific activity as engaged in modelling mechanisms. While science textbooks tend to offer qualitative models of mechanisms, there is increasing demand for models from which one can draw quantitative predictions and explanations. Casini et al. (Theoria 26(1):5–33, 2011) put forward the Recursive Bayesian Networks (RBN) formalism as well suited to this end. The RBN formalism is an extension of the standard Bayesian net formalism, an extension that allows for modelling the hierarchical nature of mechanisms. Like the standard Bayesian net formalism, it models causal relationships using directed acyclic graphs. Given this appeal to acyclicity, causal cycles pose a prima facie problem for the RBN approach. This paper argues that the problem is a significant one given the ubiquity of causal cycles in mechanisms, but that the problem can be solved by combining two sorts of solution strategy in a judicious way.
According to current hierarchies of evidence for EBM, evidence of correlation is always more important than evidence of mechanisms when evaluating and establishing causal claims. We argue that evidence of mechanisms needs to be treated alongside evidence of correlation. This is for three reasons. First, correlation is always a fallible indicator of causation, subject in particular to the problem of confounding; evidence of mechanisms can in some cases be more important than evidence of correlation when assessing a causal claim. Second, evidence of mechanisms is often required in order to obtain evidence of correlation. Third, evidence of mechanisms is often required in order to generalise and apply causal claims. While the EBM movement has been enormously successful in making explicit and critically examining one aspect of our evidential practice, i.e., evidence of correlation, we wish to extend this line of work to make explicit and critically examine a second aspect of our evidential practices: evidence of mechanisms.
Causal claims in biomedical contexts are ubiquitous, although they are not always made explicit. This paper addresses the question of what causal claims mean in the context of disease. It is argued that in medical contexts causality ought to be interpreted according to the epistemic theory. The epistemic theory offers an alternative to traditional accounts that cash out causation either in terms of “difference-making” relations or in terms of mechanisms. According to the epistemic approach, causal claims tell us about which inferences (e.g., diagnoses and prognoses) are appropriate, rather than about the presence of some physical causal relation analogous to distance or gravitational attraction. It is shown that the epistemic theory has important consequences for medical practice, in particular with regard to evidence-based causal assessment.
Logic is a field studied mainly by researchers and students of philosophy, mathematics and computing. Inductive logic seeks to determine the extent to which the premises of an argument entail its conclusion, aiming to provide a theory of how one should reason in the face of uncertainty. It has applications to decision making and artificial intelligence, as well as how scientists should reason when not in possession of the full facts. In this work, Jon Williamson embarks on a quest to find a general, reasonable, applicable inductive logic (GRAIL), all the while examining why pioneers such as Ludwig Wittgenstein and Rudolf Carnap did not entirely succeed in this task.
In this paper, we compare the mechanisms of protein synthesis and natural selection. We identify three core elements of mechanistic explanation: functional individuation, hierarchical nestedness or decomposition, and organization. These are now well understood elements of mechanistic explanation in fields such as protein synthesis, and widely accepted in the mechanisms literature. But Skipper and Millstein have argued that natural selection is neither decomposable nor organized. This would mean that much of the current mechanisms literature does not apply to the mechanism of natural selection. We take each element of mechanistic explanation in turn. Having appreciated the importance of functional individuation, we show how decomposition and organization should be better understood in these terms. We thereby show that mechanistic explanation by protein synthesis and natural selection are more closely analogous than they appear—both possess all three of these core elements of a mechanism widely recognized in the mechanisms literature.
While there are several arguments on either side, it is far from clear whether countable additivity is an acceptable axiom of subjective probability. I focus here on de Finetti's central argument against countable additivity and provide a new Dutch book proof of the principle, to argue that, if we accept the Dutch book foundations of subjective probability, countable additivity is an unavoidable constraint.
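The Dutch book style of argument mentioned in this abstract can be illustrated in miniature. The sketch below shows only the simplest, finite case (de Finetti's argument against countable additivity extends the idea to infinitely many bets); the credences are hypothetical numbers chosen for illustration.

```python
# A minimal sketch of a Dutch book against incoherent credences.
# The agent's degrees of belief in A and not-A are hypothetical and
# sum to less than 1, violating the probability axioms.
belief_A, belief_not_A = 0.25, 0.5

# The agent values a bet paying 1 if A at belief_A, and likewise for
# not-A, so she sells both bets to the bookie at those prices.
stake_received = belief_A + belief_not_A   # 0.75 received up front
payout_owed = 1.0                          # exactly one of the bets wins

# Whichever of A, not-A obtains, the agent suffers the same sure loss.
agent_net = {outcome: stake_received - payout_owed
             for outcome in ("A", "not-A")}
print(agent_net)  # {'A': -0.25, 'not-A': -0.25}
```

A sure loss in every possible outcome is what makes the credences irrational by Dutch book lights; the paper's contribution is to run this style of argument with a countable family of bets.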
The epistemic theory of causality is analogous to epistemic theories of probability. Most proponents of epistemic probability would argue that one's degrees of belief should be calibrated to chances, insofar as one has evidence of chances. The question arises as to whether causal beliefs should satisfy an analogous calibration norm. In this paper, I formulate a particular version of a norm requiring calibration to chances and argue that this norm is the most fundamental evidential norm for epistemic probability. I then develop an analogous calibration norm for epistemic causality, argue that it is the *only* evidential norm required for epistemic causality, and show how an epistemic account of causality that incorporates this norm can be used to analyse objective causal relationships.
Why do ideas of how mechanisms relate to causality and probability differ so much across the sciences? Can progress in understanding the tools of causal inference in some sciences lead to progress in others? This book tackles these questions and others concerning the use of causality in the sciences.
How should we reason with causal relationships? Much recent work on this question has been devoted to the theses (i) that Bayesian nets provide a calculus for causal reasoning and (ii) that we can learn causal relationships by the automated learning of Bayesian nets from observational data. The aim of this book is to..
A normative Bayesian theory of deliberation and judgement requires a procedure for merging the evidence of a collection of agents. In order to provide such a procedure, one needs to ask what the evidence is that grounds Bayesian probabilities. After finding fault with several views on the nature of evidence, it is argued that evidence is whatever is rationally taken for granted. This view is shown to have consequences for an account of merging evidence, and it is argued that standard axioms for merging need to be altered somewhat.
This paper addresses questions about how the levels of causality (generic and single-case causality) are related. One question is epistemological: can relationships at one level be evidence for relationships at the other level? We present three kinds of answer to this question, categorised according to whether inference is top-down, bottom-up, or the levels are independent. A second question is metaphysical: can relationships at one level be reduced to relationships at the other level? We present three kinds of answer to this second question, categorised according to whether single-case relations are reduced to generic, generic relations are reduced to single-case, or the levels are independent. We then explore causal inference in autopsy. This is an interesting case study, we argue, because it refutes all three epistemologies and all three metaphysics. We close by sketching an account of causality that survives autopsy—the epistemic theory.
This paper highlights the role of Lewis’ Principal Principle and certain auxiliary conditions on admissibility as serving to explicate normal informal standards of what is reasonable. These considerations motivate the presuppositions of the argument that the Principal Principle implies the Principle of Indifference, put forward by Hawthorne et al. They also suggest a line of response to recent criticisms of that argument, due to Pettigrew and to Titelbaum and Hart (621–632, 2020). The paper also shows that related concerns of Hart and Titelbaum (252–262, 2015) do not undermine the argument of Hawthorne et al.
This paper poses a problem for Lewis’ Principal Principle in a subjective Bayesian framework: we show that, where chances inform degrees of belief, subjective Bayesianism fails to validate normal informal standards of what is reasonable. This problem points to a tension between the Principal Principle and the claim that conditional degrees of belief are conditional probabilities. However, one version of objective Bayesianism has a straightforward resolution to this problem, because it avoids this latter claim. The problem, then, offers some support to this version of objective Bayesianism.
Mechanisms have become much-discussed, yet there is still no consensus on how to characterise them. In this paper, we start with something everyone is agreed on – that mechanisms explain – and investigate what constraints this imposes on our metaphysics of mechanisms. We examine two widely shared premises about how to understand mechanistic explanation: (1) that mechanistic explanation offers a welcome alternative to traditional laws-based explanation and (2) that there are two senses of mechanistic explanation that we call ‘epistemic explanation’ and ‘physical explanation’. We argue that mechanistic explanation requires that mechanisms are both real and local. We then go on to argue that real, local mechanisms require a broadly active metaphysics for mechanisms, such as a capacities metaphysics.
Objective Bayesianism has been criticised on the grounds that objective Bayesian updating, which on a finite outcome space appeals to the maximum entropy principle, differs from Bayesian conditionalisation. The main task of this paper is to show that this objection backfires: the difference between the two forms of updating reflects negatively on Bayesian conditionalisation rather than on objective Bayesian updating. The paper also reviews some existing criticisms and justifications of conditionalisation, arguing in particular that the diachronic Dutch book justification fails because diachronic Dutch book arguments are subject to a reductio: in certain circumstances one can Dutch book an agent however she changes her degrees of belief. One may also criticise objective Bayesianism on the grounds that its norms are not compulsory but voluntary, the result of a stance. It is argued that this second objection also misses the mark, since objective Bayesian norms are tied up in the very notion of degrees of belief.
It is tempting to analyse causality in terms of just one of the indicators of causal relationships, e.g., mechanisms, probabilistic dependencies or independencies, counterfactual conditionals or agency considerations. While such an analysis will surely shed light on some aspect of our concept of cause, it will fail to capture the whole, rather multifarious, notion. So one might instead plump for pluralism: a different analysis for a different occasion. But we do not seem to have lots of different concepts of cause, just one eclectic notion. The resolution of this conundrum, I think, requires us to accept that our causal beliefs are generated by a wide variety of indicators, but to deny that this variety of indicators yields a variety of concepts of cause. This focus on the relation between evidence and causal beliefs leads to what I call epistemic causality. Under this view, certain causal beliefs are appropriate or rational on the basis of observed evidence; our notion of cause can be understood purely in terms of these rational beliefs. Causality, then, is a feature of our epistemic representation of the world, rather than of the world itself. This yields one, multifaceted notion of cause.
The Recursive Bayesian Net (RBN) formalism was originally developed for modelling nested causal relationships. In this paper we argue that the formalism can also be applied to modelling the hierarchical structure of mechanisms. The resulting network contains quantitative information about probabilities, as well as qualitative information about mechanistic structure and causal relations. Since information about probabilities, mechanisms and causal relations is vital for prediction, explanation and control respectively, an RBN can be applied to all these tasks. We show in particular how a simple two-level RBN can be used to model a mechanism in cancer science. The higher level of our model contains variables at the clinical level, while the lower level maps the structure of the cell's mechanism for apoptosis.
In this paper, we examine what is to be said in defence of Machamer, Darden and Craver’s (MDC) controversial dualism about activities and entities (Machamer, Darden and Craver, Philos Sci 67:1–25, 2000). We explain why we believe the notion of an activity to be a novel, valuable one, and set about clearing away some initial objections that can lead to its being brushed aside unexamined. We argue that substantive debate about ontology can only be effective when desiderata for an ontology are explicitly articulated. We distinguish three such desiderata. The first is a more permissive descriptive ontology of science, the second a more reductive ontology prioritising understanding, and the third a more reductive ontology prioritising minimalism. We compare MDC’s entities-activities ontology to its closest rival, the entities-capacities ontology, and argue that the entities-activities ontology does better on all three desiderata.
Objective Bayesian epistemology invokes three norms: the strengths of our beliefs should be probabilities, they should be calibrated to our evidence of physical probabilities, and they should otherwise equivocate sufficiently between the basic propositions that we can express. The three norms are sometimes explicated by appealing to the maximum entropy principle, which says that a belief function should be a probability function, from all those that are calibrated to evidence, that has maximum entropy. However, the three norms of objective Bayesianism are usually justified in different ways. In this paper we show that the three norms can all be subsumed under a single justification in terms of minimising worst-case expected loss. This, in turn, is equivalent to maximising a generalised notion of entropy. We suggest that requiring language invariance, in addition to minimising worst-case expected loss, motivates maximisation of standard entropy as opposed to maximisation of other instances of generalised entropy. Our argument also provides a qualified justification for updating degrees of belief by Bayesian conditionalisation. However, conditional probabilities play a less central part in the objective Bayesian account than they do under the subjective view of Bayesianism, leading to a reduced role for Bayes’ Theorem.
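The maximum entropy principle mentioned in this abstract can be illustrated with a small numerical sketch. Consider a toy domain of four atomic states over two propositions A and B, with the (hypothetical) evidence calibrating P(A) = 0.7. The equivocal belief function, which leaves B uniform and independent of A, has higher Shannon entropy than rival belief functions satisfying the same constraint:

```python
from math import log

def entropy(p):
    """Shannon entropy of a probability vector (taking 0 log 0 = 0)."""
    return -sum(x * log(x) for x in p if x > 0)

# Atomic states, in order: (A&B, A&~B, ~A&B, ~A&~B).
# Evidence calibrates P(A) = 0.7; belief should otherwise equivocate.
maxent = [0.35, 0.35, 0.15, 0.15]   # B uniform and independent of A

# Rival belief functions that also satisfy P(A) = 0.7:
rivals = [[0.7, 0.0, 0.15, 0.15],
          [0.5, 0.2, 0.3, 0.0],
          [0.6, 0.1, 0.1, 0.2]]

assert all(abs(p[0] + p[1] - 0.7) < 1e-9 for p in [maxent] + rivals)
for p in rivals:
    assert entropy(maxent) > entropy(p)
print(round(entropy(maxent), 4))  # 1.304
```

This only checks the maxent function against a few hand-picked rivals; in general the maximum entropy function subject to linear constraints is found by optimisation, but the verdict here matches the analytic solution.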
The mechanistic and causal accounts of explanation are often conflated to yield a ‘causal-mechanical’ account. This paper prizes them apart and asks: if the mechanistic account is correct, how can causal explanations be explanatory? The answer to this question varies according to how causality itself is understood. It is argued that difference-making, mechanistic, dualist and inferentialist accounts of causality all struggle to yield explanatory causal explanations, but that an epistemic account of causality is more promising in this regard.
Kyburg goes half-way towards objective Bayesianism. He accepts that frequencies constrain rational belief to an interval but stops short of isolating an optimal degree of belief within this interval. I examine the case for going the whole hog.
The teratogenicity of the Zika virus was considered established in 2016, and is an interesting case because three different sets of causal criteria were used to assess teratogenicity. This paper appeals to the thesis of Russo and Williamson (2007) to devise an epistemological framework that can be used to compare and evaluate sets of causal criteria. The framework can also be used to decide when enough criteria are satisfied to establish causality. Arguably, the three sets of causal criteria considered here offer only a rudimentary assessment of mechanistic studies, and some suggestions are made as to alternative ways to establish causality.
This chapter provides an overview of a range of probabilistic theories of causality, including those of Reichenbach, Good and Suppes, and the contemporary causal net approach. It discusses two key problems for probabilistic accounts: counterexamples to these theories and their failure to account for the relationship between causality and mechanisms. It is argued that to overcome the problems, an epistemic theory of causality is required.
This paper presents a new argument for the Principle of Indifference. This argument can be thought of in two ways: as a pragmatic argument, justifying the principle as needing to hold if one is to minimise worst-case expected loss, or as an epistemic argument, justifying the principle as needing to hold in order to minimise worst-case expected inaccuracy. The question arises as to which interpretation is preferable. I show that the epistemic argument contradicts Evidentialism and suggest that the relative plausibility of Evidentialism provides grounds to prefer the pragmatic interpretation. If this is right, it extends to a general preference for pragmatic arguments for the Principle of Indifference, and also to a general preference for pragmatic arguments for other norms of Bayesian epistemology.
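The pragmatic reading described in this abstract can be checked numerically in a toy case. Over three possible outcomes, the worst-case expected log loss of a belief function q is max_i (-log q_i), since an adversarial chance function concentrates all its mass on the outcome to which q assigns least probability; the indifferent (uniform) belief function minimises this quantity. The rival belief functions below are hypothetical examples:

```python
from math import log

def worst_case_log_loss(q):
    """Worst-case expected log loss over all chance functions p:
    max_p sum_i p_i * (-log q_i) = max_i (-log q_i)."""
    return max(-log(qi) for qi in q)

uniform = [1/3, 1/3, 1/3]   # the indifferent belief function
rivals = [[0.5, 0.3, 0.2], [0.4, 0.4, 0.2], [0.6, 0.2, 0.2]]

# Any departure from uniformity lowers some q_i below 1/3 and so
# raises the worst case above log 3.
for q in rivals:
    assert worst_case_log_loss(uniform) < worst_case_log_loss(q)
print(round(worst_case_log_loss(uniform), 4))  # log 3 ≈ 1.0986
```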
Part I of this paper introduces a range of mechanistic theories of causality, including process theories and the complex-systems theories, and some of the problems they face. Part II argues that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient, and describes one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.
The orthodox view in statistics has it that frequentism and Bayesianism are diametrically opposed—two totally incompatible takes on the problem of statistical inference. This paper argues to the contrary that the two approaches are complementary and need to mesh if probabilistic reasoning is to be carried out correctly.
According to Russo and Williamson (Int Stud Philos Sci 21(2):157–170, 2007, Hist Philos Life Sci 33:389–396, 2011a, Philos Sci 1(1):47–69, 2011b), in order to establish a causal claim of the form, ‘_C_ is a cause of _E_’, one typically needs evidence that there is an underlying mechanism between _C_ and _E_ as well as evidence that _C_ makes a difference to _E_. This thesis has been used to argue that hierarchies of evidence, as championed by evidence-based movements, tend to give primacy to evidence of difference making over evidence of mechanisms and are flawed because the two sorts of evidence are required and they should be treated on a par. An alternative approach gives primacy to evidence of mechanism over evidence of difference making. In this paper, we argue that this alternative approach is equally flawed, again because both sorts of evidence need to be treated on a par. As an illustration of this parity, we explain how scientists working in the ‘EnviroGenomarkers’ project constantly make use of the two evidential components in a dynamic and intertwined way. We argue that such an interplay is needed not only for causal assessment but also for policy purposes.
The maximum entropy principle is widely used to determine non-committal probabilities on a finite domain, subject to a set of constraints, but its application to continuous domains is notoriously problematic. This paper concerns an intermediate case, where the domain is a first-order predicate language. Two strategies have been put forward for applying the maximum entropy principle on such a domain: applying it to finite sublanguages and taking the pointwise limit of the resulting probabilities as the size n of the sublanguage increases; and selecting a probability function on the language as a whole whose entropy on finite sublanguages of size n is not dominated by that of any other probability function for sufficiently large n. The entropy-limit conjecture says that, where these two approaches yield determinate probabilities, the two methods yield the same probabilities. If this conjecture is found to be true, it would provide a boost to the project of seeking a single canonical inductive logic—a project which faltered when Carnap's attempts in this direction succeeded only in determining a continuum of inductive methods. The truth of the conjecture would also boost the project of providing a canonical characterisation of normal or default models of first-order theories. Hitherto, the entropy-limit conjecture has been verified for languages which contain only unary predicate symbols and also for the case in which the constraints can be captured by a categorical statement of Σ₁ quantifier complexity. This paper shows that the entropy-limit conjecture also holds for categorical statements of Π₁ complexity, for various non-categorical constraints, and in certain other general situations.
In this chapter we draw connections between two seemingly opposing approaches to probability and statistics: evidential probability on the one hand and objective Bayesian epistemology on the other.
I put forward several desiderata that a philosophical theory of causality should satisfy: it should account for the objectivity of causality, it should underpin formalisms for causal reasoning, it should admit a viable epistemology, it should be able to cope with the great variety of causal claims that are made, and it should be ontologically parsimonious. I argue that Nancy Cartwright’s dispositional account of causality goes part way towards meeting these criteria but is lacking in important respects. I go on to argue that my epistemic account, which ties causal relationships to an agent’s knowledge and ignorance, performs well in the light of the desiderata. Such an account, I claim, is all we require from a theory of causality.
In this chapter we explore the process of extrapolating causal claims from model organisms to humans in pharmacology. We describe and compare four strategies of extrapolation: enumerative induction, comparative process tracing, phylogenetic reasoning, and robustness reasoning. We argue that evidence of mechanisms plays a crucial role in several strategies for extrapolation and in the underlying logic of extrapolation: the more directly a strategy establishes mechanistic similarities between a model and humans, the more reliable the extrapolation. We present case studies from research on atherosclerosis and the development of statins that illustrate these strategies and the role of mechanistic evidence in extrapolation.
When a proposition is established, it can be taken as evidence for other propositions. Can the Bayesian theory of rational belief and action provide an account of establishing? I argue that it can, but only if the Bayesian is willing to endorse objective constraints on both probabilities and utilities, and willing to deny that it is rationally permissible to defer wholesale to expert opinion. I develop a new account of deference that accommodates this latter requirement.
After introducing a range of mechanistic theories of causality and some of the problems they face, I argue that while there is a decisive case against a purely mechanistic analysis, a viable theory of causality must incorporate mechanisms as an ingredient. I describe one way of providing an analysis of causality which reaps the rewards of the mechanistic approach without succumbing to its pitfalls.
Objective Bayesianism says that the strengths of one’s beliefs ought to be probabilities, calibrated to physical probabilities insofar as one has evidence of them, and otherwise sufficiently equivocal. These norms of belief are often explicated using the maximum entropy principle. In this paper we investigate the extent to which one can provide a unified justification of the objective Bayesian norms in the case in which the background language is a first-order predicate language, with a view to applying the resulting formalism to inductive logic. We show that the maximum entropy principle can be motivated largely in terms of minimising worst-case expected loss.
Practical reasoning requires decision-making in the face of uncertainty. Xenelda has just left to go to work when she hears a burglar alarm. She doesn’t know whether it is hers, but remembers that she left a window slightly open. Should she be worried? Her house may not be being burgled, since the wind or a power cut may have set the burglar alarm off; and even if it isn’t her alarm sounding, she might conceivably be being burgled. Thus Xenelda cannot be certain that her house is being burgled, and the decision that she takes must be based on her degree of certainty, together with the possible outcomes of that decision.
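Xenelda's predicament can be sketched as a toy Bayesian calculation. All the probabilities below are hypothetical, chosen purely to illustrate how a degree of certainty in a burglary could be computed from a prior and the alarm evidence:

```python
# Toy Bayesian update for the burglar-alarm scenario.
# All numbers are hypothetical, for illustration only.
p_burglary = 0.001          # prior degree of belief in a burglary
p_alarm_if_burglary = 0.95  # the alarm is reliable when there is a burglar
p_alarm_if_not = 0.01       # wind or a power cut can also set it off

# Bayes' theorem: P(B | alarm) = P(alarm | B) * P(B) / P(alarm)
p_alarm = (p_alarm_if_burglary * p_burglary
           + p_alarm_if_not * (1 - p_burglary))
posterior = p_alarm_if_burglary * p_burglary / p_alarm
print(round(posterior, 3))  # 0.087
```

Even a reliable alarm leaves the posterior degree of belief modest here, because burglaries are rare relative to false alarms; Xenelda's decision must weigh this degree of certainty against the possible outcomes.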
This chapter presents an overview of the major interpretations of probability followed by an outline of the objective Bayesian interpretation and a discussion of the key challenges it faces. I discuss the ramifications of interpretations of probability and objective Bayesianism for the philosophy of mathematics in general.