Two of the most influential theories about scientific inference are inference to the best explanation (IBE) and Bayesianism. How are they related? Bas van Fraassen has claimed that IBE and Bayesianism are incompatible rival theories, as any probabilistic version of IBE would violate Bayesian conditionalization. In response, several authors have defended the view that IBE is compatible with Bayesian updating. They claim that the explanatory considerations in IBE are taken into account by the Bayesian because the Bayesian either does or should make use of them in assigning probabilities to hypotheses. I argue that van Fraassen has not succeeded in establishing that IBE and Bayesianism are incompatible, but that the existing compatibilist response is also not satisfactory. I suggest that a more promising approach to the problem is to investigate whether explanatory considerations are taken into account by a Bayesian who assigns priors and likelihoods on his or her own terms. In this case, IBE would emerge from the Bayesian account, rather than being used to constrain priors and likelihoods. I provide a detailed discussion of the case of how the Copernican and Ptolemaic theories explain retrograde motion, and suggest that one of the key explanatory considerations is the extent to which the explanation a theory provides depends on its core elements rather than on auxiliary hypotheses. I then suggest that this type of consideration is reflected in the Bayesian likelihood, given priors that a Bayesian might be inclined to adopt even without explicit guidance by IBE. The aim is to show that IBE and Bayesianism may be compatible, not because they can be amalgamated, but rather because they capture substantially similar epistemic considerations.

1 Introduction
2 Preliminaries
3 Inference to the Best Explanation
4 Bayesianism
5 The Incompatibilist View: Inference to the Best Explanation Contradicts Bayesianism
  5.1 Criticism of the incompatibilist view
6 Constraint-Based Compatibilism
  6.1 Criticism of constraint-based compatibilism
7 Emergent Compatibilism
  7.1 Analysis of inference to the best explanation
    7.1.1 Inference to the best explanation on specific hypotheses
    7.1.2 Inference to the best explanation on general theories
    7.1.3 Copernicus versus Ptolemy
    7.1.4 Explanatory virtues
    7.1.5 Summary
  7.2 Bayesian account
8 Conclusion
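Van Fraassen's worry about probabilistic versions of IBE can be made concrete with a toy sketch (the numbers and the "bonus" rule here are illustrative assumptions, not taken from the paper): if the best explainer receives an extra probability bonus on top of conditionalization, the resulting update is no longer Bayesian conditionalization.

```python
def conditionalize(priors, likelihoods):
    """Standard Bayesian update: P(H|E) is proportional to P(E|H) * P(H)."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def bonus_update(priors, likelihoods, best, bonus=0.1):
    """IBE-as-bonus (hypothetical rule): shift extra probability to the
    hypothesis judged the best explainer, then renormalize.  The result
    differs from conditionalization."""
    post = conditionalize(priors, likelihoods)
    post = {h: p + (bonus if h == best else 0.0) for h, p in post.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

priors = {"H1": 0.5, "H2": 0.5}
likelihoods = {"H1": 0.8, "H2": 0.4}   # P(E|H), made-up values
bayes = conditionalize(priors, likelihoods)
ibe = bonus_update(priors, likelihoods, best="H1")
```

Here the bonus update assigns H1 more credence than conditionalization licenses, which is roughly the kind of deviation van Fraassen's argument targets.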
Psychological studies show that the beliefs of two agents in a hypothesis can diverge even if both agents receive the same evidence. This phenomenon of belief polarisation is often explained by invoking biased assimilation of evidence, where the agents’ prior views about the hypothesis affect the way they process the evidence. We suggest, using a Bayesian model, that even if such influence is excluded, belief polarisation can still arise by another mechanism. This alternative mechanism involves differential weighting of the evidence arising when agents have different initial views about the reliability of their sources of evidence. We provide a systematic exploration of the conditions for belief polarisation in Bayesian models which incorporate opinions about source reliability, and we discuss some implications of our findings for the psychological literature.
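The mechanism described in this abstract can be sketched in miniature (a hypothetical two-agent model with made-up numbers): two agents share the same prior in the hypothesis and receive the same report, but differ in how reliable they initially take the source to be, and their posteriors move in opposite directions.

```python
def update(p_h, p_rel, rel_acc=0.9, unrel_acc=0.3, report=True):
    """Joint Bayesian update over (hypothesis, source reliability) after
    the source reports that the hypothesis is true.  A reliable source
    reports the truth with probability rel_acc, an unreliable one with
    probability unrel_acc (all values are illustrative assumptions)."""
    def p_yes(h, rel):
        acc = rel_acc if rel else unrel_acc
        return acc if h else 1 - acc
    joint = {}
    for h in (True, False):
        for rel in (True, False):
            prior = (p_h if h else 1 - p_h) * (p_rel if rel else 1 - p_rel)
            like = p_yes(h, rel) if report else 1 - p_yes(h, rel)
            joint[(h, rel)] = prior * like
    z = sum(joint.values())
    # marginal posterior in the hypothesis
    return sum(v for (h, _), v in joint.items() if h) / z

# Same prior in the hypothesis, same evidence, different trust in the source:
credulous = update(p_h=0.5, p_rel=0.9)   # trusts the source
sceptical = update(p_h=0.5, p_rel=0.1)   # distrusts the source
```

The credulous agent ends above 0.5 and the sceptical agent below it, so the same report polarises the pair without any biased assimilation.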
There has been considerable puzzlement over how to respond to higher-order evidence. The existing dilemmas can be defused by adopting a ‘two-dimensional’ representation of doxastic attitudes which incorporates not only substantive uncertainty about which first-order state of affairs obtains but also the degree of conviction with which we hold the attitude. This makes it possible that in cases of higher-order evidence the evidence sometimes impacts primarily on our conviction, rather than our substantive uncertainty. I argue that such a two-dimensional representation is naturally developed by making use of imprecise probabilities.
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher‐level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher‐level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories. *Received July 2009; revised October 2009. †To contact the authors, please write to: Leah Henderson, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 32D‐808, Cambridge, MA 02139; e‐mail: [email protected].
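A minimal sketch of the two-level idea (a toy example with coin-flip hypotheses standing in for scientific theories, not a model from the paper): evidence bears directly on specific hypotheses, and its marginal likelihood under each higher-level "paradigm" is what drives change at the higher level.

```python
def hierarchical_update(paradigms, data):
    """paradigms: {name: (prior, {bias: weight})}, where each paradigm
    assigns a prior distribution over specific coin-bias hypotheses.
    data: list of 0/1 coin flips.  Returns the posterior over paradigms."""
    heads = sum(data)
    tails = len(data) - heads
    post = {}
    for name, (prior, hyps) in paradigms.items():
        # marginal likelihood of the data under this paradigm:
        # average the hypothesis likelihoods by the paradigm's weights
        ml = sum(w * b ** heads * (1 - b) ** tails for b, w in hyps.items())
        post[name] = prior * ml
    z = sum(post.values())
    return {name: p / z for name, p in post.items()}

paradigms = {
    "fair":   (0.5, {0.5: 1.0}),            # coins are fair
    "biased": (0.5, {0.9: 0.5, 0.1: 0.5}),  # coins are strongly biased
}
# a run of heads shifts credence toward the higher-level "biased" paradigm
post = hierarchical_update(paradigms, data=[1, 1, 1, 1, 1])
```

Low-level evidence (the flips) never mentions paradigms directly, yet the posterior over paradigms shifts, which is the pattern of top-level theory change the abstract describes.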
The no miracles argument is one of the main arguments for scientific realism. Recently it has been alleged that the no miracles argument is fundamentally flawed because it commits the base rate fallacy. The allegation is based on the idea that the appeal of the no miracles argument arises from inappropriate neglect of the base rate of approximate truth among the relevant population of theories. However, the base rate fallacy allegation relies on an assumption of random sampling of individuals from the population which cannot be made in the case of the no miracles argument. Therefore the base rate fallacy objection to the no miracles argument fails. I distinguish between a “local” and a “global” form of the no miracles argument. The base rate fallacy objection has been leveled at the local version. I argue that the global argument plays a key role in supporting a base-rate-fallacy-free formulation of the local version of the argument.
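The base-rate point at issue can be sketched numerically (all figures hypothetical): even if false theories rarely enjoy predictive success, the posterior probability of approximate truth given success is low when approximately true theories are rare in the relevant population. This is the calculation the objection accuses the no miracles argument of neglecting, and it presupposes that a base rate over a population of theories is well defined.

```python
def p_true_given_success(base_rate, p_succ_true=0.9, p_succ_false=0.05):
    """Bayes' theorem with illustrative numbers: probability that a theory
    is approximately true given its predictive success, as a function of
    the base rate of approximate truth in the population."""
    num = base_rate * p_succ_true
    return num / (num + (1 - base_rate) * p_succ_false)

low = p_true_given_success(base_rate=0.01)   # truth rare: low posterior
high = p_true_given_success(base_rate=0.5)   # truth common: high posterior
```

The same success evidence supports very different conclusions depending on the assumed base rate, which is why the viability of the objection turns on whether base rates and random sampling apply here at all.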
Jeff Bub has developed an information-theoretic interpretation of quantum mechanics on the basis of the programme to reaxiomatise the theory in terms of information-theoretic principles. According to the most recent version of the interpretation, reaxiomatisation can dissolve some of the demands for explanation traditionally associated with the task of providing an interpretation for the theory. The key idea is that the real lesson we should take away from quantum mechanics is that the ‘structure of information’ is not what we thought it was. In particular a feature of the new structure is intrinsic randomness of measurement, which allegedly dissolves a significant part of the measurement problem. I argue that it is difficult to find an appropriate argument to support the claim that measurement is intrinsically random in the relevant sense.
It is commonly thought that there is some tension between the second law of thermodynamics and the time reversal invariance of the microdynamics. Recently, however, Jos Uffink has argued that the origin of time reversal non-invariance in thermodynamics is not in the second law. Uffink argues that the relationship between the second law and time reversal invariance depends on the formulation of the second law. He claims that a recent version of the second law due to Lieb and Yngvason allows irreversible processes, yet is time reversal invariant. In this paper, I attempt to spell out the traditional argument for incompatibility between the second law and time reversal invariant dynamics, making the assumptions on which it depends explicit. I argue that this argument does not vary with different versions of the second law and can be formulated for Lieb and Yngvason's version just as for other versions. Uffink's argument regarding time reversal invariance in Lieb and Yngvason is based on a certain symmetry of some of their axioms. However, these axioms do not constitute the full expression of the second law in their system.
Shenker has claimed that Von Neumann's argument for identifying the quantum mechanical entropy with the Von Neumann entropy, S(ρ) = −k tr(ρ log ρ), is invalid. Her claim rests on a misunderstanding of the idea of a quantum mechanical pure state. I demonstrate this, and provide a further explanation of Von Neumann's argument.
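For readers unfamiliar with the formula, a small sketch (with k set to 1) of what S(ρ) = −k tr(ρ log ρ) computes when ρ is given by its eigenvalues; it illustrates the special status of pure states, which have exactly one nonzero eigenvalue and therefore zero entropy.

```python
import math

def von_neumann_entropy(eigenvalues, k=1.0):
    """S(rho) = -k tr(rho log rho), evaluated on the eigenvalues of the
    density matrix rho; terms with eigenvalue 0 contribute nothing."""
    return -k * sum(p * math.log(p) for p in eigenvalues if p > 0)

pure = von_neumann_entropy([1.0, 0.0])    # pure state: zero entropy
mixed = von_neumann_entropy([0.5, 0.5])   # maximally mixed qubit: log 2
```

The contrast between the pure and maximally mixed cases is what makes the notion of a pure state central to arguments about the quantum mechanical entropy.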
Gerhard Schurz claims to have a solution to Hume’s problem of induction based on results from machine learning concerning meta-induction. His argument has two steps. The first is to establish a justification for following a certain meta-inductive strategy based on its predictive optimality. The second step is to show how this justification can be transferred to object-induction. I unpack the second step and fail to find a convincing argument supporting the transfer of justification from meta-induction to object-induction. My conclusion is that the problem of induction has not yet been solved by appeal to meta-induction.
Robert Fogelin has argued that in deep disagreements, resolution cannot be achieved by rational argumentation. In response, Richard Feldman has claimed that deep disagreements can be resolved in a similar way to more everyday disagreements. I argue that Feldman’s claim is based on a relatively superficial notion of “resolution” of a disagreement whereas the notion at stake in Fogelin’s argument is more substantive. Furthermore, I argue that Feldman’s reply is based on a particular reading of Fogelin’s argument. There is an alternative reading, which takes the central concern to be the role of common ground in argumentation. Engaging with this version of Fogelin’s argument is also a worthwhile endeavour.
Quantum computers use the quantum interference of different computational paths to enhance correct outcomes and suppress erroneous outcomes of computations. In effect, they follow the same logical paradigm as (multi-particle) interferometers. We show how most known quantum algorithms for factorising and counting may be cast in this manner. Quantum searching is described as inducing a desired relative phase between two eigenvectors to yield constructive interference on the sought elements and destructive interference on the remaining terms.
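The path-interference picture can be sketched with complex amplitudes (a toy calculation, not one of the algorithms discussed): amplitudes along different computational paths into the same outcome add before squaring, so a relative phase of 0 amplifies that outcome and a phase of π suppresses it.

```python
import cmath

def outcome_prob(amp1, amp2, relative_phase):
    """Probability of an outcome reached by two computational paths:
    the complex amplitudes add, then the modulus is squared."""
    return abs(amp1 + amp2 * cmath.exp(1j * relative_phase)) ** 2

a = 0.5  # each path carries amplitude 1/2 (e.g. two of four equal paths)
constructive = outcome_prob(a, a, 0.0)        # paths reinforce
destructive = outcome_prob(a, a, cmath.pi)    # paths cancel
```

Classical probabilities of 0.25 per path would simply add to 0.5 either way; the phase-dependent difference between reinforcement and cancellation is the resource the abstract describes.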