Traditional analyses of the curve fitting problem maintain that the data do not indicate what form the fitted curve should take. Rather, this issue is said to be settled by prior probabilities, by simplicity, or by a background theory. In this paper, we describe a result due to Akaike [1973], which shows how the data can underwrite an inference concerning the curve's form based on an estimate of how predictively accurate it will be. We argue that this approach throws light on the theoretical virtues of parsimoniousness, unification, and non ad hocness, on the dispute about Bayesianism, and on empiricism and scientific realism. * Both of us gratefully acknowledge support from the Graduate School at the University of Wisconsin-Madison, and NSF grant DIR-8822278 (M.F.) and NSF grant SBE-9212294 (E.S.). Special thanks go to A. W. F. Edwards, William Harper, Martin Leckey, Brian Skyrms, and especially Peter Turney for helpful comments on an earlier draft.
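A minimal sketch of the idea (my own illustration, not drawn from the paper): Akaike's criterion estimates a fitted curve's predictive accuracy by penalizing its maximized log-likelihood with the number of adjustable parameters, so the data themselves can favor one polynomial degree over another. The code below assumes i.i.d. Gaussian noise, and the helper name fit_and_score is invented for illustration.

import numpy as np

def fit_and_score(x, y, degree):
    # Fit a polynomial of the given degree and return its AIC score.
    # Under i.i.d. Gaussian errors, least-squares fitting is maximum
    # likelihood, and the maximized log-likelihood depends only on the
    # residual sum of squares (a standard textbook result).
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = len(x)
    sigma2 = np.sum(residuals ** 2) / n          # ML estimate of error variance
    log_likelihood = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    k = degree + 2                               # coefficients plus the variance parameter
    return -2 * log_likelihood + 2 * k           # lower AIC = higher estimated predictive accuracy

# Hypothetical usage: let the data, not a prior choice, indicate the curve's form.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x ** 2 - x + rng.normal(scale=0.05, size=x.size)
best_degree = min(range(1, 6), key=lambda d: fit_and_score(x, y, d))
print("degree favored by AIC:", best_degree)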
William Whewell’s philosophy of scientific discovery is applied to the problem of understanding the nature of unification and explanation by the composition of causes in Newtonian mechanics. The essay attempts to demonstrate: the sense in which ‘approximate’ laws successfully refer to real physical systems rather than to idealizations of them; why good theoretical constructs are not badly underdetermined by observation; and why, in particular, Newtonian forces are not conventional and how empiricist arguments against the existence of component causes, and against the veracity of the fundamental laws, are flawed.
The central problem with Bayesian philosophy of science is that it cannot take account of the relevance of simplicity and unification to confirmation, induction, and scientific inference. The standard Bayesian folklore about factoring simplicity into the priors, and about convergence theorems as a way of grounding their objectivity, is among the myths that Earman's book does not address adequately. 1 Review of John Earman, Bayes or Bust?, Cambridge, MA: MIT Press, 1992, £33.75 cloth.
The simple question, what is empirical success? turns out to have a surprisingly complicated answer. We need to distinguish between meritorious fit and 'fudged fit', which is akin to the distinction between prediction and accommodation. The final proposal is that empirical success emerges in a theory-dependent way from the agreement of independent measurements of theoretically postulated quantities. Implications for realism and Bayesianism are discussed. ‡This paper was written when I was a visiting fellow at the Center for Philosophy of Science at the University of Pittsburgh; I thank everyone for their support. †To contact the author, please write to: Department of Philosophy, University of Wisconsin–Madison, 5185 Helen C. White Hall, 600 North Park Street, Madison, WI 53706; e-mail: [email protected]
What has science actually achieved? A theory of achievement should define what has been achieved, describe the means or methods used in science, and explain how such methods lead to such achievements. Predictive accuracy is one truth-related achievement of science, and there is an explanation of why common scientific practices tend to increase predictive accuracy. Akaike's explanation for the success of AIC is limited to interpolative predictive accuracy. But therein lies the strength of the general framework, for it also provides a clear formulation of many open problems of research.
The likelihood theory of evidence (LTE) says, roughly, that all the information relevant to the bearing of data on hypotheses (or models) is contained in the likelihoods. There exist counterexamples in which one can tell which of two hypotheses is true from the full data, but not from the likelihoods alone. These examples suggest that some forms of scientific reasoning, such as the consilience of inductions (Whewell, Novum organon renovatum, Part II of the 3rd ed. of The philosophy of the inductive sciences, 1858; reprinted London: Cass, 1967), cannot be represented within Bayesian and Likelihoodist philosophies of science.
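For reference, the core commitment of LTE is usually expressed through the Law of Likelihood; the statement below is the standard textbook formulation (familiar from Hacking and Royall), not a quotation from this paper:

\[
E \text{ favors } H_1 \text{ over } H_2 \iff P(E \mid H_1) > P(E \mid H_2),
\qquad \text{with the strength of favoring measured by } \frac{P(E \mid H_1)}{P(E \mid H_2)}.
\]

The counterexamples at issue are cases where the full data settle which hypothesis is true even though these likelihood comparisons do not.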
The theory of fast and frugal heuristics, developed in a new book called Simple Heuristics That Make Us Smart (Gigerenzer, Todd, and the ABC Research Group, in press), includes two requirements for rational decision making. One is that decision rules are bounded in their rationality: rules are frugal in what they take into account, and therefore fast in their operation. The second is that the rules are ecologically adapted to the environment, which means that they 'fit to reality.' The main purpose of this article is to apply these ideas to learning rules (methods for constructing, selecting, or evaluating competing hypotheses in science) and to the methodology of machine learning, of which connectionist learning is a special case. The bad news is that ecological validity is particularly difficult to implement and difficult to understand. The good news is that it builds an important bridge from normative psychology and machine learning to recent work in the philosophy of science, which considers predictive accuracy to be a primary goal of science.
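As a concrete illustration of a fast and frugal rule, consider Gigerenzer and Todd's Take The Best heuristic, which decides between two options by checking cues in order of validity and stopping at the first cue that discriminates. The sketch below, including all names and the toy data, is my own hypothetical example, not code from the article.

def take_the_best(option_a, option_b, cues):
    # `cues` is a list of functions ordered by validity; each returns a
    # truthy or falsy cue value for an option. The first cue that
    # discriminates settles the choice: frugal, because later cues are
    # never consulted, and fast, because no weighting or integration of
    # evidence is performed.
    for cue in cues:
        a, b = cue(option_a), cue(option_b)
        if a and not b:
            return option_a
        if b and not a:
            return option_b
    return None  # no cue discriminates: guess, or fall back to another rule

# Hypothetical usage: which of two cities is larger?
cities = {
    "Munich": {"capital": False, "major_airport": True, "top_league_team": True},
    "Bonn":   {"capital": False, "major_airport": False, "top_league_team": False},
}
cues = [
    lambda c: cities[c]["capital"],          # most valid cue first
    lambda c: cities[c]["major_airport"],
    lambda c: cities[c]["top_league_team"],
]
print(take_the_best("Munich", "Bonn", cues))  # -> Munich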
Sober (1984) has considered the problem of determining the evidential support, in terms of likelihood, for a hypothesis that is incomplete in the sense of not providing a unique probability function over the event space in its domain. Causal hypotheses are typically like this because they do not specify the probability of their initial conditions. Sober's (1984) solution to this problem does not work, as will be shown by examining his own biological examples of common cause explanation. The proposed solution will lead to the conclusion, contra Sober, that common cause hypotheses explain statistical correlations and not matchings between event tokens.
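A one-line way to see why such hypotheses lack a unique likelihood (my gloss, using the law of total probability rather than Sober's notation): if a causal hypothesis H leaves the probabilities of its possible initial conditions I_i open, then

\[
P(E \mid H) \;=\; \sum_i P(E \mid H \wedge I_i)\, P(I_i \mid H)
\]

remains undefined until the weights \( P(I_i \mid H) \) are supplied from somewhere outside the hypothesis itself.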
Curve-fitting typically works by trading off goodness-of-fit with simplicity, where simplicity is measured by the number of adjustable parameters. However, such methods cannot be applied in an unrestricted way. I discuss one such exception, and explain why it arises. The same kind of probabilistic explanation offers a surprising resolution to a common-sense dilemma.
Classical mechanics is empirically successful because the probabilistic mean values of quantum mechanical observables follow the classical equations of motion to a good approximation (Messiah 1970, 215). We examine this claim for the one-dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force used to define "pressure." The examples illustrate the importance of probabilistic averaging as a method of abstracting away from the messy details of microphenomena, not only in physics, but in other sciences as well.
Van Fraassen has argued that quantum mechanics does not conform to the pattern of common cause explanation used by Salmon as a precise formulation of Smart's 'cosmic coincidence' argument for scientific realism. This paper adds to this list some common examples from classical physics that also do not conform to Salmon's explanatory schema. This is bad news and good news for the realist. The bad news is that Salmon's argument for realism does not work; the good news is that realism need not demand hidden variables in quantum mechanics if they are not used in classical mechanics. Many correlations in physics are explained in terms of property identity (contra Salmon). This leads to a new argument against van Fraassen because the unified version of the theory obtained by identifying theoretical properties is always less empirically adequate.
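For reference, Salmon's schema follows Reichenbach's principle of the common cause; in its standard formulation (stated here from the general literature, not quoted from the paper), a common cause C screens off the correlated events A and B from one another:

\[
P(A \wedge B \mid C) = P(A \mid C)\, P(B \mid C), \qquad
P(A \wedge B \mid \neg C) = P(A \mid \neg C)\, P(B \mid \neg C),
\]

together with \( P(A \mid C) > P(A \mid \neg C) \) and \( P(B \mid C) > P(B \mid \neg C) \); these conditions jointly entail the correlation \( P(A \wedge B) > P(A)\,P(B) \). The classical examples in the paper are cases where correlations are explained without any event satisfying this schema.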
The paper provides a formal proof that efficient estimates of parameters, which vary as little as possible when measurements are repeated, may be expected to provide more accurate predictions. The definition of predictive accuracy is motivated by the work of Akaike (1973). Surprisingly, the same explanation provides a novel solution for a well-known problem for standard theories of scientific confirmation — the Ravens Paradox. This is significant in light of the fact that standard Bayesian analyses of the paradox fail to account for the predictive utility of universal laws like 'All ravens are black.'
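The notion of predictive accuracy at issue is the one Akaike's theorem makes estimable. In the notation popularized by Forster and Sober, the estimate takes the following form (stated here from the standard literature, with the per-datum normalization an assumption of my paraphrase):

\[
\widehat{A}(M) \;=\; \frac{1}{N}\Big[\log P\big(\mathrm{data} \mid \hat{\theta}(M)\big) - k\Big],
\]

where \( \hat{\theta}(M) \) is the maximum likelihood estimate of model M's parameters, k is the number of adjustable parameters, and N is the number of data. Up to a factor of \(-2N\), this is the familiar score \( \mathrm{AIC}(M) = -2 \log P(\mathrm{data} \mid \hat{\theta}(M)) + 2k \).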
Skyrms's formulation of the argument against stochastic hidden variables in quantum mechanics using conditionals with chance consequences suffers from an ambiguity in its "conservation" assumption. The strong version, which Skyrms needs, packs in a "no-rapport" assumption in addition to the weaker statement of the "experimental facts." On the positive side, I argue that Skyrms's proof has two unnoted virtues (not shared by previous proofs): (1) it shows that certain difficulties that arise for deterministic hidden variable theories that exploit a nonclassical probability theory extend to the stochastic case; (2) the use of counterfactual conditionals relates the Bell puzzle to Dummett's (1976) discussion of realism in quantum mechanics.
This paper aims to show how Whewell's notions of consilience and unification, explicated in more modern probabilistic terms, provide a satisfying treatment of cases of scientific discovery which require the postulation of component causes to explain complex events. The results of this analysis support the received view that the increased unification and generality of theories leads to greater testability, and confirmation if the observations are favorable. This solves a puzzle raised by Cartwright in How the Laws of Physics Lie about the nature of explanation by the composition of causes.
Textbooks in quantum mechanics frequently claim that quantum mechanics explains the success of classical mechanics because “the mean values [of quantum mechanical observables] follow the classical equations of motion to a good approximation,” provided that “the dimensions of the wave packet be small with respect to the characteristic dimensions of the problem.” The equations in question are Ehrenfest’s famous equations. We examine this case for the one-dimensional motion of a particle in a box, and extend the idea by deriving a special case of the ideal gas law in terms of the mean value of a generalized force, which has been used in statistical mechanics to define ‘pressure’. The example may be an important test case for recent philosophical theories about the relationship between micro-theories and macro-theories in science.
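Ehrenfest's equations, stated here in their standard form for reference rather than quoted from the paper, are

\[
\frac{d}{dt}\langle x \rangle = \frac{\langle p \rangle}{m},
\qquad
\frac{d}{dt}\langle p \rangle = -\left\langle \frac{\partial V}{\partial x} \right\rangle .
\]

The mean values obey the classical equations of motion to the extent that \( \langle \partial V/\partial x \rangle \approx V'(\langle x \rangle) \), which is what the requirement that the wave packet be small relative to the characteristic dimensions of the problem is meant to secure.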