Isaac Newton's Scientific Method examines Newton's argument for universal gravity and his application of it to resolve the problem of deciding between geocentric and heliocentric world systems by measuring the masses of the sun and planets. William L. Harper suggests that Newton's inferences from phenomena realize an ideal of empirical success that is richer than prediction. Any theory that achieves this richer sort of empirical success must not only predict the phenomena it purports to explain, but also have those phenomena accurately measure the parameters which explain them. Harper explores the ways in which Newton's method aims to turn theoretical questions into ones that can be answered empirically by measurement from phenomena, and to establish propositions inferred from phenomena as provisionally accepted guides to further research. This methodology, guided by its rich ideal of empirical success, supports a conception of scientific progress that does not require construing it as progress toward Laplace's ideal limit of a final theory of everything, and is not threatened by the classic argument against convergent realism. Newton's method endorses the radical theoretical transformation from his theory to Einstein's. Harper argues that it is strikingly realized in the development and application of testing frameworks for relativistic theories of gravity, and very much at work in cosmology today.
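To see how orbital phenomena can measure masses (a standard result stated here for orientation, not a passage from the book): under Newtonian gravity, a satellite with period T and mean distance a from a central body of mass M satisfies the harmonic law in the form

\[
GM = \frac{4\pi^2 a^3}{T^2},
\]

so the observed periods and distances of a planet's moons measure that planet's mass, while the planetary orbits measure the sun's mass; comparing the two yields the sun-to-planet mass ratios that decide between the world systems.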
This is a very important book. It has already become required reading for researchers on the relation between the exact sciences and Kant’s philosophy. The main theme is that Kant’s continuing program to find a metaphysics that could provide a foundation for the science of his day is of crucial importance to understanding the development of his philosophical thought, from its earliest precritical beginnings in the thesis of 1747, through the high-water years of the critical philosophy, to his last unpublished writings in the Opus postumum. In the course of articulating this theme, Friedman has made extensive use of detailed historical information about the scientific and mathematical background of Kant’s texts to illuminate them. Over and over again, such information is used to suggest interesting and quite subtle interpretations of texts that may have seemed puzzling or just wrong-headed.
This paper uses Popper's treatment of probability, together with an epistemic constraint on probability assignments to conditionals, to extend the Bayesian representation of rational belief so as to allow for the revision of previously accepted evidence. Results of this extension include an epistemic semantics for Lewis's theory of counterfactual conditionals and a representation for one kind of conceptual change.
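For orientation (one standard axiomatization, not necessarily the paper's own): Popper functions take two-place conditional probability as primitive, governed by axioms such as

\[
P(A \mid A) = 1, \qquad
P(B \wedge C \mid A) = P(B \mid A)\,P(C \mid A \wedge B), \qquad
P(\neg B \mid A) = 1 - P(B \mid A),
\]

the last holding unless A is abnormal, i.e. unless P(C | A) = 1 for every C. Because P(· | A) is well defined even when A has unconditional probability zero, conditioning on the retraction of previously accepted evidence remains meaningful, which is what an extension of the Bayesian representation along these lines exploits.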
The Akaike Information Criterion can be a valuable tool of scientific inference. This statistic, or any other statistical method for that matter, cannot, however, be the whole of scientific methodology. In this paper some of the limitations of Akaikean statistical methods are discussed. It is argued that the full import of empirical evidence is realized only by adopting a richer ideal of empirical success than predictive accuracy, and that the ability of a theory to turn phenomena into accurate, agreeing measurements of causally relevant parameters contributes to the evidential support of the theory. This is illustrated by Newton's argument from orbital phenomena to the inverse-square law of gravitation.
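For reference, the criterion under discussion has the standard textbook form (not a formula quoted from the paper): for a model with k adjustable parameters and maximized likelihood \(\hat{L}\),

\[
\mathrm{AIC} = 2k - 2\ln \hat{L},
\]

with lower scores estimating better expected predictive accuracy. The contention here is that minimizing such a score leaves out the further demand that diverse phenomena agree in measuring the theory's causal parameters.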
I take Newton's arguments to inverse-square centripetal forces from Kepler's harmonic and areal laws to be classic deductions from phenomena. I argue that the theorems backing up these inferences establish systematic dependencies that make the phenomena carry the objective information that the propositions inferred from them hold. A review of the data supporting Kepler's laws indicates that these phenomena are Whewellian colligations: generalizations corresponding to the selection of a best-fitting curve for an open-ended body of data. I argue that the information-theoretic features of Newton's corrections of the Keplerian phenomena, introduced to account for perturbations due to universal gravitation, show that these corrections do not undercut the inferences from the Keplerian phenomena. Finally, I suggest that all of Newton's impressive applications of universal gravitation to account for motion phenomena show an attempt to deliver explanations that share these salient features of his classic deductions from phenomena.
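The systematic dependencies at issue can be sketched in a simplified circular-orbit version (an illustration, not Newton's general theorem): if orbital periods scale with distance as \(T \propto r^{s}\), the centripetal acceleration

\[
a_c = \frac{4\pi^2 r}{T^2} \propto r^{\,1-2s}
\]

varies as the inverse square of distance exactly when s = 3/2, the harmonic-law value; any deviation of the measured exponent s from 3/2 would measure a corresponding deviation of the force law from the inverse square. This is the sense in which the phenomenon carries the objective information that the inferred proposition holds.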
Newton's methodology is significantly richer than the hypothetico-deductive model. It is informed by a richer ideal of empirical success that requires not just accurate prediction but also accurate measurement of parameters by the predicted phenomena. It accepts theory-mediated measurements and theoretical propositions as guides to research. All of these enrichments are exemplified in the classical response to Mercury's perihelion problem. Contrary to Kuhn, Newton's method endorses the radical transition from his theory to Einstein's. The richer themes of Newton's method are strikingly realized in a challenge to general relativity from a new problem posed by Mercury's perihelion.
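For context, the classical problem and its relativistic resolution can be quantified with standard figures (not drawn from the paper itself): general relativity predicts an extra perihelion advance per orbit of

\[
\Delta\phi = \frac{6\pi G M_\odot}{a(1-e^2)c^2},
\]

where a and e are the orbit's semi-major axis and eccentricity. For Mercury this accumulates to roughly 43 arcseconds per century, matching the residue that Newtonian perturbation theory famously could not account for.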
Recent advances in philosophy, artificial intelligence, mathematical psychology, and the decision sciences have brought a renewed focus to the role and interpretation of probability in theories of uncertain reasoning. Henry E. Kyburg, Jr. has long resisted the now dominant Bayesian approach to the role of probability in scientific inference and practical decision. The sharp contrasts between the Bayesian approach and Kyburg's program offer a uniquely powerful framework within which to study several issues at the heart of scientific inference, decision, and reasoning under uncertainty. The commissioned essays for this volume take measure of the scope and impact of Kyburg's views on probability and scientific inference, and include several new and important contributions to the field. Contributors: Gert de Cooman, Clark Glymour, William Harper, Isaac Levi, Ron Loui, Enrique Miranda, John Pollock, Teddy Seidenfeld, Choh Man Teng, Mariam Thalos, Gregory Wheeler, Jon Williamson, and Henry E. Kyburg, Jr.
In this paper, I consider the thesis advanced by Lawrence J. Schneiderman and Nancy S. Jecker that physicians should be forbidden from offering futile treatments to patients. I distinguish between a version of this thesis that is trivially true and Schneiderman and Jecker's more substantive version of the thesis. I find that their positive arguments for their thesis are unsuccessful, and sometimes quite misleading. I advance an argument against their thesis, and find that, on balance, their thesis should be rejected. I briefly argue that a resolution of the debate about medical futility will require addressing deeper issues about value.
This paper explores how the Bayesian program benefits from allowing for objective chance as well as subjective degree of belief. It applies David Lewis’s Principal Principle and David Christensen’s principle of informed preference to defend Howard Raiffa’s appeal to preferences between reference lotteries and scaling lotteries to represent degrees of belief. It goes on to outline the role of objective lotteries in an application of rationality axioms equivalent to the existence of a utility assignment to represent preferences in Savage’s famous omelet example of a rational choice problem. An example motivating causal decision theory illustrates the need for representing subjunctive dependencies to do justice to intuitive examples where epistemic and causal independence come apart. We argue for extending Lewis’s account of chance as a guide to epistemic probability to incorporate de Finetti’s convergence results. We explore diachronic Dutch book arguments as illustrating the commitments involved in treating transitions as learning experiences. Finally, we explore the implications of martingale convergence results for motivating commitment to objective chances.
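Lewis’s principle can be stated compactly (a standard formulation in our notation, not a quotation from the paper): for any reasonable initial credence function Cr, proposition A, time t, value x, and evidence E admissible at t,

\[
\mathrm{Cr}\big(A \mid \mathrm{ch}_t(A) = x \wedge E\big) = x,
\]

so that credence defers to known chance. It is this bridge between objective chance and subjective degree of belief that the paper puts to work.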
As van Fraassen pointed out in his opening remarks, Henry Kyburg's lottery paradox has long been known to raise difficulties in attempts to represent full belief as a probability greater than or equal to p, where p is some number less than 1. Recently, Patrick Maher has pointed out that to identify full belief with probability equal to 1 presents similar difficulties. In his paper, van Fraassen investigates ways of representing full belief by personal probability which avoid the difficulties raised by Maher's measure-theoretic version of the lottery paradox. Van Fraassen's more subtle representation dissolves the simple identification of full belief with maximal personal probability. His investigation exploits the richer resources for representing opinion provided by taking conditional, rather than unconditional, personal probability as fundamental. It has interesting implications for equivalent alternative approaches based on non-Archimedean probability, as well as for equivalent approaches in which assumption contexts representing full belief relative to suppositions are taken as fundamental.
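The structure of Kyburg's paradox is simple to state (a textbook rendition, not van Fraassen's formulation): in a fair lottery with n tickets and acceptance threshold p < 1, choose n large enough that \((n-1)/n \ge p\). Then for each ticket i,

\[
\Pr(\neg W_i) = \frac{n-1}{n} \ge p, \qquad
\Pr\Big(\bigwedge_{i=1}^{n} \neg W_i\Big) = 0,
\]

so each proposition "ticket i loses" individually clears the threshold for full belief while their conjunction is certainly false; acceptance so defined is not closed under conjunction.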
Consider your right hand and a mirror-image duplicate of it. Kant calls such pairs incongruent counterparts. According to him they have the following puzzling features. The relation and situation of the parts of your hand with respect to one another are not sufficient to distinguish it from its mirror duplicate. Nevertheless, there is a spatial difference between the two. Turn and twist them how you will, you cannot make one of them occupy the exact boundaries now occupied by the other. In his 1768 paper, ‘Concerning the Ultimate Foundations of the Differentiation of Regions in Space’, Kant uses these claims to argue against relational accounts of space, and goes on to argue that the difference between incongruent counterparts depends on a relation to absolute space as a whole. In his 1770 Inaugural Dissertation he argued that this difference could not be captured by concepts alone but required appeal to intuition. In the Prolegomena (1783) and again in the Metaphysical Foundations of Natural Science (1786) Kant appealed to these puzzling features of incongruent counterparts to support his transcendental idealism about space.
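In modern geometric terms (an illustration, not Kant's own formulation), a hand and its counterpart differ by a reflection, for instance

\[
R(x, y, z) = (-x, y, z), \qquad \det R = -1,
\]

whereas every rigid motion composed of rotations and translations preserves orientation (determinant +1). No amount of turning and twisting in three-dimensional space can therefore carry one counterpart onto the other, even though embedding the hands in four dimensions would render them congruent.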
Newton's methodology emphasized propositions "inferred from phenomena." These rest on systematic dependencies that make phenomena measure theoretical parameters. We consider the inferences supporting Newton's inductive argument that gravitation is proportional to inertial mass. We argue that the support provided by these systematic dependencies is much stronger than that provided by bootstrap confirmation; this kind of support thus avoids some of the major objections against bootstrapping. Finally we examine how contemporary testing of equivalence principles exemplifies this Newtonian methodological theme.
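Contemporary tests of this proportionality are commonly reported via the Eötvös ratio (standard notation, not the paper's): for two test bodies falling with accelerations a_1 and a_2 in the same gravitational field,

\[
\eta = 2\,\frac{|a_1 - a_2|}{|a_1 + a_2|},
\]

so a nonzero \(\eta\) would measure a violation of the proportionality of gravitational to inertial mass; torsion-balance and satellite experiments have driven the bound on \(\eta\) to roughly one part in \(10^{13}\) or better.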
We argue that causal decision theory (CDT) is no worse off than evidential decision theory (EDT) in handling entanglement, regardless of one’s preferred interpretation of quantum mechanics. In recent work, Ahmed (Evidence, Decision and Causality, 2014) and Ahmed and Caulton (Synthese 191: 4315–4352, 2014) have claimed the opposite; we argue that they are mistaken. Bell-type experiments are not instances of Newcomb problems, so CDT and EDT do not diverge in their recommendations. We highlight the fact that a causal decision theorist should take all lawlike correlations into account, including potentially acausal entanglement correlations. This paper also provides a brief introduction to CDT with a motivating “small” Newcomb problem. The main point of our argument is that quantum theory does not provide grounds for favouring EDT over CDT.
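The divergence at issue can be seen in the standard expected-value formulas (one common textbook formulation, not specific to this paper):

\[
V_{\mathrm{EDT}}(A) = \sum_{O} \Pr(O \mid A)\, u(O), \qquad
V_{\mathrm{CDT}}(A) = \sum_{K} \Pr(K)\, u(A \wedge K),
\]

where O ranges over outcomes and K over causal hypotheses about states outside the agent's influence. In a genuine Newcomb problem \(\Pr(O \mid A)\) tracks an evidential correlation that \(\Pr(K)\) ignores, so the two verdicts come apart; the claim here is that Bell-type correlations do not have this structure.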