This book is devoted to a different proposal: that the logical structure of the scientist's method should guarantee eventual arrival at the truth, given the scientist's background assumptions.
Ockham’s razor is the characteristic scientific penchant for simpler, more testable, and more unified theories. Glymour’s early work on confirmation theory eloquently stressed the rhetorical plausibility of Ockham’s razor in scientific arguments. His subsequent, seminal research on causal discovery still concerns methods with a strong bias toward simpler causal models, and it also comes with a story about reliability—the methods are guaranteed to converge to true causal structure in the limit. However, there is a familiar gap between convergent reliability and scientific rhetoric: convergence in the long run is compatible with any conclusion in the short run. For that reason, Carnap suggested that the proper sense of reliability for scientific inference should lie somewhere between short-run reliability and mere convergence in the limit. One natural such concept is straightest possible convergence to the truth, where straightness is explicated in terms of minimizing reversals of opinion and cycles of opinion prior to convergence. We close the gap between scientific rhetoric and scientific reliability by showing that Ockham’s razor is necessary for cycle-optimal convergence to the truth, and that patiently waiting for information to resolve conflicts among simplest hypotheses is necessary for reversal-optimal convergence to the truth.
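As a purely illustrative gloss on the "straightness" idea (the notation and exact definitions here are our own, not the paper's): let A_1, A_2, ... be the sequence of answers a convergent method outputs along a given evidence stream. One may then count

    \[
    \mathrm{Rev}(A) = \bigl|\{\, n : A_{n+1} \neq A_n \,\}\bigr|,
    \qquad
    \mathrm{Cyc}(A) = \bigl|\{\, n : A_{n+1} \neq A_n \ \text{and}\ A_{n+1} = A_m \ \text{for some}\ m < n \,\}\bigr|,
    \]

so that reversals are changes of answer and cycles are changes that return to a previously abandoned answer. On this rough reading, a method converges to the truth in an optimally straight way when its worst-case reversal and cycle counts cannot be improved by any other method that converges to the truth under the same background assumptions.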
Explaining the connection, if any, between simplicity and truth is among the deepest problems facing the philosophy of science, statistics, and machine learning. Say that an efficient truth-finding method minimizes worst-case costs en route to converging to the true answer to a theory choice problem. Let the costs considered include the number of times a false answer is selected, the number of times opinion is reversed, and the times at which the reversals occur. It is demonstrated that (1) always choosing the simplest theory compatible with experience, and (2) hanging onto it while it remains simplest, is both necessary and sufficient for efficiency.
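The flavor of the efficiency comparison can be conveyed with a small, self-contained simulation (a toy "count the effects" problem of our own devising, not the paper's formal framework): an Ockham learner conjectures exactly the effects seen so far, while a rival leaps ahead of the data and then falls back, incurring extra retractions.

    # A hedged toy illustration of retraction counting, not the paper's result.
    # The truth is the total number of "effects" that will ever appear; effects
    # show up one at a time, separated by stretches in which nothing new happens.

    def stream(k, gap=3):
        """Effects 1..k appear one at a time, separated by uninformative gaps."""
        for i in range(1, k + 1):
            yield i                      # a new effect appears
            for _ in range(gap):
                yield None               # nothing new for a while

    def run(learner, k):
        """Feed the stream to a learner; return its final conjecture and its
        number of retractions (changes of conjecture)."""
        seen, conjectures = 0, []
        for datum in stream(k):
            if datum is not None:
                seen = datum
            conjectures.append(learner(seen))
        retractions = sum(1 for a, b in zip(conjectures, conjectures[1:]) if a != b)
        return conjectures[-1], retractions

    def ockham(seen):
        return seen                      # simplest answer compatible with the data

    def make_impatient(patience=2):
        """A convergent but non-Ockham learner: right after a new effect it bets
        on one more, still-unseen effect, then falls back if nothing new appears."""
        state = {"seen": 0, "quiet": 0}
        def learner(seen):
            if seen != state["seen"]:
                state["seen"], state["quiet"] = seen, 0
            else:
                state["quiet"] += 1
            return seen + 1 if state["quiet"] < patience else seen
        return learner

    for k in (1, 2, 3):
        print(f"k={k}  ockham (answer, retractions) = {run(ockham, k)}"
              f"  impatient = {run(make_impatient(), k)}")
    # Both learners converge to the true count k, but the impatient learner
    # retracts roughly twice as often along the way.

In the spirit of the theorem stated above, nothing hangs on the particular rival chosen here: leaping ahead of the simplest answer, or dropping it while it remains simplest, is what opens a learner to avoidable retractions in some possible course of experience.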
Synchronic norms of theory choice, a traditional concern in scientific methodology, restrict the theories one can choose in light of given information. Diachronic norms of theory change, as studied in belief revision, restrict how one should change one’s current beliefs in light of new information. Learning norms concern how best to arrive at true beliefs. In this paper, we undertake to forge some rigorous logical relations between the three topics. Concerning learning norms, we explicate inductive truth conduciveness in terms of optimally direct convergence to the truth, where optimal directness is explicated in terms of reversals and cycles of opinion prior to convergence. Concerning norms of choice, we explicate Ockham’s razor and related principles of choice in terms of the information topology of the empirical problem context and show that the principles are necessary for reversal- or cycle-optimal convergence to the truth. Concerning norms of change, we weaken the standard principles of AGM belief revision theory in intuitive ways that are also necessary for reversal- or cycle-optimal convergence. Then we show that some of our weakened principles of change entail corresponding principles of choice, completing the triangle of relations between choice, change, and learning.
I propose that empirical procedures, like computational procedures, are justified in terms of truth-finding efficiency. I contrast the idea with more standard philosophies of science and illustrate it by deriving Ockham's razor from the aim of minimizing dramatic changes of opinion en route to the truth.
Simplicity has long been recognized as an apparent mark of truth in science, but it is difficult to explain why simplicity should be accorded such weight. This chapter examines some standard, statistical explanations of the role of simplicity in scientific method and argues that none of them explains, without circularity, how a reliance on simplicity could be conducive to finding true models or theories. The discussion then turns to a less familiar approach that does explain, in a sense, the elusive connection between simplicity and truth. The idea is that simplicity does not point at or reliably indicate the truth but, rather, keeps inquiry on the cognitively most direct path to the truth.
Belief revision theory concerns methods for reformulating an agent's epistemic state when the agent's beliefs are refuted by new information. The usual guiding principle in the design of such methods is to preserve as much of the agent's epistemic state as possible when the state is revised. Learning-theoretic research focuses, instead, on a learning method's reliability, or ability to converge to true, informative beliefs over a wide range of possible environments. This paper bridges the two perspectives by assessing the reliability of several proposed belief revision operators. Stringent conceptions of minimal change are shown to occasion a limitation called inductive amnesia: they can predict the future only if they cannot remember the past. Avoidance of inductive amnesia can therefore function as a plausible and hitherto unrecognized constraint on the design of belief revision operators.
This paper places formal learning theory in a broader philosophical context and provides a glimpse of what the philosophy of induction looks like from a learning-theoretic point of view. Formal learning theory is compared with other standard approaches to the philosophy of induction. Thereafter, we present some results and examples indicating its unique character and philosophical interest, with special attention to its unified perspective on inductive uncertainty and uncomputability.
One construal of convergent realism is that for each clear question, scientific inquiry eventually answers it. In this paper we adapt the techniques of formal learning theory to determine in a precise manner the circumstances under which this ideal is achievable. In particular, we define two criteria of convergence to the truth on the basis of evidence. The first, which we call EA convergence, demands that the theorist converge to the complete truth "all at once". The second, which we call AE convergence, demands only that for every sentence in the theorist's language, there is a time at which the theorist settles the status of the sentence. The relative difficulties of these criteria are compared for effective and ineffective agents. We then examine in detail how the enrichment of an agent's hypothesis language makes the task of converging to the truth more difficult. In particular, we parametrize first-order languages by predicate and function symbol arity, presence or absence of identity, and quantifier prefix complexity. For nearly every choice of values of these parameters, we determine the senses in which effective and ineffective agents can converge to the complete truth on an arbitrary structure for the language. Finally, we sketch directions in which our learning-theoretic setting can be generalized or made more realistic.
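In symbols (our notation, not the paper's): writing M_m(e) for the theory the theorist outputs after the first m items of the evidence stream e, and Th(e) for the complete true theory of the underlying structure, the two criteria can be glossed as

    \[
    \textbf{EA:}\quad \exists n\,\forall m \ge n\;\; M_m(e) = \mathrm{Th}(e),
    \qquad
    \textbf{AE:}\quad \forall \varphi\,\exists n\,\forall m \ge n\;\; \bigl(\varphi \in M_m(e) \leftrightarrow \varphi \in \mathrm{Th}(e)\bigr),
    \]

so the names record the order of the quantifiers: EA demands a single time after which the entire theory is correct, whereas AE allows the settling time to depend on the sentence in question.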
Philosophical logicians proposing theories of rational belief revision have had little to say about whether their proposals assist or impede the agent's ability to reliably arrive at the truth as his beliefs change through time. On the other hand, reliability is the central concern of formal learning theory. In this paper we investigate the belief revision theory of Alchourrón, Gärdenfors, and Makinson from a learning-theoretic point of view.
then essentially characterized the hypotheses that mechanical scientists can successfully decide in the limit in terms of arithmetic complexity. These ideas were developed still further by Peter Kugel [4]. In this paper, I extend this approach to obtain characterizations of identification in the limit, identification with bounded mind-changes, and identification in the short run, both for computers and for ideal agents with unbounded computational abilities. The characterization of identification with n mind-changes entails, as a corollary, an exact arithmetic characterization of Putnam's n-trial predicates, which closes a gap of a factor of two in Putnam's original characterization [12].
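For orientation (again in our own rough notation, with M_m(e) the conjecture output after the first m data of stream e and "?" marking refusal to conjecture), the criteria compared above can be glossed along the following lines:

    \[
    \text{identification in the limit:}\quad \exists n\,\forall m \ge n\;\; M_m(e) = \text{the correct answer};
    \]
    \[
    \text{identification with } k \text{ mind-changes:}\quad \text{as above, with }\ \bigl|\{\, m : M_{m+1}(e) \neq M_m(e),\ M_m(e) \neq\ ?\ \}\bigr| \le k;
    \]
    \[
    \text{identification in the short run:}\quad \text{the first non-"?" conjecture is already correct and is never withdrawn.}
    \]

These are informal glosses meant only to fix ideas; the paper's precise definitions may differ in detail.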
I show that a version of Ockham’s razor (a preference for simple answers) is advantageous in both domains when infallible inference is infeasible. A familiar response to the empirical problem...
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal any finite number of times as the sample size increases. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
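A caricature of the flipping phenomenon (a toy example of our own with a single weak linear dependence, not the paper's simulations or discovery algorithms) can be generated in a few lines: a standard consistent test of correlation may reverse its verdict several times before settling as the sample grows.

    # Hedged toy illustration of conclusion "flipping" as sample size grows.
    import numpy as np

    rng = np.random.default_rng(0)
    rho = 0.08                                   # weak true linear dependence
    n_max = 20000
    x = rng.normal(size=n_max)
    y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n_max)

    verdicts = []
    for n in range(50, n_max, 50):
        r = np.corrcoef(x[:n], y[:n])[0, 1]
        stat = np.arctanh(r) * np.sqrt(n - 3)    # Fisher z statistic
        verdicts.append(abs(stat) > 1.96)        # "dependent" vs "independent"

    flips = sum(1 for a, b in zip(verdicts, verdicts[1:]) if a != b)
    print("final verdict:", "dependent" if verdicts[-1] else "independent")
    print("verdict reversals along the way:", flips)

With a sufficiently weak dependence, the test statistic hovers near its critical value over a range of sample sizes, so the verdict typically flips several times before stabilizing, which is the kind of behavior the retraction-counting comparison above is meant to discipline.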
There is renewed interest in the logic of discovery as well as in the position that there is no reason for philosophers to bother with it. This essay shows that the traditional, philosophical arguments for the latter position are bankrupt. Moreover, no interesting defense of the philosophical irrelevance or impossibility of the logic of discovery can be formulated or defended in isolation from computation-theoretic considerations.
In this paper, we argue for the centrality of countable additivity to realist claims about the convergence of science to the truth. In particular, we show how classical sceptical arguments can be revived when countable additivity is dropped.
I have applied a fairly general, learning-theoretic perspective to some questions raised by Reichenbach's positions on induction and discovery. This is appropriate in an examination of the significance of Reichenbach's work, since the learning-theoretic perspective is to some degree part of Reichenbach's reliabilist legacy. I have argued that Reichenbach's positivism and his infatuation with probabilities are both irrelevant to his views on induction, which are principally grounded in the notion of limiting reliability. I have suggested that limiting reliability is still a formidable basis for the formulation of methodological norms, particularly when reliability cannot possibly be had in the short run, so that refined judgments about evidential support must depend upon measure-theoretic choices having nothing to do in the short run with the truth of the hypothesis under investigation. To illustrate the generality of Reichenbach's program, I showed how it can be applied to methods that aim to solve arbitrary assessment and discovery problems in various senses. In this generalized Reichenbachian setting, we can characterize the intrinsic complexity of reliable inductive inference in terms of topological complexity. Finally, I let Reichenbach's theory of induction have the last say about hypothetico-deductive method.
Ockham’s razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem, which states that scientists who heed Ockham’s razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random strategies, and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors, and the second extension concerns retractions in chance, times of retractions in chance, and chances of error.
This chapter presents a new semantics for inductive empirical knowledge. The epistemic agent is represented concretely as a learner who processes new inputs through time and who forms new beliefs from those inputs by means of a concrete, computable learning program. The agent’s belief state is represented hyper-intensionally as a set of time-indexed sentences. Knowledge is interpreted as avoidance of error in the limit and as having converged to true belief from the present time onward. Familiar topics are re-examined within the semantics, such as inductive skepticism, the logic of discovery, Duhem’s problem, the articulation of theories by auxiliary hypotheses, the role of serendipity in scientific knowledge, Fitch’s paradox, deductive closure of knowability, whether one can know inductively that one knows inductively, whether one can know inductively that one does not know inductively, and whether expert instruction can spread common inductive knowledge—as opposed to mere, true belief—through a community of gullible pupils.
We argue that uncomputability and classical scepticism are both reflections of inductive underdetermination, so that Church's thesis and Hume's problem ought to receive equal emphasis in a balanced approach to the philosophy of induction. As an illustration of such an approach, we investigate how uncomputable the predictions of a hypothesis can be if the hypothesis is to be reliably investigated by a computable scientific method.
Convergent realists desire scientific methods that converge reliably to informative, true theories over a wide range of theoretical possibilities. Much attention has been paid to the problem of induction from quantifier-free data. In this paper, we employ the techniques of formal learning theory and model theory to explore the reliable inference of theories from data containing alternating quantifiers. We obtain a hierarchy of inductive problems depending on the quantifier prefix complexity of the formulas that constitute the data, and we provide bounds relating the quantifier prefix complexity of the data to the quantifier prefix complexity of the theories that can be reliably inferred from such data without background knowledge. We also examine the question whether there are theories with mixed quantifiers that can be reliably inferred with closed, universal formulas in the data, but not without.
We defend a set of acceptance rules that avoids the lottery paradox, that is closed under classical entailment, and that accepts uncertain propositions without ad hoc restrictions. We show that the rules we recommend provide a semantics that validates exactly Adams’ conditional logic and are exactly the rules that preserve a natural, logical structure over probabilistic credal states that we call probalogic. To motivate probalogic, we first expand classical logic to geologic, which fills the entire unit cube, and then we project the upper surfaces of the geological cube onto the plane of probabilistic credal states by means of standard, linear perspective, which may be interpreted as an extension of the classical condition of indifference. Finally, we apply the geometrical/logical methods developed in the paper to prove a series of trivialization theorems against question-invariance as a constraint on acceptance rules and against rational monotonicity as an axiom of conditional logic in situations of uncertainty.
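For readers who have not met the lottery paradox mentioned above, the difficulty it poses for naive threshold acceptance can be stated in one line (our gloss, with L_i the proposition that ticket i of a fair n-ticket lottery loses and t < 1 an acceptance threshold):

    \[
    P(L_i) = 1 - \tfrac{1}{n} \ge t \quad \text{for every } i \le n \text{ (when } n \text{ is large)},
    \qquad
    P\Bigl(\bigwedge_{i=1}^{n} L_i\Bigr) = 0.
    \]

Accepting each L_i individually and closing under classical entailment thus commits one to a proposition that is certainly false; the rules defended in the paper are designed to keep closure under entailment without that consequence.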
Written by a Roman Catholic theologian, these essays cover how pastoral ministry should be done in today's society, including how to handle church law and build a collaborative church as well as specific issues such as euthanasia and embryo research.
This book is a tribute to Kevin Kelly, who has been one of the most influential British theologians for a number of decades. On its own merits, however, it is a groundbreaking collection of essays on key themes, issues, and concepts in contemporary moral theology and Christian ethics. The focus is on perspectives to inform moral debate and discernment in the future. The main themes covered are shown in the list of contents below. Several of the contributors are from the United States, three others live and work in Continental Europe, and the rest are from various parts of the British Isles. Many of the authors are among the best known in their fields on both sides of the Atlantic.
As a result of his visit to Uganda, on behalf of the Catholic Fund for Overseas Development, theologian Kevin Kelly made the discovery that poverty and marginalization create windows of opportunity for the transmission of AIDS. In this book, Kelly brings together the whole of his thinking and experience as a teacher, moral theologian, and parish priest to challenge the thinking of the Church on sex and sexuality as moral issues for our time.
What does it mean to be a Christian in this day and age? How does this affect the way we relate to one another? In the face of so many different moral views, Kevin Kelly affirms the common ground behind them: the dignity of the human person. He looks at the relationship between experience and the development of morality, and highlights women's indispensable contribution. He also examines the place for morality in the Church's teaching.
This thesis examines the prospects for mechanical procedures that can identify true, complete, universal, first-order logical theories on the basis of a complete enumeration of true atomic sentences. A sense of identification is defined that is more general than those usually studied in the learning-theoretic and inductive inference literature. Some identification algorithms based on confirmation relations familiar in the philosophy of science are presented. Each of these algorithms is shown to identify all purely universal theories without function symbols. It is demonstrated that no procedure can solve this universal theory inference problem in the more usual senses of identification. The question of efficiency for theory inference systems is addressed, and some definitions of limiting complexity are examined. It is shown that several aspects of obvious strategies for solving the universal theory inference problem are NP-hard. Finally, some non-worst-case heuristic search strategies are examined in light of these NP-completeness results. These strategies are based upon an isomorphism between clausal entailments of a certain class and partition lattices, and are applicable to the improvement of earlier work on language acquisition and logical inductive inference.
Formal learning theory is an approach to the study of inductive inference that has been developed by computer scientists. In this paper, I discuss the relevance of formal learning theory to such standard topics in the philosophy of science as underdetermination, realism, scientific progress, methodology, bounded rationality, the problem of induction, the logic of discovery, the theory of knowledge, the philosophy of artificial intelligence, and the philosophy of psychology.
This paper develops a framework in which to compare the discovery problems determined by a wide range of distinct hypothesis languages. Twelve theorems are presented which provide a comprehensive picture of the solvability of these problems according to four intuitively motivated criteria of scientific success.