In everyday life we either express our beliefs in all-or-nothing terms or we resort to numerical probabilities: I believe it's going to rain or my chance of winning is one in a million. The Stability of Belief develops a theory of rational belief that allows us to reason with all-or-nothing belief and numerical belief simultaneously.
This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. Contrary to what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as will be argued in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc and that has various philosophical features that are plausible independently. In particular, this essay shows that the theory allows for a solution to the Lottery Paradox, and it has nice applications to formal epistemology. The price that is to be paid for this theory is a strong dependency of belief on the context, where a context involves both the agent's degree of belief function and the partitioning or individuation of the underlying possibilities. But as this essay argues, that price seems to be affordable.
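The stability notion at the center of this theory admits a compact formal gloss. The sketch below assumes the standard definition of P-stability associated with Leitgeb's theory (a proposition X is P-stable iff P(X | Y) > 1/2 for every proposition Y with positive probability that is consistent with X); the worlds and probability values are invented purely for illustration.

```python
from itertools import chain, combinations

# Toy probability space: four worlds and their probabilities
# (the numbers are invented for illustration).
P = {"w1": 0.54, "w2": 0.30, "w3": 0.10, "w4": 0.06}

def prob(X):
    """Probability of a set of worlds."""
    return sum(P[w] for w in X)

def subsets(worlds):
    """All subsets of the given collection of worlds."""
    ws = list(worlds)
    return chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))

def is_p_stable(X):
    """X is P-stable iff P(X | Y) > 1/2 for every Y with positive
    probability that is consistent with X (i.e., X ∩ Y ≠ ∅)."""
    X = set(X)
    for Y in map(set, subsets(P)):
        if prob(Y) > 0 and X & Y:
            if prob(X & Y) / prob(Y) <= 0.5:
                return False
    return True

# {"w1"} keeps a majority of the probability mass conditional on every
# live alternative, so it is P-stable; {"w2"} is not.
print(is_p_stable({"w1"}))  # True
print(is_p_stable({"w2"}))  # False
```

On this picture, believing exactly the supersets of some P-stable set with high probability yields a belief set that is logically closed and satisfies the Lockean thesis, which is the equivalence the abstract describes.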
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
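As an illustration of what a quadratic (Brier-style) inaccuracy measure computes, here is a minimal sketch; the propositions, credences, and worlds are invented, not drawn from the paper itself. The sketch also checks numerically a key feature of quadratic measures, propriety: by the agent's own lights, her own credence minimizes expected inaccuracy.

```python
def quadratic_inaccuracy(credences, world):
    """Brier-style score: the sum of squared distances between each
    credence and the truth value (1 or 0) of that proposition at the
    given world. Lower means more accurate."""
    return sum((credences[p] - (1.0 if p in world else 0.0)) ** 2
               for p in credences)

# Credences in two propositions; `world` is the set of true ones.
c = {"rain": 0.8, "wind": 0.3}
print(quadratic_inaccuracy(c, {"rain"}))          # (0.8-1)^2 + (0.3-0)^2 = 0.13
print(quadratic_inaccuracy(c, {"rain", "wind"}))  # (0.8-1)^2 + (0.3-1)^2 = 0.53

def expected_score(p, x):
    """Expected quadratic inaccuracy of reporting credence x in a
    single proposition, computed by the lights of credence p."""
    return p * (1 - x) ** 2 + (1 - p) * x ** 2

# Propriety: on a grid of candidate reports, the expected score under
# p = 0.8 is minimized exactly at the report x = 0.8.
best = min(range(101), key=lambda x: expected_score(0.8, x / 100))
print(best / 100)  # 0.8
```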
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
This article introduces, studies, and applies a new system of logic which is called ‘HYPE’. In HYPE, formulas are evaluated at states that may exhibit truth value gaps and truth value gluts. Simple and natural semantic rules for negation and the conditional operator are formulated based on an incompatibility relation and a partial fusion operation on states. The semantics is worked out in formal and philosophical detail, and a sound and complete axiomatization is provided both for the propositional and the predicate logic of the system. The propositional logic of HYPE is shown to contain first-degree entailment, to have the Finite Model Property, to be decidable, to have the Disjunction Property, and to extend intuitionistic propositional logic conservatively when intuitionistic negation is defined appropriately by HYPE’s logical connectives. Furthermore, HYPE’s first-order logic is a conservative extension of intuitionistic logic with the Constant Domain Axiom, when intuitionistic negation is again defined appropriately. The system allows for simple model constructions and intuitive Euler-Venn-like diagrams, and its logical structure matches structures well-known from ordinary mathematics, such as from optimization theory, combinatorics, and graph theory. HYPE may also be used as a general logical framework in which different systems of logic can be studied, compared, and combined. In particular, HYPE is found to relate in interesting ways to classical logic and various systems of relevance and paraconsistent logic, many-valued logic, and truthmaker semantics. On the philosophical side, if used as a logic for theories of type-free truth, HYPE is shown to address semantic paradoxes such as the Liar Paradox by extending non-classical fixed-point interpretations of truth by a conditional as well-behaved as that of intuitionistic logic.
Finally, HYPE may be used as a background system for modal operators that create hyperintensional contexts, though the details of this application need to be left to follow-up work.
Is it possible to give an explicit definition of belief in terms of subjective probability, such that believed propositions are guaranteed to have a sufficiently high probability, and yet it is neither the case that belief is stripped of any of its usual logical properties, nor is it the case that believed propositions are bound to have probability 1? We prove the answer is ‘yes’, and that given some plausible logical postulates on belief that involve a contextual “cautiousness” threshold, there is but one way of determining the extension of the concept of belief that does the job. The qualitative concept of belief is not to be eliminated from scientific or philosophical discourse, rather, by reducing qualitative belief to assignments of resiliently high degrees of belief and a “cautiousness” threshold, qualitative and quantitative belief turn out to be governed by one unified theory that offers the prospects of a huge range of applications. Within that theory, logic and probability theory are not opposed to each other but go hand in hand.
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view. We want to thank Leon Horsten, Jeff Ketland, Øystein Linnebo, John Mayberry, Richard Pettigrew, and Philip Welch for valuable comments on drafts of this paper. We are especially grateful to Fraser MacBride for correcting our interpretation of two of his papers and for other helpful comments.
What kinds of sentences with truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
This is part B of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual to be true even in the presence of relevant -worlds, as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis's work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other being truth-based as determined by our probabilistic semantics; these degrees cannot always coincide, owing to a new triviality result for counterfactuals, and they should not be identified in the light of their different interpretation and pragmatic purpose. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, namely the thesis that most ordinary counterfactuals are false.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
This paper suggests a bridge principle for all-or-nothing belief and degrees of belief to the effect that belief corresponds to stably high degree of belief. Different ways of making this Humean thesis on belief precise are discussed, and one of them is shown to stand out by unifying the others. The resulting version of the thesis proves to be fruitful in entailing the logical closure of belief, the Lockean thesis on belief, and coherence between decision-making based on all-or-nothing beliefs and on degrees of belief.
Is it possible to maintain classical logic, stay close to classical semantics, and yet accept that language might be semantically indeterminate? The article gives an affirmative answer by Ramsifying classical semantics, which yields a new semantic theory that remains much closer to classical semantics than supervaluationism but which at the same time avoids the problematic classical presupposition of semantic determinacy. The resulting Ramsey semantics is developed in detail, it is shown to supply a classical concept of truth and to fully support the rules and metarules of classical logic, and it is applied to vague terms as well as to theoretical or open-ended terms from mathematics and science. The theory also demonstrates how diachronic or synchronic interpretational continuity across languages is compatible with semantic indeterminacy.
This is Part A of an article that defends non-eliminative structuralism about mathematics by means of a concrete case study: a theory of unlabeled graphs. Part A summarizes the general attractions of non-eliminative structuralism. Afterwards, it motivates an understanding of unlabeled graphs as structures sui generis and develops a corresponding axiomatic theory of unlabeled graphs. As the theory demonstrates, graph theory can be developed consistently without eliminating unlabeled graphs in favour of sets; and the usual structuralist criterion of identity can be applied successfully in graph-theoretic proofs. Part B will turn to the philosophical interpretation and assessment of the theory.
We investigate the research programme of dynamic doxastic logic (DDL) and analyze its underlying methodology. The Ramsey test for conditionals is used to characterize the logical and philosophical differences between two paradigmatic systems, AGM and KGM, which we develop and compare axiomatically and semantically. The importance of Gärdenfors’s impossibility result on the Ramsey test is highlighted by a comparison with Arrow’s impossibility result on social choice. We end with an outlook on the prospects and the future of DDL.
This is Part B of an article that defends non-eliminative structuralism about mathematics by means of a concrete case study: a theory of unlabeled graphs. Part A motivated an understanding of unlabeled graphs as structures sui generis and developed a corresponding axiomatic theory of unlabeled graphs. Part B turns to the philosophical interpretation and assessment of the theory: it points out how the theory avoids well-known problems concerning identity, objecthood, and reference that have been attributed to non-eliminative structuralism. The part concludes by explaining how the theory relates to set theory, and what remains to be accomplished for non-eliminative structuralists.
It is well known that aggregating the degree-of-belief functions of different subjects by linear pooling or averaging is subject to a commutativity dilemma: other than in trivial cases, conditionalizing the individual degree-of-belief functions on a piece of evidence E followed by linearly aggregating them does not yield the same result as first aggregating them linearly and then conditionalizing the resulting social degree-of-belief function on E. In the present paper we suggest a novel way out of this dilemma: adapting the method of update or learning such that linear pooling commutes with it. As it turns out, the resulting update scheme – imaging on the evidence – is well known from areas such as the study of conditionals and causal decision theory, and a formal result from which the required commutativity property is derivable was supplied already by Gärdenfors in a different context. We end up determining under which conditions imaging would seem to be the right method of update, and under which conditions, therefore, group update would not be affected by the commutativity dilemma.
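The dilemma is easy to exhibit numerically. In the sketch below, the two agents' distributions, the evidence, and the equal pooling weights are all invented for illustration; the only point is that updating-then-pooling and pooling-then-updating come apart.

```python
def conditionalize(P, E):
    """Bayesian update of a distribution P (dict: world -> prob) on
    evidence E (a set of worlds)."""
    pE = sum(p for w, p in P.items() if w in E)
    return {w: (p / pE if w in E else 0.0) for w, p in P.items()}

def pool(P1, P2, weight=0.5):
    """Linear pooling: the pointwise weighted average of two
    distributions over the same worlds."""
    return {w: weight * P1[w] + (1 - weight) * P2[w] for w in P1}

# Two agents over three worlds (numbers invented for illustration).
P1 = {"a": 0.7, "b": 0.2, "c": 0.1}
P2 = {"a": 0.1, "b": 0.6, "c": 0.3}
E = {"a", "b"}  # the evidence rules out world c

update_then_pool = pool(conditionalize(P1, E), conditionalize(P2, E))
pool_then_update = conditionalize(pool(P1, P2), E)

# The two orders disagree: world "a" receives 29/63 (about 0.46) on the
# first order but exactly 0.5 on the second.
print(update_then_pool["a"], pool_then_update["a"])
```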
This article suggests that scientific philosophy, especially mathematical philosophy, might be one important way of doing philosophy in the future. Along the way, the article distinguishes between different types of scientific philosophy; it mentions some of the scientific methods that can serve philosophers; it aims to undermine some worries about mathematical philosophy; and it tries to make clear why in certain cases the application of mathematical methods is necessary for philosophical progress.
The thesis defended in this article is that by uttering or publishing a great many declarative sentences in assertoric mode, one does not actually assert that their conjunction is true – one rather asserts that the vast majority of these sentences are true. Accordingly, the belief that is expressed thereby is the belief that the vast majority of these sentences are true. In the article, we make this proposal precise, we explain the context-dependency of belief that corresponds to it, we point out why our everyday oral practice of single assertions is not affected by it, and we argue that the proposal leads to a way out of the Paradox of the Preface.
This monograph provides a new account of justified inference as a cognitive process. In contrast to the prevailing tradition in epistemology, the focus is on low-level inferences, i.e., those inferences that we are usually not consciously aware of and that we share with the cat nearby which infers that the bird that she sees picking grains from the dirt is able to fly. Presumably, such inferences are not generated by explicit logical reasoning, but logical methods can be used to describe and analyze such inferences. Part 1 gives a purely system-theoretic explication of belief and inference. Part 2 adds a reliabilist theory of justification for inference, with a qualitative notion of reliability being employed. Part 3 recalls and extends various systems of deductive and nonmonotonic logic and thereby explains the semantics of absolute and high reliability. In Part 4 it is proven that qualitative neural networks are able to draw justified deductive and nonmonotonic inferences on the basis of distributed representations. This is derived from a soundness/completeness theorem with regard to cognitive semantics of nonmonotonic reasoning. The appendix extends the theory both logically and ontologically, and relates it to A. Goldman's reliability account of justified belief. This text will be of interest to epistemologists and logicians, to all computer scientists who work on nonmonotonic reasoning and neural networks, and to cognitive scientists.
Rudolf Carnap's Der logische Aufbau der Welt (The Logical Structure of the World) is generally conceived of as being the failed manifesto of logical positivism. In this paper we will consider the following question: How much of the Aufbau can actually be saved? We will argue that there is an adaptation of the old system which satisfies many of the demands of the original programme. In order to defend this thesis, we have to show how a new 'Aufbau-like' programme may solve or circumvent the problems that affected the original Aufbau project. In particular, we are going to focus on how a new system may address the well-known difficulties in Carnap's Aufbau concerning abstraction, dimensionality, and theoretical terms.
If □ is conceived as an operator, i.e., an expression that, applied to a formula, yields another formula, the expressive power of the language is severely restricted when compared to a language where □ is conceived as a predicate, i.e., an expression that yields a formula when applied to a term. This consideration favours the predicate approach. The predicate view, however, is threatened mainly by two problems: some obvious predicate systems are inconsistent, and possible-worlds semantics for predicates of sentences has not been developed very far. By introducing possible-worlds semantics for the language of arithmetic plus the unary predicate □, we tackle both problems. Given a frame (W, R) consisting of a set W of worlds and a binary relation R on W, we investigate whether we can interpret □ at every world in such a way that □⌜A⌝ holds at a world w ∈ W if and only if A holds at every world v ∈ W such that wRv. The arithmetical vocabulary is interpreted by the standard model at every world. Several 'paradoxes' (like Montague's Theorem, Gödel's Second Incompleteness Theorem, McGee's Theorem on the ω-inconsistency of certain truth theories, etc.) show that many frames, e.g., reflexive frames, do not allow for such an interpretation. We present sufficient and necessary conditions for the existence of a suitable interpretation of □ at any world. Sound and complete semi-formal systems, corresponding to the modal systems K and K4, are presented for the class of all possible-worlds models for predicates and for the class of all transitive possible-worlds models. We apply our account also to nonstandard models of arithmetic and to languages other than the language of arithmetic.
We investigate the conditions under which quasianalysis, i.e., Carnap's method of abstraction in his Aufbau, yields adequate results. In particular, we state both necessary and sufficient conditions for the so-called faithfulness and fullness of quasianalysis, and analyze adequacy as the conjunction of faithfulness and fullness. It is shown that there is no method of (re-)constructing properties from similarity that delivers adequate results in all possible cases, if the same set of individuals is presupposed for properties and for similarity, and if similarity is a relation of finite arity. The theory is applied to various examples, including Russell's construction of temporal instants and Carnap's constitution of the phenomenal counterparts to quality spheres. Our results explain why the former is adequate while the latter is bound to fail.
Pure mathematical truths are commonly thought to be metaphysically necessary. Assuming the truth of pure mathematics as currently pursued, and presupposing that set theory serves as a foundation of pure mathematics, this article aims to provide a metaphysical explanation of why pure mathematics is metaphysically necessary.
This article explores ways in which the Revision Theory of Truth can be expressed in the object language. In particular, we investigate the extent to which semantic deficiency, stable truth, and nearly stable truth can be so expressed, and we study different axiomatic systems for the Revision Theory of Truth.
We argue that giving up on the closure of rational belief under conjunction comes with a substantial price. Either rational belief is closed under conjunction, or else the epistemology of belief has a serious diachronic deficit over and above the synchronic failures of conjunctive closure. The argument for this, which can be viewed as a sequel to the preface paradox, is called the ‘review paradox’; it is presented in four distinct, but closely related versions.
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher-level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher-level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories.
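The bottom-up flow of evidence that the abstract describes can be sketched with a toy two-level model. Everything here is invented for illustration (the paradigm names, the hypothesis weights, and the coin-flip likelihoods); the point is only that data observed at the lowest level shifts the posterior at the paradigm level.

```python
from math import prod

# Toy hierarchy: two "paradigms", each assigning prior weights to two
# concrete hypotheses; each hypothesis fixes the probability of
# observing heads. All numbers are invented for illustration.
paradigms = {
    "P1": {"h_fair": (0.9, 0.5), "h_biased": (0.1, 0.9)},
    "P2": {"h_fair": (0.2, 0.5), "h_biased": (0.8, 0.9)},
}
prior = {"P1": 0.5, "P2": 0.5}

def posterior_over_paradigms(data):
    """data: sequence of booleans (True = heads). A paradigm's
    likelihood marginalizes over its hypotheses, so evidence at the
    bottom level drives belief change at the paradigm level."""
    post = {}
    for name, hyps in paradigms.items():
        likelihood = sum(
            weight * prod(q if heads else 1 - q for heads in data)
            for weight, q in hyps.values()
        )
        post[name] = prior[name] * likelihood
    z = sum(post.values())
    return {name: value / z for name, value in post.items()}

# Eight heads in a row favour the biased-coin hypothesis, and thereby
# the paradigm (P2) that gave that hypothesis most of its weight.
print(posterior_over_paradigms([True] * 8))
```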
On the basis of impossibility results on probability, belief revision, and conditionals, it is argued that conditional beliefs differ from beliefs in conditionals qua mental states. Once this is established, it will be pointed out in what sense conditional beliefs are still conditional, even though they may lack conditional contents, and why it is permissible to still regard them as beliefs, although they are not beliefs in conditionals. Along the way, the main logical, dispositional, representational, and normative properties of conditional beliefs are studied, and it is explained how the failure to distinguish conditional beliefs from beliefs in conditionals can lead philosophical and empirical theories astray.
We present a way of classifying the logically possible ways out of Gärdenfors' inconsistency or triviality result on belief revision with conditionals. For one of these ways—conditionals which are not descriptive but which only have an inferential role as being given by the Ramsey test—we determine which of the assumptions in three different versions of Gärdenfors' theorem turn out to be false. This is done by constructing ranked models in which such Ramsey-test conditionals are evaluated and which are subject to natural postulates on belief revision and acceptability sets for conditionals. Along the way we show that in contrast with what Gärdenfors himself proposed, there is no dichotomy of the form: either the Ramsey test has to be given up or the Preservation condition. Instead, both of them follow from our postulates.
This paper deals with the class of axiomatic theories of truth for semantically closed languages, where the theories do not allow for standard models; i.e., those theories cannot be interpreted as referring to the natural number codes of sentences only (for an overview of axiomatic theories of truth in general, see Halbach [6]). We are going to give new proofs for two well-known results in this area, and we also prove a new theorem on the nonstandardness of a certain theory of truth. The results indicate that the proof strategies for all the theorems on the nonstandardness of such theories are "essentially" of the same structure.
We investigate how to assign probabilities to sentences that contain a type-free truth predicate. These probability values track how often a sentence is satisfied in transfinite revision sequences, following Gupta and Belnap’s revision theory of truth. This answers an open problem posed by Leitgeb, which asks how one might describe transfinite stages of the revision sequence using such probability functions. We offer a general construction, and explore additional constraints that lead to desirable properties of the resulting probability function. One such property is Leitgeb’s Probabilistic Convention T, which says that the probability of φ equals the probability that φ is true.
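Stated schematically, with Tr the truth predicate and corner quotes for Gödel coding, Probabilistic Convention T is the constraint (this formulation is a reconstruction from the abstract's gloss, not quoted from the paper):

```latex
P\bigl(\mathrm{Tr}(\ulcorner \varphi \urcorner)\bigr) = P(\varphi)
```

On the construction described above, P(φ) is read off from how often φ is satisfied across the stages of a transfinite revision sequence.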
This is a personal, incomplete, and very informal take on the role of logic in general philosophy of science, which is aimed at a broader audience. We defend and advertise the application of logical methods in philosophy of science, starting with the beginnings in the Vienna Circle and ending with some more recent logical developments.
A new justification of probabilism is developed that pays close attention to the structure of the underlying space of possibilities. Its central assumption is that rational numerical degrees of belief ...
The difficulties with formalizing the intensional notions necessity, knowability and omniscience, and rational belief are well-known. If these notions are formalized as predicates applying to (codes of) sentences, then from apparently weak and uncontroversial logical principles governing these notions, outright contradictions can be derived. Tense logic is one of the best understood and most extensively developed branches of intensional logic. In tense logic, the temporal notions future and past are formalized as sentential operators rather than as predicates. The question therefore arises whether the notions that are investigated in tense logic can be consistently formalized as predicates. In this paper it is shown that the answer to this question is negative. The logical treatment of the notions of future and past as predicates gives rise to paradoxes due to the specific interplay between both notions. For this reason, the tense paradoxes that will be presented are not identical to the paradoxes referred to above.
We introduce an epistemic theory of truth according to which the same rational degree of belief is assigned to Tr(⌜φ⌝) and to φ. It is shown that if epistemic probability measures are only demanded to be finitely additive (but not necessarily σ-additive), then such a theory is consistent even for object languages that contain their own truth predicate. As the proof of this result indicates, the theory can also be interpreted as deriving from a quantitative version of the Revision Theory of Truth.
Interpreted dynamical systems are dynamical systems with an additional interpretation mapping by which propositional formulas are assigned to system states. The dynamics of such systems may be described in terms of qualitative laws for which a satisfaction clause is defined. We show that the systems C and CL of nonmonotonic logic are adequate with respect to the corresponding description of the classes of interpreted ordered and interpreted hierarchical systems, respectively. Inhibition networks, artificial neural networks, logic programs, and evolutionary systems are instances of such interpreted dynamical systems, and thus our results entail that each of them may be described correctly and, in a sense, even completely by qualitative laws that obey the rules of a nonmonotonic logic system.
The aim of this paper is to give a certain algebraic account of truth: we want to define what we mean by De Morgan-valued truth models and show their existence even in the case of semantical closure: that is, languages may contain their own truth predicate if they are interpreted by De Morgan-valued models. Before we can prove this result, we have to repeat some basic facts concerning De Morgan-valued models in general, and we will introduce a notion of truth both on the object- and on the metalanguage level appropriate for such models. The definitions and the existence theorem are extensions of Kripke's, Woodruff's, and Visser's concepts and results concerning three- and four-valued truth models.
Famously, Frank P. Ramsey suggested a test for the acceptability of conditionals. Recently, David Chalmers and Alan Hájek (2007) have criticized a qualitative variant of the Ramsey test for indicative conditionals. In this paper we argue for the following three claims: (i) Chalmers and Hájek are right that the variant of the Ramsey test that they attack is not the correct way of spelling out an acceptability test for indicative conditionals. But there is a suppositional variant of the Ramsey test which is still stated in purely qualitative terms, which avoids the problems, and which looks correct. (ii) While the variant of the Ramsey test that Chalmers and Hájek criticize is not correct, it is still a good approximation of a correct formulation of the Ramsey test which may be usefully employed in various contexts. (iii) The variant of the Ramsey test that Chalmers and Hájek suggest as a substitute for the deficient version of the Ramsey test is itself subject to worries similar to those raised by Chalmers and Hájek, if it is given a non-suppositional interpretation.
Werning applies a theorem by Hodges in order to put forward an argument against Quine's thesis of the indeterminacy of translation and in favour of what Werning calls 'semantic realism'. We show that the argument rests on two critical premises both of which are false. The reasons for these failures are explained and the actual place of this application of Hodges' theorem within Quine's philosophy of language is outlined.
This volume collects contributions representing each viewpoint, and incorporates articles by William Bechtel, Jerry Fodor, Jaegwon Kim, Joëlle Proust, and ...