Plausibly, when we adopt a probabilistic standpoint any measure C_b(h, e) of the degree to which evidence e confirms hypothesis h relative to background knowledge b should meet these five desiderata: C_b(h, e) > 0 when P_b(h | e) > P_b(h); C_b(h, e) < 0 when P_b(h | e) < P_b(h); C_b(h, e) = 0 when P_b(h | e) = P_b(h). C_b(h, e) is some function of the values P_b and P_be assume on the at most sixteen truth-functional combinations of e and h. If P_b(e | h) < P_b(e' | h) and P_b(e) = P_b(e') then C_b(h, e) ≤ C_b(h, e'); if P_b(e | h) = P_b(e' | h) and P_b(e) < P_b(e') then C_b(h, e) ≥ C_b(h, e'). Differences in C_b are fully determined by C_b together with the corresponding differences in C_be, confirmation relative to the background augmented by e; if C_b(h, e ∧ f) = 0 then C_b(h, e) + C_be(h, f) = 0. If P_b = P_b' then C_b = C_b'.
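As a concrete illustration of the first desideratum (positive relevance), here is a minimal sketch, not from the article, that evaluates one familiar log-ratio-style candidate, C_b(h, e) = log[P_b(h | e) / P_b(h)], on an invented joint distribution over the basic combinations of h and e; the distribution and helper names are assumptions made purely for the example.

```python
from math import log

# Invented joint probabilities P_b over the four basic combinations of h and e.
joint = {(True, True): 0.30, (True, False): 0.20,
         (False, True): 0.10, (False, False): 0.40}

def prob(pred):
    """Probability of the set of (h, e) combinations satisfying pred."""
    return sum(p for (h, e), p in joint.items() if pred(h, e))

p_h         = prob(lambda h, e: h)              # P_b(h)   = 0.50
p_e         = prob(lambda h, e: e)              # P_b(e)   = 0.40
p_h_given_e = prob(lambda h, e: h and e) / p_e  # P_b(h|e) = 0.75

C = log(p_h_given_e / p_h)                      # candidate measure C_b(h, e)

# First desideratum: the sign of C_b(h, e) tracks the direction of probabilistic relevance.
assert (C > 0) == (p_h_given_e > p_h)
print(round(C, 3))                              # log(1.5) ~ 0.405
```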
First paragraph: Truthmaker theory maintains that for every truth there is something, some thing, some entity, that makes it true. Balking at the prospect that logical truths are made true by any particular thing, a consequence that may in fact be hard to avoid (see Restall 1996, Read 2000), some restrict this principle of truthmaking to (logically) contingent truths. I aim to show that even in its restricted form, the principle is provably false.
This article begins by outlining some of the history—beginning with brief remarks of Quine's—of work on conditional assertions and conditional events. The upshot of the historical narrative is that diverse works from various starting points have circled around a nexus of ideas without convincingly tying them together. Section 3 shows how ideas contained in a neglected article of de Finetti's lead to a unified treatment of the topics based on the identification of conditional events as the objects of conditional bets. The penultimate section explores some of the consequences of the resulting logic of conditional events while the last defends it.
In making assertions one takes on commitments to the consistency of what one asserts and to the logical consequences of what one asserts. Although there is no quick link between belief and assertion, the dialectical requirements on assertion feed back into normative constraints on those beliefs that constitute one's evidence. But if we are not certain of many of our beliefs and that uncertainty is modelled in terms of probabilities, then there is at least prima facie incoherence between the normative constraints on belief and the probability-like structure of degrees of belief. I suggest that the norm-governed practice relating to degrees of belief is the evaluation of betting odds.
According to the axiologist the value concepts are basic and the deontic concepts are derivative. This paper addresses two fundamental problems that arise for the axiologist. First, what ought the axiologist to understand by the value of an act? Second, what are the prospects in principle for an axiological representation of moral theories? Can the deontic concepts of any coherent moral theory be represented by an agent-neutral axiology: (1) whatever structure those concepts have and (2) whatever the causal structure of the world happens to be? We show that the answer is "almost always". The only substantive constraint is that autonomous moral agents cannot have the power to simultaneously block the options open to other autonomous moral agents. But this seems to be part and parcel of the notion of an autonomous moral agent.
A proof employing no semantic terms is offered in support of the claim that there can be truths without truthmakers. The logical resources used in the proof are weak but do include the structural rule Contraction.
The thesis that, in a system of natural deduction, the meaning of a logical constant is given by some or all of its introduction and elimination rules has been developed recently in the work of Dummett, Prawitz, Tennant, and others, by the addition of harmony constraints. Introduction and elimination rules for a logical constant must be in harmony. By deploying harmony constraints, these authors have arrived at logics no stronger than intuitionist propositional logic. Classical logic, they maintain, cannot be justified from this proof-theoretic perspective. This paper argues that, while classical logic can be formulated so as to satisfy a number of harmony constraints, the meanings of the standard logical constants cannot all be given by their introduction and/or elimination rules; negation, in particular, comes under close scrutiny.
Proofs of Gödel's First Incompleteness Theorem are often accompanied by claims such as that the Gödel sentence constructed in the course of the proof says of itself that it is unprovable and that it is true. The validity of such claims depends closely on how the sentence is constructed. Only by tightly constraining the means of construction can one obtain Gödel sentences of which it is correct, without further ado, to say that they say of themselves that they are unprovable and that they are true; otherwise a false theory can yield false Gödel sentences.
Various natural deduction formulations of classical, minimal, intuitionist, and intermediate propositional and first-order logics are presented and investigated with respect to satisfaction of the separation and subformula properties. The technique employed is, for the most part, semantic, based on general versions of the Lindenbaum and Lindenbaum–Henkin constructions. Careful attention is paid to which properties of theories result in the presence of which rules of inference, and to restrictions on the sets of formulas to which the rules may be employed, restrictions determined by the formulas occurring as premises and conclusion of the invalid inference for which a counterexample is to be constructed. We obtain an elegant formulation of classical propositional logic with the subformula property and a singularly inelegant formulation of classical first-order logic with the subformula property, the latter, unfortunately, not a product of the strategy otherwise used throughout the article. Along the way, we arrive at an optimal strengthening of the subformula results for classical first-order logic obtained as consequences of normalization theorems by Dag Prawitz and Gunnar Stålmarck.
Starting from John MacFarlane's recent survey of answers to the question ‘What is assertion?’, I defend an account of assertion that draws on elements of MacFarlane's and Robert Brandom's commitment accounts, Timothy Williamson's knowledge norm account, and my own previous work on the normative status of logic. I defend the knowledge norm from recent attacks. Indicative conditionals, however, pose a problem when read along the lines of Ernest Adams' account, an account supported by much work in the psychology of reasoning. Furthermore, there seems to be no place for degrees of belief in the accounts of belief and assertion given here. Degrees of belief do have a role in decision‐making, but, again, there is much evidence that the orthodox theory of subjective utility maximization is not a good description of what we do in decision‐making and, arguably, neither is it a good normative guide to how we ought to make decisions.
While there is now considerable experimental evidence that, on the one hand, participants assign to the indicative conditional as probability the conditional probability of consequent given antecedent and, on the other, they assign to the indicative conditional the "defective truth-table" in which a conditional with false antecedent is deemed neither true nor false, these findings do not in themselves establish which multi-premise inferences involving conditionals participants endorse. A natural extension of the truth-table semantics pronounces as valid numerous inference patterns that do seem to be part of ordinary usage. However, coupled with something the probability account gives us (namely that when conditional-free φ entails conditional-free ψ, "if φ then ψ" is a trivial, uninformative truth) we have enough logic to derive the paradoxes of material implication. It thus becomes a matter of some urgency to determine which inference patterns involving indicative conditionals participants do endorse. Only thus will we be able to arrive at a realistic, systematic semantics for the indicative conditional.
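The following minimal sketch, not part of the article, illustrates the two experimental findings referred to above on an invented toy distribution: assigning the indicative conditional the conditional probability of consequent given antecedent, and evaluating it by the "defective" three-valued table on which a false antecedent leaves the conditional neither true nor false.

```python
# Hypothetical worlds with probabilities; each world settles antecedent A and consequent C.
worlds = [
    {"A": True,  "C": True,  "p": 0.35},
    {"A": True,  "C": False, "p": 0.15},
    {"A": False, "C": True,  "p": 0.20},
    {"A": False, "C": False, "p": 0.30},
]

# (1) Probability of "if A then C" as the conditional probability P(C | A).
p_A  = sum(w["p"] for w in worlds if w["A"])
p_AC = sum(w["p"] for w in worlds if w["A"] and w["C"])
print("P(if A then C) as P(C | A):", p_AC / p_A)   # 0.35 / 0.50 = 0.7

# (2) Defective truth table: true / false / neither, world by world.
def defective(w):
    if not w["A"]:
        return None          # antecedent false: the conditional is neither true nor false
    return w["C"]            # antecedent true: the conditional shares C's truth value

for w in worlds:
    print(w["A"], w["C"], "->", defective(w))
```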
As Wilfrid Hodges has observed, there is no mention of the notion truth-in-a-model in Tarski's article 'The Concept of Truth in Formalized Languages'; nor does truth make many appearances in his papers on model theory from the early 1950s. In later papers from the same decade, however, this reticence is cast aside. Why should Tarski, who defined truth for formalized languages and pretty much founded model theory, have been so reluctant to speak of truth in a model? What might explain the change in his practice? The answers, I believe, lie in Tarski's views on truth simpliciter.
This article begins by exploring a lost topic in the philosophy of science: the properties of the relations "evidence confirming h confirms h′" and, more generally, "evidence confirming each of h1, h2, ..., hm confirms at least one of h′1, h′2, ..., h′n". The Bayesian understanding of confirmation as positive evidential relevance is employed throughout. The resulting formal system is, to say the least, oddly behaved. Some aspects of this odd behaviour the system has in common with some of the non-classical logics developed in the twentieth century. One aspect – its "parasitism" on classical logic – it does not, and it is this feature that makes the system an interesting focus for discussion of questions in the philosophy of logic. We gain some purchase on an answer to a recently prominent question, namely, what is a logical system? More exactly, we ask whether satisfaction of formal constraints is sufficient for a relation to be considered a (logical) consequence relation. The question whether confirmation transfer yields a logical system is answered in the negative, despite confirmation transfer having the standard properties of a consequence relation, on the grounds that validity of sequents in the system is not determined by the meanings of the connectives that occur in formulas. Developing the system in a different direction, we find it bears on the project of "proof-theoretic semantics": conferring meaning on connectives by means of introduction (and possibly elimination) rules is not an autonomous activity; rather, it presupposes a prior, non-formal notion of consequence. Some historical ramifications are also addressed briefly.
Consistent application of coherence arguments shows that fair betting quotients are subject to constraints that are too stringent to allow their identification with either degrees of belief or probabilities. The pivotal role of fair betting quotients in the Dutch Book Argument, which is said to demonstrate that a rational agent's degrees of belief are probabilities, is thus undermined from both sides.
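For readers unfamiliar with the Dutch Book Argument mentioned above, here is a minimal sketch of the classic construction, with invented numbers: an agent whose betting quotients on A and not-A sum to more than 1 can be sold a pair of bets that lose money however A turns out.

```python
q_A, q_notA = 0.7, 0.5          # incoherent betting quotients: 0.7 + 0.5 = 1.2 > 1
stake = 1.0

# The agent pays q * stake for each bet and receives the stake back if that bet wins.
for A_true in (True, False):
    payoff_A    = (stake if A_true else 0.0) - q_A * stake
    payoff_notA = (stake if not A_true else 0.0) - q_notA * stake
    print(f"A = {A_true}: agent's net payoff = {payoff_A + payoff_notA:+.2f}")
# Both cases print -0.20: a guaranteed loss, the hallmark of incoherent betting quotients.
```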
From the point of view of proof-theoretic semantics, we examine the logical background invoked by Neil Tennant's abstractionist realist account of mathematical existence. To prepare the way, we must first look closely at the rule of existential elimination familiar from classical and intuitionist logics and at rules governing identity. We then examine how well free logics meet the harmony and uniqueness constraints familiar from the proof-theoretic semantics project. Tennant assigns a special role to atomic formulas containing singular terms. This, we (...) find, secures harmony and uniqueness but militates against the putative realism. (shrink)
Of his numerous investigations ... Tarski was most proud of two: his work on truth and his design of an algorithm in 1930 to decide the truth or falsity of any sentence of the elementary theory of the high school Euclidean geometry. [...] His mathematical treatment of the semantics of languages and the concept of truth has had revolutionary consequences for mathematics, linguistics, and philosophy, and Tarski is widely thought of as the man who "defined truth". The seeming simplicity of (...) his famous example that the sentence "Snow is white" is true just in case snow is white belies the depth and complexity of the consequences which can be drawn from the possibility of giving a general treatment of the concept of truth in formal mathematical languages in a rigorous mathematical way. (J.W. Addison). (shrink)
Intervals in boolean algebras enter into the study of conditional assertions (or events) in two ways: directly, either from intuitive arguments or from Goodman, Nguyen and Walker's representation theorem, as suitable mathematical entities to bear conditional probabilities, or indirectly, via a representation theorem for the family of algebras associated with de Finetti's three-valued logic of conditional assertions/events. Further representation theorems forge a connection with rough sets. The representation theorems and an equivalent of the boolean prime ideal theorem yield an algebraic completeness theorem for the three-valued logic. This in turn leads to a Henkin-style completeness theorem. Adequacy with respect to a family of Kripke models for de Finetti's logic, Łukasiewicz's three-valued logic and Priest's Logic of Paradox is demonstrated. The extension to first-order yields a short proof of adequacy for Körner's logic of inexact predicates.
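A minimal sketch, not from the article, of the interval idea referred to above, under the usual Goodman–Nguyen–Walker-style picture on which the conditional event (c | a) corresponds to the interval between a-and-c and the material conditional a-implies-c; the worlds, events and distribution below are invented. The conditional probability P(c | a) then always lies between the probabilities of the interval's endpoints, which is one reason such intervals are "suitable mathematical entities to bear conditional probabilities".

```python
worlds = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}        # invented sample space with probabilities
a = {1, 2, 3}                                     # antecedent event
c = {2, 3, 4}                                     # consequent event

def P(event):
    return sum(worlds[w] for w in event)

lower = a & c                                     # a and c
upper = (set(worlds) - a) | c                     # not-a or c (material conditional)

p_c_given_a = P(a & c) / P(a)
print(P(lower), "<=", round(p_c_given_a, 3), "<=", P(upper))   # 0.5 <= 0.833 <= 0.9
assert P(lower) <= p_c_given_a <= P(upper)
```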
The majority of formal accounts attribute to Stoic logicians the classical truth-functional understanding of the material conditional and exclusive disjunction. These interpretations were disputed,...
From Introduction: In a 1968 article, ‘Probability Measures of Fuzzy Events’, Lotfi Zadeh proposed accounts of absolute and conditional probability for fuzzy sets (Zadeh, 1968).
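For orientation, here is a minimal sketch of Zadeh's proposal for the absolute probability of a fuzzy event, namely the expectation of its membership function, P(A) = sum over x of mu_A(x)·p(x); the fuzzy event "warm" and the distribution are invented for the example.

```python
temperatures = [10, 15, 20, 25, 30]
p = {10: 0.1, 15: 0.2, 20: 0.3, 25: 0.25, 30: 0.15}        # ordinary probability distribution

def mu_warm(t):
    """Membership degree of temperature t in the (invented) fuzzy event 'warm'."""
    return min(1.0, max(0.0, (t - 15) / 10))

prob_warm = sum(mu_warm(t) * p[t] for t in temperatures)   # expectation of the membership function
print(round(prob_warm, 3))                                  # 0.5*0.3 + 1.0*0.25 + 1.0*0.15 = 0.55
```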
There are distinctive methodological and conceptual challenges in rare and severe event (RSE) forecast verification, that is, in the assessment of the quality of forecasts of rare but severe natural hazards such as avalanches, landslides or tornadoes. While some of these challenges have been discussed since the inception of the discipline in the 1880s, there is no consensus about how to assess RSE forecasts. This article offers a comprehensive and critical overview of the many different measures used to capture the quality of categorical, binary RSE forecasts – forecasts of occurrence and non-occurrence – and argues that, of the skill scores in the literature, only one is adequate for RSE forecasting. We do so by first focusing on the relationship between accuracy and skill and showing why skill is more important than accuracy in the case of RSE forecast verification. We then motivate three adequacy constraints for a measure of skill in RSE forecasting. We argue that, of the skill scores in the literature, only the Peirce skill score meets all three constraints. We then outline how our theoretical investigation has important practical implications for avalanche forecasting, basing our discussion on a study in avalanche forecast verification using the nearest-neighbour method (Heierli et al., 2004). Lastly, we raise what we call the "scope challenge"; this affects all forms of RSE forecasting and highlights how and why working with the right measure of skill is important not only for local binary RSE forecasts but also for the assessment of different diagnostic tests widely used in avalanche risk management and related operations, including the design of methods to assess the quality of regional multi-categorical avalanche forecasts.
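As a small illustration of the score the abstract singles out, here is a sketch, not from the article, computing the Peirce skill score (hit rate minus false-alarm rate, also known as the true skill statistic) from an invented 2x2 contingency table of binary forecasts; all counts are made up for the example.

```python
# Invented contingency table for forecasts of a rare event.
hits, false_alarms, misses, correct_negatives = 18, 40, 7, 935

hit_rate         = hits / (hits + misses)                          # probability of detection
false_alarm_rate = false_alarms / (false_alarms + correct_negatives)

peirce_skill_score = hit_rate - false_alarm_rate
print(round(peirce_skill_score, 3))   # 18/25 - 40/975 = 0.72 - 0.041 = 0.679
```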
Taking as starting point two familiar interpretations of probability, we develop these in a perhaps unfamiliar way to arrive ultimately at an improbable claim concerning the proper axiomatization of probability theory: the domain of definition of a point-valued probability distribution is an orthomodular partially ordered set. Similar claims have been made in the light of quantum mechanics but here the motivation is intrinsically probabilistic. This being so, the main task is to investigate what light, if any, this sheds on quantum mechanics. In particular it is important to know under what conditions these point-valued distributions can be thought of as derived from distribution-pairs of upper and lower probabilities on boolean algebras. Generalising known results, this investigation unsurprisingly proves unrewarding. In the light of this failure the next topic investigated is how these generalized probability distributions are to be interpreted.
This paper responds to Rancière’s reading of Lyotard’s analysis of the sublime by attempting to articulate what Lyotard would call a “differend” between the two. Sketching out Rancière’s criticisms, I show that Lyotard’s analysis of the Kantian sublime is more defensible than Rancière claims. I then provide an alternative reading, one that frees Lyotard’s sublime from Rancière’s central accusation that it signals nothing more than the mind’s perpetual enslavement to the law of the Other. Reading the sublime through the figure of the “event,” I end by suggesting that it may even have certain affinities with what Rancière calls “politics.”
Some propositions add more information to bodies of propositions than do others. We start with intuitive considerations on qualitative comparisons of information added. Central to these are considerations bearing on conjunctions and on negations. We find that we can discern two distinct, incompatible, notions of information added. From the comparative notions we pass to quantitative measurement of information added. In this we borrow heavily from the literature on quantitative representations of qualitative, comparative conditional probability. We look at two ways to obtain a quantitative conception of information added. One, the most direct, mirrors Bernard Koopman’s construction of conditional probability: by making a strong structural assumption, it leads to a measure that is, transparently, some function of a function P which is, formally, an assignment of conditional probability (in fact, a Popper function). P reverses the information added order and mislocates the natural zero of the scale, so some transformation of this scale is needed, but the derivation of P falls out so readily that no particular transformation suggests itself. The Cox–Good–Aczél method assumes the existence of a quantitative measure matching the qualitative relation, and builds on the structural constraints to obtain a measure of information that can be rescaled as, formally, an assignment of conditional probability. A classical result of Cantor’s, subsequently strengthened by Debreu, goes some way towards justifying the assumption of the existence of a quantitative scale. What the two approaches give us is a pointer towards a novel interpretation of probability as a rescaling of a measure of information added.
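A minimal sketch, not from the paper, under the illustrative assumption that information added is measured by the negative logarithm of an ordinary conditional probability; the joint distribution is invented. It displays the two features noted above: the underlying P reverses the information-added order (higher conditional probability, less information added), and the natural zero of the information scale corresponds to P = 1 rather than 0, which is why some rescaling is called for.

```python
from math import log

# Invented joint distribution over two propositions a, b.
joint = {(True, True): 0.2, (True, False): 0.3, (False, True): 0.4, (False, False): 0.1}

def P(a_val, given_b):
    p_b  = sum(p for (a, b), p in joint.items() if b == given_b)
    p_ab = sum(p for (a, b), p in joint.items() if a == a_val and b == given_b)
    return p_ab / p_b

def info_added(a_val, given_b):
    return -log(P(a_val, given_b))

# Order reversal: the proposition with the higher conditional probability adds less information.
print(P(True, True), P(False, True))                      # 1/3 vs 2/3
print(info_added(True, True) > info_added(False, True))   # True

# Natural zero: a proposition already settled by the background (P = 1) adds no information.
print(-log(1.0))                                           # 0.0
```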
Our starting point is Michael Luntley's falsificationist semantics for the logical connectives and quantifiers: the details of his account are criticised but we provide an alternative falsificationist semantics that yields intuitionist logic, as Luntley surmises such a semantics ought. Next an account of the logical connectives and quantifiers that combines verificationist and falsificationist perspectives is proposed and evaluated. While the logic is again intuitionist there is, somewhat surprisingly, an unavoidable asymmetry between the verification and falsification conditions for negation, the conditional, and the universal quantifier. Lastly, we are led to a novel characterization of realism.
George Schlesinger has characterized justified belief probabilistically. I question the propriety of this characterization and demonstrate that, with respect to it, certain principles of epistemic logic that he considers plausible are unsound.
This article takes a little-known reading of Kafka’s “In the Penal Colony” by Lyotard as the starting point for an examination of the relation between body and law. Lyotard’s late notion of the intractable serves as a frame for this examination: explicitly claimed to be an absolute condition of morals, it also has, I argue, political implications, which are here drawn out through the link between the intractable and the body. In Lyotard’s later writings, the body is usually associated with an originary affectivity, which is sometimes equated with sexual difference but sometimes appears to come “before” and exceed this law of bodily differences. It is the latter case, I argue, that allows for a path to be opened beyond the bodily violence of the law to be found in Kafka, especially if this is framed in terms of a certain “politics of incommensurability.”
Uncertainty and vagueness/imprecision are not the same: one can be certain about events described using vague predicates and about imprecisely specified events, just as one can be uncertain about precisely specified events. Exactly because of this, a question arises about how one ought to assign probabilities to imprecisely specified events in the case when no possible available evidence will eradicate the imprecision (because, say, of the limits of accuracy of a measuring device). Modelling imprecision by rough sets over an approximation space presents an especially tractable case to help get one’s bearings. Two solutions present themselves: the first takes as upper and lower probabilities of the event X the (exact) probabilities assigned X’s upper and lower rough-set approximations; the second, motivated both by formal considerations and by a simple betting argument, is to treat X’s rough-set approximation as a conditional event and assign to it a point-valued (conditional) probability.
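A minimal sketch, not from the paper, of the first solution described above, on an invented approximation space: the lower and upper rough-set approximations of an imprecisely specified event X are computed from the partition into indistinguishability blocks, and their exact probabilities serve as X's lower and upper probabilities.

```python
prob   = {1: 0.1, 2: 0.1, 3: 0.2, 4: 0.2, 5: 0.25, 6: 0.15}   # probabilities of the points
blocks = [{1, 2}, {3, 4}, {5, 6}]                              # partition into indistinguishability blocks
X = {1, 2, 3}                                                  # imprecisely specified event

lower_approx = set().union(*(b for b in blocks if b <= X))     # blocks wholly inside X: {1, 2}
upper_approx = set().union(*(b for b in blocks if b & X))      # blocks meeting X: {1, 2, 3, 4}

P = lambda event: sum(prob[x] for x in event)
print("lower probability:", P(lower_approx))                   # 0.2
print("upper probability:", P(upper_approx))                   # 0.6
```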