Supervaluational accounts of vagueness have come under assault from Timothy Williamson for failing to provide either a sufficiently classical logic or a disquotational notion of truth, and from Crispin Wright and others for incorporating a notion of higher-order vagueness, via the determinacy operator, which leads to contradiction when combined with intuitively appealing ‘gap principles’. We argue that these criticisms of supervaluation theory depend on giving supertruth an unnecessarily central role in that theory as the sole notion of truth, rather than as one mode of truth. Allowing for the co-existence of supertruth and local truth, we define a notion of local entailment in supervaluation theory, and show that the resulting logic is fully classical and allows for the truth of the gap principles. Finally, we argue that both supertruth and local truth are disquotational, when disquotational principles are properly understood.
In this paper, we define some consequence relations based on supervaluation semantics for partial models, and we investigate their properties. For our main consequence relation, we show that natural versions of the following fail: the upwards and downwards Löwenheim–Skolem theorems, axiomatizability, and compactness. We also consider an alternative version of supervaluation semantics, and show both axiomatizability and compactness for the resulting consequence relation.
Supervaluational treatments of vagueness are currently quite popular among those who regard vagueness as a thoroughly semantic phenomenon. Peter Unger's 'problem of the many' may be regarded as arising from the vagueness of our ordinary physical-object terms, so it is not surprising that supervaluational solutions to Unger's problem have been offered. I argue that supervaluations do not afford an adequate solution to the problem of the many. Moreover, the considerations I raise against the supervaluational solution tell also against the solution to the problem of the many which is suggested by adherents of the epistemic theory of vagueness.
Among other good things, supervaluation is supposed to allow vague sentences to go without truth values. But Jerry Fodor and Ernest Lepore have recently argued that it cannot allow this - not if it also respects certain conceptual truths. The main point I wish to make here is that they are mistaken. Supervaluation can leave truth-value gaps while respecting the conceptual truths they have in mind.
A method of supervaluation for Kripke’s theory of truth is presented. It differs from Kripke’s own method in that it employs trees; results in a compositional semantics; assigns the intuitively correct truth values to the sentences of a particularly tricky example of Gupta’s; and – it is argued – is acceptable as an explication of the correspondence theory of truth.
It’s not clear what supervaluationists should say about propositional content. Does a vague sentence, e.g., ‘Harry is bald’, express one proposition, or a barrage of propositions, or none at all? Or is the matter indeterminate? The supervaluationist canon is not decisive on the issue; authoritative passages can be cited in favor of each of the proposals just mentioned. Furthermore, some detractors have argued that supervaluationism is incapable of providing any coherent account of propositional content. This paper considers each of the proposals for how many propositions are expressed by a vague sentence: none, some, all, or it’s indeterminate. Most of these proposals turn out to be unworkable in the metalanguage. I conclude that orthodox supervaluationists—those who identify truth with supertruth—must either relax the standard requirement that propositions be bivalent, or else alter the standard relation between sentences and propositions in which a sentence inherits its truth-conditions from the proposition it expresses. The best option going forward for the orthodox supervaluationist is perhaps the most surprising—amend the requirement that propositions be bivalent. I argue that propositions having supervaluational truth-conditions are best suited to fill the propositional roles in the semantic theory of a vague language. These propositions admit of truth-value gaps, and gappy propositions are controversial, but I argue that they earn their keep.
In a recent paper, Barrio, Pailos and Szmuc show that there are logics that have exactly the validities of classical logic up to arbitrarily high levels of inference. They suggest that a logic therefore must be identified by its valid inferences at every inferential level. However, Scambler shows that there are logics with all the validities of classical logic at every inferential level, but with no antivalidities at any inferential level. Scambler concludes that in order to identify a logic, we at least need to look at the validities and the antivalidities of every inferential level. In this paper, I argue that this is still not enough to identify a logic. I apply BPS’s techniques in a super/sub-valuationist setting to construct a logic that has exactly the validities and antivalidities of classical logic at every inferential level. I argue that the resulting logic is nevertheless distinct from classical logic.
Supervaluationism is often described as the most popular semantic treatment of indeterminacy. There's little consensus, however, about how to fill out the bare-bones idea to include a characterization of logical consequence. The paper explores one methodology for choosing between the logics: pick a logic that norms belief as classical consequence is standardly thought to do. The main focus of the paper is a variant of standard supervaluationism on which we can characterize degrees of determinacy. It applies the methodology above to focus on degree logic. This is developed first in a basic, single-premise case; and then extended to the multipremise case, and to allow degrees of consequence. The metatheoretic properties of degree logic are set out. On the positive side, the logic is supraclassical: all classically valid sequents are degree logic valid. Strikingly, metarules such as cut and conjunction introduction fail.
In this paper I consider an interpretation of future contingents which motivates a unification of a Łukasiewicz-style logic with the more classical supervaluational semantics. This in turn motivates a new non-classical logic modelling what is “made true by history up until now.” I give a simple Hilbert-style proof theory, and a soundness and completeness argument for the proof theory with respect to the intended models.
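The supervaluational side of the semantics this abstract describes can be illustrated with a toy branching-time model. The sketch below is my own and uses invented names; nothing in it is specific to the paper's Hilbert system. A moment has several possible histories through it, and a sentence is "settled true" when it holds on every one of them, so a future contingent is neither settled true nor settled false even though excluded middle holds on each history.

```python
# Toy branching-time model: two histories pass through the present moment.
# (Illustrative sketch only; names and structure are my own invention.)
histories = {
    'h1': {'sea_battle_tomorrow': True},
    'h2': {'sea_battle_tomorrow': False},
}

def settled_true(atom):
    """Settled true: true on every history through the moment."""
    return all(h[atom] for h in histories.values())

def settled_false(atom):
    """Settled false: false on every history through the moment."""
    return all(not h[atom] for h in histories.values())

p = 'sea_battle_tomorrow'
print(settled_true(p))    # False: some history makes it false
print(settled_false(p))   # False: some history makes it true
# Yet 'p or not p' holds on each history, so the disjunction is settled true:
print(all(h[p] or not h[p] for h in histories.values()))   # True
```

The contingent sentence itself falls into a gap, while the classical tautology built from it is settled true on every history, which is the characteristic supervaluational pattern.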
Michael Kremer defines fixed-point logics of truth based on Saul Kripke’s fixed-point semantics for languages expressing their own truth concepts. Kremer axiomatizes the strong Kleene fixed-point logic of truth and the weak Kleene fixed-point logic of truth, but leaves the axiomatizability question open for the supervaluation fixed-point logic of truth and its variants. We show that the principal supervaluation fixed-point logic of truth, when thought of as a consequence relation, is highly complex: it is not even analytic. We also consider variants, engendered by a stronger notion of ‘fixed point’, and by variant supervaluation schemes. A ‘logic’ is often thought of, not as a consequence relation, but as a set of sentences – the sentences true on each interpretation. We axiomatize the supervaluation fixed-point logics so conceived.
In J Philos Logic 34:155–192, 2005, Leitgeb provides a theory of truth which is based on a theory of semantic dependence. We argue here that the conceptual thrust of this approach provides us with the best way of dealing with semantic paradoxes in a manner that is acceptable to a classical logician. However, in investigating a problem that was raised at the end of J Philos Logic 34:155–192, 2005, we discover that something is missing from Leitgeb’s original definition. Moreover, we show that once the appropriate repairs have been made, the resultant definition is equivalent to a version of the supervaluation definition suggested in J Philos 72:690–716, 1975 and discussed in detail in J Symb Log 51(3):663–681, 1986. The upshot of this is a philosophical justification for the simple supervaluation approach and fresh insight into its workings.
I consider two possible sources of vagueness. The first is indeterminacy about which intension is expressed by a word. The second is indeterminacy about which referent (extension) is determined by an intension. Focusing on a Fregean account of intensions, I argue that whichever account is right will matter to whether vagueness turns out to be a representational phenomenon (as opposed to being “in the world”). In addition, it will also matter to whether supervaluationism is a viable semantic framework. Based on these considerations, I end by developing an argument against supervaluational semantics that depends, instead, on anti-Fregean (Millian) assumptions.
Kripke’s theory of truth is arguably the most influential approach to self-referential truth and the semantic paradoxes. The use of a partial evaluation scheme is crucial to the theory, and the most prominent schemes that are adopted are the strong Kleene and the supervaluation scheme. The strong Kleene scheme is attractive because it ensures the compositionality of the notion of truth. But under the strong Kleene scheme classical tautologies do not, in general, turn out to be true and, as a consequence, classical reasoning is no longer admissible once the notion of truth is involved. The supervaluation scheme adheres to classical reasoning but violates compositionality. Moreover, it turns Kripke’s theory into a rather complicated affair: to check whether a sentence is true we have to look at all admissible precisifications of the interpretation of the truth predicate we are presented with. One consequence of this complicated evaluation condition is that under the supervaluation scheme a more proof-theoretic characterization of Kripke’s theory becomes inherently difficult, if not impossible. In this paper we explore the middle ground between the strong Kleene and the supervaluation scheme and provide an evaluation scheme that adheres to classical reasoning but retains many of the attractive features of the strong Kleene scheme. We supplement our semantic investigation with a novel axiomatic theory of truth that matches the semantic theory we have put forth.
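The contrast between the two schemes this abstract discusses is easy to see side by side. Below is a minimal sketch of my own (not taken from the paper): a strong Kleene evaluator over a partial valuation in which some atoms are undefined, and a supervaluational evaluator that checks classical truth on every completion of that valuation. A classical tautology like p ∨ ¬p comes out gappy on the first scheme but supertrue on the second.

```python
from itertools import product

def sk(formula, v):
    """Strong Kleene evaluation: returns True, False, or None (gap).
    Formulas are nested tuples: ('atom', name), ('not', f), ('or', f, g)."""
    op = formula[0]
    if op == 'atom':
        return v.get(formula[1])              # may be None (undefined)
    if op == 'not':
        x = sk(formula[1], v)
        return None if x is None else not x
    if op == 'or':
        a, b = sk(formula[1], v), sk(formula[2], v)
        if a is True or b is True:
            return True
        if a is False and b is False:
            return False
        return None

def supervaluate(formula, v, atoms):
    """Supertrue iff classically true on every completion of the partial
    valuation v; superfalse iff false on all; otherwise a gap (None)."""
    gaps = [p for p in atoms if v.get(p) is None]
    results = set()
    for bits in product([True, False], repeat=len(gaps)):
        w = dict(v)
        w.update(zip(gaps, bits))             # a classical precisification
        results.add(sk(formula, w))           # total valuation: classical
    if results == {True}:
        return True
    if results == {False}:
        return False
    return None

# 'p or not p' with p undefined: a gap under strong Kleene, supertrue.
lem = ('or', ('atom', 'p'), ('not', ('atom', 'p')))
print(sk(lem, {'p': None}))                   # None
print(supervaluate(lem, {'p': None}, ['p']))  # True
```

The sketch also makes the abstract's complexity point vivid: `supervaluate` must loop over every admissible precisification, whereas `sk` computes the value of a formula compositionally from the values of its parts.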
Issues concerning the putative perception/cognition divide are not only age-old, but also resurface in contemporary discussions in various forms. In this paper, I connect a relatively new debate concerning perceptual confidence to the perception/cognition divide. The term ‘perceptual confidence’ is quite common in the empirical literature, but there is an unsettled question about it, namely: are confidence assignments perceptual or post-perceptual? John Morrison in two recent papers puts forward the claim that confidence arises already at the level of perception. In this paper, I first argue that Morrison’s case is unconvincing, and then develop a picture of perceptual precision with the notions of ‘matching profile’ and ‘supervaluation’, highlighting the fact that this is a vagueness account, which is similar to but importantly different from indeterminacy accounts. With this model in hand, there can be rich resources with which to draw a theoretical line between perception and cognition.
For the sentences of languages that contain operators that express the concepts of definiteness and indefiniteness, there is an unavoidable tension between a truth-theoretic semantics that delivers truth conditions for those sentences that capture their propositional contents and any model-theoretic semantics that has a story to tell about how indefiniteness in a constituent affects the semantic value of sentences which embed it. But semantic theories of both kinds play essential roles, so the tension needs to be resolved. I argue that it is the truth theory which correctly characterises the notion of truth, per se. When we take into account the considerations required to bring model theory into harmony with truth theory, those considerations undermine the arguments standardly used to motivate supervaluational model theories designed to validate classical logic. But those considerations also show that celebration would be premature for advocates of the most frequently encountered rival approach - many-valued model theory.
When applying supervaluations to the analysis of a theory, one may encounter the following problem: in supervaluational semantics, contingent statements often have existential presuppositions, and these presuppositions may either contradict the theory or make the application of supervaluations pointless. The most natural way of handling this problem consists in revising the semantics each time a specific theory is considered, and in making the status of the axioms of the theory technically indistinguishable from that of logical truths. Philosophically, this position has important implications: one must either give up any absolute distinction between logical and non-logical truths or allow for a third class of truths besides analytic and factual ones.
In this paper I introduce Horwich’s deflationary theory of truth, called ‘Minimalism’, and I present his proposal of how to cope with the Liar Paradox. The proposal proceeds by restricting the T-schema and, as a consequence of that, it needs a constructive specification of which instances of the T-schema are to be excluded from Minimalism. Horwich has presented, in an informal way, one construction that specifies the Minimalist theory. The main aim of the paper is to present and scrutinize some formal versions of Horwich’s construction.
The method of supervaluations offers an elegant procedure by which semantic theory can come to terms with sentences that, for one reason or another, lack truth-value. I argue, however, that this method rests on a fundamental mistake, and so is unsuitable for semantics. The method of supervaluations, I argue, assigns semantic values to sentences based not on the semantic values of their components, but on the values of other, perhaps homophonic, but nevertheless distinct, expressions. That is because supervaluations are generated from classical valuations which necessarily require reinterpreting the component expressions, but the reinterpretation of an expression is tantamount to the introduction of a new expression, or alternatively, to a shift to an entirely new language. To confuse the expression of the language for which a semantic theory is developed with its reinterpreted counterpart is to commit a fallacy of equivocation. That is the flaw within the method of supervaluations. We see it manifest in a number of examples.
Starting with a trustworthy theory T, Galvan (1992) suggests reading off, from the usual hierarchy of theories determined by consistency strength, a finer-grained hierarchy in which theories higher up are capable of ‘explaining’, though not fully justifying, our commitment to theories lower down. One way to ascend Galvan’s ‘hierarchy of explanation’ is to formalize soundness proofs: to this extent it often suffices to assume a full theory of truth for the theory T whose soundness is at stake. In this paper, we investigate the possibility of an extension of this method. Our ultimate goal will be to extend T not only with truth axioms, but with a combination of axioms for predicates for truth and necessity. We first consider two alternative strategies for providing possible worlds semantics for necessity as a predicate, one based on classical logic, the other on a supervaluationist interpretation of necessity. We will then formulate a deductive system of truth and necessity in classical logic that is sound with respect to the given (nonclassical) semantics.
The first section (§1) of this essay defends reliance on truth values against those who, on nominalistic grounds, would uniformly substitute a truth predicate. I rehearse some practical, Carnapian advantages of working with truth values in logic. In the second section (§2), after introducing the key idea of auxiliary parameters (§2.1), I look at several cases in which logics involve, as part of their semantics, an extra auxiliary parameter to which truth is relativized, a parameter that caters to special kinds of sentences. In many cases, this facility is said to produce truth values for sentences that on the face of it seem neither true nor false. Often enough, in this situation appeal is made to the method of supervaluations, which operate by “quantifying out” auxiliary parameters, and thereby produce something like a truth value. Logics of this kind exhibit striking differences. I first consider the role that Tarski gives to supervaluation in first order logic (§2.2), and then, after an interlude that asks whether neither-true-nor-false is itself a truth value (§2.3), I consider sentences with non-denoting terms (§2.4), vague sentences (§2.5), ambiguous sentences (§2.6), paradoxical sentences (§2.7), and future-tensed sentences in indeterministic tense logic (§2.8). I conclude my survey with a look at alethic modal logic considered as a cousin (§2.9), and finish with a few sentences of “advice to supervaluationists” (§2.10), advice that is largely negative. The case for supervaluations as a road to truth is strong only when the auxiliary parameter that is “quantified out” is in fact irrelevant to the sentences of interest—as in Tarski’s definition of truth for classical logic. In all other cases, the best policy when reporting the results of supervaluation is to use only explicit phrases such as “settled true” or “determinately true,” never dropping the qualification.
The partial structures approach has two major components: a broad notion of structure (partial structure) and a weak notion of truth (quasi-truth). In this paper, we discuss the relationship between this approach and free logic. We also compare the model-theoretic analysis supplied by partial structures with the method of supervaluations, which was initially introduced as a technique to provide a semantic analysis of free logic. We then combine the three formal frameworks (partial structures, free logic and supervaluations), and apply the resulting approach to accommodate semantic paradoxes.
The original version of the article unfortunately contained a mistake. In the Acknowledgments section of the original version of the article, the grant number of the Marie Sklodowska-Curie Individual Fellowship supporting the author’s work was misstated.
This article focuses on an argument presented by Fara (2010) against supervaluationism in the context of vagueness. I show how that argument applies equally to branching-time supervaluationism (first presented by Thomason 1970), but not to the closely related 'STRL' semantics of Malpass and Wawer (2012).
It is widely assumed that the methods and results of science have no place among the data to which our semantics of vague predicates must answer. This despite the fact that it is well known that such prototypical vague predicates as ‘is bald’ play a central role in scientific research (e.g. the research that established Rogaine as a treatment for baldness). I argue here that the assumption is false and costly: in particular, I argue one cannot accept either supervaluationist semantics, or the criticism of that semantics offered by Fodor and Lepore, without having to abandon accepted, and unexceptionable, scientific methodology.
The mass/count distinction attracts a lot of attention among cognitive scientists, possibly because it involves in fundamental ways the relation between language (i.e. grammar), thought (i.e. extralinguistic conceptual systems) and reality (i.e. the physical world). In the present paper, I explore the view that the mass/count distinction is a matter of vagueness. While every noun/concept may in a sense be vague, mass nouns/concepts are vague in a way that systematically impairs their use in counting. This idea has never been systematically pursued, to the best of my knowledge. I make it precise relying on supervaluations (more specifically, ‘data semantics’) to model it. I identify a number of universals pertaining to how the mass/count contrast is encoded in the languages of the world, along with some of the major dimensions along which languages may vary on this score. I argue that the vagueness based model developed here provides a useful perspective on both. The outcome (besides shedding light on semantic variation) seems to suggest that vagueness is not just an interface phenomenon that arises in the interaction of Universal Grammar (UG) with the Conceptual/Intentional System (to adopt Chomsky’s terminology), but it is actually part of the architecture of UG.
Confused terms appear to signify more than one entity. Carnap maintained that any putative name that is associated with more than one object in a relevant universe of discourse fails to be a genuine name. Although many philosophers have agreed with Carnap, they have not always agreed among themselves about the truth-values of atomic sentences containing such terms. Some hold that such atomic sentences are always false, and others claim they are always truth-valueless. Field maintained that confused terms can still refer, albeit partially, and offered a supervaluational account of their semantic properties on which some atomic sentences with confused terms can be true. After outlining many of the most important theoretical considerations for and against various semantic theories for such terms, we report the results of a study designed to investigate which of these accounts best accords with the truth-value judgments of ordinary language users about sentences containing these terms. We found that naïve participants view confused names as capable of successfully referring to one or more objects. Thus, semantic theories that judge them to involve total reference failure do not comport well with patterns of ordinary usage.
The logic of singular terms that refer to nothing, such as ‘Santa Claus,’ has been studied extensively under the heading of free logic. The present essay examines expressions whose reference is defective in a different way: they signify more than one entity. The bulk of the effort aims to develop an acceptable formal semantics based upon an intuitive idea introduced informally by Hartry Field and discussed by Joseph Camp; the basic strategy is to use supervaluations. This idea, as it stands, encounters difficulties, but with suitable refinements it can be salvaged. Two other options for a formal semantics of multiply signifying terms are also presented, and I discuss the relative merits of the three semantics briefly. Finally, possible modifications to the standard logical regimentation of the notion of existence are considered.
Current supervaluation models of opinion, notably van Fraassen’s (1984; 1989; 1990; 1998; 2005; 2006) use of intervals to characterize vague opinion, capture nuances of ordinary reflection which are overlooked by classic measure theoretic models of subjective probability. However, after briefly explaining van Fraassen’s approach, we present two limitations in his current framework which provide clear empirical reasons for seeking a refinement. Any empirically adequate account of our actual judgments must reckon with the fact that these are typically neither uniform through the range of outcomes we take to be serious possibilities nor abrupt at the edges.
The chapter considers two semantic issues concerning will-sentences: Stalnaker’s Asymmetry and modal subordination in Karttunen-type discourses. The former points to a distinction between will and modal verbs, seeming to show that will does not license non-specific indefinites. The latter, conversely, suggests that will-sentences involve some kind of modality. To account for the data, the chapter proposes that will is semantically a tense, hence it doesn’t contribute a quantifier over modal alternatives; a modal feature, however, is introduced in the interpretation of a will-sentence through a supervaluational strategy universally quantifying over possible futures. That this is not part of will’s lexical semantics is shown to have consequences that ultimately contribute to explain Stalnaker’s Asymmetry. Furthermore, that a modal quantification is present in the interpretation of a will-sentence is shown to imply the availability of modal subordination in Karttunen-type discourses.
This paper asks which free logic a Fregean should adopt. It examines options within the tradition, including Carnap’s (1956) chosen-object theory, Lehmann’s (1994, 2002) strict Fregean free logic, Woodruff’s (1970) strong tables for the Boolean operators, and Bencivenga’s (1986, 1991) supervaluational semantics. It argues for a neutral free logic in view of its proximity to natural languages. However, disagreeing with Lehmann, it claims a Fregean should adopt the strong tables on the basis of Frege’s discussion of generality. Supervaluation uses the strong tables and aims to give them a semantic justification. However, supervaluation is in turn justified by convention or thought experiments, which Lehmann argues are inadequate. The paper proposes a new justification of supervaluation based on sense and two-dimensional semantics. The resulting model, coined Supervaluational Neutral Free Logic (SNFL), resolves many conflicts between Lehmann and Bencivenga while staying close to Frege’s discussion of non-denotation. It also provides new insights into the relations among truth, logical truth, and supervaluated truth (or supertruth, for short).
Supervaluational theories of vagueness have achieved considerable popularity in the past decades, as seen in e.g. [5], [12]. This popularity is only natural; supervaluations let us retain much of the power and simplicity of classical logic, while avoiding the commitment to strict bivalence that strikes many as implausible. Like many nonclassical logics, the supervaluationist system SP has a natural dual, the subvaluationist system SB, explored in e.g. [6], [28]. As is usual for such dual systems, the classical features of SP (typically viewed as benefits) appear in SB in ‘mirror-image’ form, and the nonclassical features of SP (typically viewed as costs) also appear in SB in ‘mirror-image’ form. Given this circumstance, it can be difficult to decide which of two dual systems is better suited for an approach to vagueness. The present paper starts from a consideration of these two approaches—the supervaluational and the subvaluational—and argues that neither of them is well-positioned to give a sensible logic for vague language. §2 presents the systems SP and SB and argues against their usefulness. Even if we suppose that the general picture of vague language they are often taken to embody is accurate, we ought not arrive at systems like SP and SB. Instead, such a picture should lead us to truth-functional systems like strong Kleene logic (K3) or its dual LP. §3 presents these systems, and argues that supervaluationist and subvaluationist understandings of language are better captured there; in particular, that a dialetheic approach to vagueness based on the logic LP is a more sensible approach. §4 goes on to consider the phenomenon of higher-order vagueness within an LP-based approach, and §5 closes with a consideration of the sorites argument itself.
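The relationship between K3 and LP that the abstract leans on can be made concrete in a few lines (my own illustration, not drawn from the paper): the two logics share the same three-valued truth tables and differ only in which values are designated, K3 designating only 1 while LP designates both 1 and 1/2. That single difference is why excluded middle fails as a K3 validity but holds as an LP validity.

```python
from itertools import product
from fractions import Fraction

H = Fraction(1, 2)                    # the middle value, shared by K3 and LP
NOT = lambda x: 1 - x                 # shared negation table
OR  = lambda x, y: max(x, y)          # shared disjunction table

def valid(formula_fn, n_atoms, designated):
    """A formula is valid iff it takes a designated value on every
    assignment of values from {0, 1/2, 1} to its atoms."""
    return all(formula_fn(*vals) in designated
               for vals in product([0, H, 1], repeat=n_atoms))

lem = lambda p: OR(p, NOT(p))         # p OR not-p takes value 1/2 at p = 1/2
print(valid(lem, 1, {1}))             # False: excluded middle fails in K3
print(valid(lem, 1, {1, H}))          # True: excluded middle holds in LP
```

Nothing here reproduces SP or SB themselves, which are not truth-functional; the point is only the designated-value duality between the two truth-functional systems the paper recommends instead.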