What is the source of logical and mathematical truth? This book revitalizes conventionalism as an answer to this question. Conventionalism takes logical and mathematical truth to have its source in linguistic conventions. This was an extremely popular view in the early 20th century, but it was never worked out in detail and is now almost universally rejected in mainstream philosophical circles. Shadows of Syntax is the first book-length treatment and defense of a combined conventionalist theory of logic and mathematics. It argues that our conventions, in the form of syntactic rules of language use, are perfectly suited to explain the truth, necessity, and apriority of logical and mathematical claims, as well as our logical and mathematical knowledge.
Here I defend dispositionalism about meaning and rule-following from Kripkenstein's infamous anti-dispositionalist arguments. The problems of finitude, error, and normativity are all addressed. The general lesson I draw is that Kripkenstein's arguments trade on an overly simplistic version of dispositionalism.
This paper formulates a general epistemological argument against what I call non-causal realism, generalizing domain specific arguments by Benacerraf, Field, and others. First I lay out the background to the argument, making a number of distinctions that are sometimes missed in discussions of epistemological arguments against realism. Then I define the target of the argument—non-causal realism—and argue that any non-causal realist theory, no matter the subject matter, cannot be given a reasonable epistemology and so should be rejected. Finally I discuss and respond to several possible responses to the argument. In addition to clearing up and avoiding numerous misunderstandings of arguments of this kind that are quite common in the literature, this paper aims to present and endorse a rigorous and fully general epistemological argument against realism.
This paper investigates the determinacy of mathematics. We begin by clarifying how we are understanding the notion of determinacy before turning to the questions of whether and how famous independence results bear on issues of determinacy in mathematics. From there, we pose a metasemantic challenge for those who believe that mathematical language is determinate, motivate two important constraints on attempts to meet our challenge, and then use these constraints to develop an argument against determinacy and discuss a particularly popular approach to resolving indeterminacy, before offering some brief closing reflections. We believe our discussion poses a serious challenge for most philosophical theories of mathematics, since it puts considerable pressure on all views that accept a non-trivial amount of determinacy for even basic arithmetic.
Inferences are familiar movements of thought, but despite important recent work on the topic, we do not yet have a fully satisfying theory of inference. Here I provide a functionalist theory of inference. I argue that the functionalist framework allows us the flexibility to meet various demands on a theory of inference that have been proposed (such as that it must explain inferential Moorean phenomena and epistemological ‘taking’), while also allowing us to compare, contrast, adapt, and combine features of extant theories of inference into one unified theory. In fleshing out the inference role, I also criticize the common assumption that inference requires rule-following.
In the work of both Matti Eklund and John Hawthorne there is an influential semantic argument for a maximally expansive ontology that is thought to undermine even modest forms of quantifier variance. The crucial premise of the argument holds that it is impossible for an ontologically "smaller" language to give a Tarskian semantics for an ontologically "bigger" language. After explaining the Eklund-Hawthorne argument (in section I), we show this crucial premise to be mistaken (in section II) by developing a Tarskian semantics for a mereological universalist language within a mereological nihilist language (a case which we, and Eklund and Hawthorne, take as representative). After developing this semantics we step back (in section III) to discuss the philosophical motivations behind the Eklund-Hawthorne argument’s demand for a semantics. We ultimately conclude that quantifier variantists can meet any demand for a semantics that might reasonably be imposed upon them.
Recently a number of works in meta-ontology have used a variant of J.H. Harris's collapse argument in the philosophy of logic as an argument against Eli Hirsch's quantifier variance. There have been several responses to the argument in the literature, but none of them have identified the central failing of the argument, viz., the argument has two readings: one on which it is sound but doesn't refute quantifier variance and another on which it is unsound. The central lesson I draw is that arguments against quantifier variance must pay strict attention to issues of translation and interpretation. The paper also has a substantial appendix in which I prove the equivalence of plural mereological nihilism and standard first-order atomistic mereology; results of this kind are often appealed to in the literature on quantifier variance but without many details on the nature or proof of the result.
Several recent epistemologists have used understanding-assent links in theories of a priori knowledge and justification, but Williamson influentially argued against the existence of such links. Here I (1) clarify the nature of understanding-assent links and their role in epistemology; (2) clarify and clearly formulate Williamson’s arguments against their existence; (3) argue that Williamson has failed to successfully establish his conclusion; and (4) rebut Williamson’s claim that accepting understanding-assent links amounts to a form of dogmatism.
In “Truth by Convention” W.V. Quine gave an influential argument against logical conventionalism. Even today his argument is often taken to decisively refute logical conventionalism. Here I break Quine’s arguments into two—the super-task argument and the regress argument—and argue that while these arguments together refute implausible explicit versions of conventionalism, they cannot be successfully mounted against a more plausible implicit version of conventionalism. Unlike some of his modern followers, Quine himself recognized this, but argued that implicit conventionalism was explanatorily idle. Against this I show that pace Quine’s claim that implicit conventionalism has no content beyond the claim that logic is firmly accepted, implicit rules of inference can be used to distinguish the firmly accepted from the conventional. As part of my case, I argue that positing syntactic rules of inference as part of our linguistic competence follows from the same methodology that leads contemporary linguists and cognitive scientists to posit rules of phonology, morphology, and grammar. The upshot of my discussion is a diagnosis of the fallacy in Quine’s master critique of logical conventionalism and a re-opening of possibilities for an attractive conventionalist theory of logic.
This essay clarifies quantifier variance and uses it to provide a theory of indefinite extensibility that I call the variance theory of indefinite extensibility. The indefinite extensibility response to the set-theoretic paradoxes sees each argument for paradox as a demonstration that we have come to a different and more expansive understanding of ‘all sets’. But indefinite extensibility is philosophically puzzling: extant accounts are either metasemantically suspect in requiring mysterious mechanisms of domain expansion, or metaphysically suspect in requiring nonstandard assumptions about mathematical objects. Happily, the view of quantifier meanings that underwrites quantifier variance can be used to provide an account of indefinite extensibility that is both metasemantically and metaphysically satisfying. Section 1 introduces the puzzle of indefinite extensibility; section 2 develops and clarifies the metasemantics of quantifier variance; section 3 solves section 1's puzzle of indefinite extensibility by applying section 2's account of quantifier meanings; and section 4 compares the theory developed in section 3 to several other theories in the literature.
Unrestricted inferentialism holds both that any collection of inference rules can determine a meaning for an expression and that meaning-constituting rules are automatically valid. Prior's infamous tonk connective refuted unrestricted inferentialism, or so it is universally thought. This paper argues against this consensus. I start by formulating the metasemantic theses of inferentialism with more care than they have hitherto received; I then consider a tonk language — Tonklish — and argue that the unrestricted inferentialist's treatment of this language is only problematic if it is mistakenly assumed that Tonklish can be homophonically translated into English. Next, I discuss the proper, non-homophonic, translation of Tonklish into English, rebut various objections, and consider several variants of Tonklish. The paper closes by highlighting the philosophical advantages that unrestricted inferentialism has over its competitors once the terrors of tonk have been tamed.
Some philosophers have argued that putative logical disagreements aren't really disagreements at all since when you change your logic you thereby change the meanings of your logical constants. According to this picture classical logicians and intuitionists don't really disagree, they just mean different things by terms like “not” and “or”. Quine gave an infamous “translation argument” for this view. Here I clarify the change of logic, change of meaning (CLCM) thesis, examine and find fault with Quine's translation argument for the thesis, offer a modified translation argument in its stead, defend my modified argument from a crucial objection, discuss where the CLCM thesis leaves logical disputes, and discuss if and how the thesis coheres with Quine's influential view of logic.
Our relationship to the infinite is controversial. But it is widely agreed that our powers of reasoning are finite. I disagree with this consensus; I think that we can, and perhaps do, engage in infinite reasoning. Many think it is just obvious that we can't reason infinitely. This is mistaken. Infinite reasoning does not require constructing infinitely long proofs, nor would it gift us with non-recursive mental powers. To reason infinitely we only need an ability to perform infinite inferences. I argue that we have this ability. My argument looks to our best current theories of inference and considers examples of apparent infinite reasoning. My position is controversial, but if I'm right, our theories of truth, mathematics, and beyond could be transformed. And even if I'm wrong, a more careful consideration of infinite reasoning can only deepen our understanding of thinking and reasoning. (Note for readers: the paper's brief discussion of uniform reflection and omega inconsistency is misleading. The imagined interlocutor's argument makes an assumption about the PA-provability of provability generalizations that, while true for the Gödel sentence's instances, is unjustified in general. This means my position is stronger against this objection than the paper suggests: omega-inconsistent theories are not automatically inconsistent with their uniform reflection principles; you also need to assume the arithmetically true Pi-2 sentences.)
An influential argument against the possibility of truth by linguistic convention holds that while conventions can determine which proposition a given sentence expresses, they (conventions) are powerless to make propositions true or false. This argument has been offered in the literature by Lewy, Yablo, Boghossian, Sider and others. But despite its influence and prima facie plausibility, the argument: (i) equivocates between different senses of “making true”; (ii) mistakenly assumes hyperintensional contexts are intensional; and (iii) relies upon an implausible vision of the way that language works.
In the mid twentieth century, logical positivists and many other philosophers endorsed a simple equation: something was necessary just in case it was analytic just in case it was a priori. Kripke’s examples of a posteriori necessary truths showed that the simple equation is false. But while positivist-style inferentialist approaches to logic and mathematics remain popular, there is no inferentialist account of necessity a posteriori. I give such an account. This sounds like an anti-Kripkean project, but it is not. Some of Kripke’s remarks even suggest this kind of approach. This inferentialist approach reinstates neither the simple equation nor pure conventionalism about necessity a posteriori. But it does lead to something near enough, a type of impure conventionalism. In recent years, metaphysically heavyweight approaches to modality have been popular, while other approaches have lagged behind. The inferentialist, impure conventionalist theory of necessity I describe aims to provide a metaphysically lightweight option in modal metaphysics.
Theodore Sider’s recent book, “Writing the Book of the World”, employs a primitive notion of metaphysical structure in order to make sense of substantive metaphysics. But Sider and others who employ metaphysical primitives face serious epistemological challenges. In the first section I develop a specific form of this challenge for Sider’s own proposed epistemology for structure; the second section develops a general reliability challenge for Sider’s theory; and the third and final section argues for the rejection of Siderean structure in the course of answering a transcendental argument against such rejection.
Conventionalism about mathematics claims that mathematical truths are true by linguistic convention. This is often spelled out by appealing to facts concerning rules of inference and formal systems, but this leads to a problem: since the incompleteness theorems, we’ve known that syntactic notions can be expressed using arithmetical sentences. There is serious prima facie tension here: how can mathematics be a matter of convention and syntax a matter of fact, given the arithmetization of syntax? This challenge has been pressed in the literature by Hilary Putnam and Peter Koellner. In this paper I sketch a conventionalist theory of mathematics, show that this conventionalist theory can meet the challenge just raised, and clarify the type of mathematical pluralism endorsed by the conventionalist by introducing the notion of a semantic counterpart. The paper’s aim is an improved understanding of conventionalism, pluralism, and the relationship between them.
Quantifier variance is a well-known view in contemporary metaontology, but it remains very widely misunderstood by critics. Here we briefly and clearly explain the metasemantics of quantifier variance and distinguish between modest and strong forms of variance (Section I), explain some key applications (Section II), clear up some misunderstandings and address objections (Section III), and point the way toward future directions of quantifier-variance-related research (Section IV).
Rudolf Carnap famously distinguished between the external meanings that existence questions have when asked by philosophers and the internal meanings they have when asked by non-philosophers. Carnap’s overall position involved various controversial commitments, but relatively uncontroversial interpretative principles also lead to a Carnap-style distinction between internal and external questions. In section 1 of this paper I offer arguments for such a distinction in several particular cases; in section 2 I defend my arguments from numerous objections and motivate them by using points drawn from the general theory of interpretation; and in section 3 I discuss the meanings of external questions, ultimately arguing that they are best understood as involving primitive metaphysical notions, and that when so understood, it is natural to adopt a general error theory about philosophical ontology.
The standard account of ontological commitment is quantificational. There are many old and well-chewed-over challenges to the account, but recently Kit Fine added a new challenge. Fine claimed that the “quantificational account gets the basic logic of ontological commitment wrong” and offered an alternative account that used an existence predicate. While Fine’s argument does point to a real lacuna in the standard approach, I show that his own account also gets “the basic logic of ontological commitment wrong”. In response, I offer a full quantificational account, using the resources of plural logic, and argue that it leads to a complete theory of natural language ontological commitment.
Daniel Dennett’s Consciousness Explained is probably the most widely read book about consciousness ever written by a philosopher. Despite this, the book has had a surprisingly small influence on how most philosophers of mind view consciousness. This might be because many philosophers badly misunderstand the book. They claim it does not even attempt to explain consciousness, but instead denies its very existence. Outside of philosophy the book has had more influence, but is saddled by the same misunderstanding. Now, 30 years after publication, Consciousness Explained deserves reconsideration from anyone interested in consciousness. Here I make a case for this. To start, I will clear up the central misunderstanding of the book. With that done, I will explain and update Dennett’s tantalizing approach to consciousness and the mind. The result brings us very, very close to explaining consciousness. Or so I will argue.
Some philosophers are metaphilosophical deflationists for metasemantic reasons. These theorists take standard philosophical assertions to be defective in some manner. There are various versions of metasemantic metaphilosophical deflationism, but a trap awaits any global version of it: metasemantics itself is a part of philosophy, so in deflating philosophy these theorists have thereby deflated the foundation of their deflationism. The present article discusses this issue and the prospects for an adequate response to the trap. Contrary to most historical responses, the article argues that the best response to the trap is to adopt a local but still pervasive metasemantic deflationism. Such a response might seem ad hoc, but the article argues that the human activity of philosophy isn't a natural kind, and that a heterogeneous metaphilosophy of the appropriate kind is well motivated.
This paper discusses the relevance of supertask computation for the determinacy of arithmetic. Recent work in the philosophy of physics has made plausible the possibility of supertask computers, capable of running through infinitely many individual computations in a finite time. A natural thought is that, if supertask computers are possible, this implies that arithmetical truth is determinate. In this paper we argue, via a careful analysis of putative arguments from supertask computations to determinacy, that this natural thought is mistaken: supertasks are of no help in explaining arithmetical determinacy.
Quantifier variance holds that different languages can have unrestricted quantifier expressions that differ in meaning, where an expression is a “quantifier” just in case it plays the right inferential role. Several critics argued that J.H. Harris’s “collapse” argument refutes variance by showing that identity of inferential role is incompatible with meaning variance. This standard, syntactic collapse argument has generated several responses. More recently, Cian Dorr proved semantic collapse theorems to generate a semantic collapse argument against variance. The argument is significantly different from standard collapse, so it requires a new response. Here I clarify and analyze the semantic collapse argument, and explain how variantists can and should respond to it. The paper also includes an appendix showing the difficulties of positing identity variance without quantifier variance. The argument in the appendix has yet to appear in print, but is familiar to specialists.
In many ontological debates there is a familiar challenge. Consider a debate over Xs. The “small” or anti-X side tries to show that they can paraphrase the pro-X or “big” side’s claims without any loss of expressive power. Typically though, when the big side adds whatever resources the small side used in their paraphrase, the symmetry breaks down. The big side plus small’s resources is a more expressively powerful and thus more theoretically fruitful theory. In this paper, I show that there is a very general solution to this problem, for the small side. Assuming the resources of set theory, small can successfully paraphrase big. This result depends on a theorem about models of set theory with urelements. After proving this theorem, I discuss some of its philosophical ramifications.
The distinction between the a priori and the a posteriori is an old and influential one. But both the distinction itself and the crucial notion of a priori knowledge face powerful philosophical challenges. Many philosophers worry that accepting the a priori is tantamount to accepting epistemic magic. In contrast, this Element argues that the a priori can be formulated clearly, made respectable, and used to do important epistemological work. The author's conception of the a priori and its role falls short of what some historical proponents of the notion may have hoped for, but it allows us to accept and use the notion without abandoning either naturalism or empiricism, broadly understood. This Element argues that we can accept and use the a priori without magic.
What is the role of imagination in a priori knowledge? Here I provide a partial answer, arguing that imagination can be used to shed light on which experiences merely enable knowledge, versus which are evidential. I reach this partial answer by considering in detail Timothy Williamson’s recent argument that the a priori/a posteriori distinction is insignificant. There are replies to the argument by Boghossian and Casullo that might work on their own terms, but my reply examines the assumptions that Williamson makes about the role of imagination in knowledge generation. I show that Williamson’s argument does not account for important distinctions from recent discussions of imaginative content. When these distinctions are not ignored, we can see that Williamson’s argument attributes knowledge to a subject on the basis of a faulty application of universal generalization. I close by connecting my positive account of the role of imagination in the a priori to a debate about the role of memory in the a priori that played out 25 years ago.
In the last 35 years many philosophers have appealed to reference magnetism to explain how it is that we mean what we mean. The idea is that it is a constitutive principle of metasemantics that the interpretation that assigns the more natural meanings is correct, ceteris paribus. Among other things, magnetism has been used to answer the challenges of grue and quus, Quine’s indeterminacy of translation argument, and Putnam’s model-theoretic argument against realism. Critics of magnetism have usually objected to the base notion of naturalness. Here I assume naturalness for the sake of argument, but argue that even still, reference magnetism should be rejected. The supposed force of reference magnetism is arbitrarily weak, and the best explanation of this is that it simply does not exist.