Paul M. Pietroski presents an ambitious new account of human languages as generative procedures that respect substantive constraints. He argues that meanings are neither concepts nor extensions, and sentences do not have truth conditions; meanings are composable instructions for how to access and assemble concepts of a special sort.
A common view is that ceteris paribus clauses render lawlike statements vacuous, unless such clauses can be explicitly reformulated as antecedents of 'real' laws that face no counterinstances. But such reformulations are rare; and they are not, we argue, to be expected in general. So we defend an alternative sufficient condition for the non-vacuity of ceteris paribus laws: roughly, any counterinstance of the law must be independently explicable, in a sense we make explicit. Ceteris paribus laws will carry a plethora of explanatory commitments; and claims that such commitments are satisfied will be as (dis)confirmable as other empirical claims.
The meaning of 'most' can be described in many ways. We offer a framework for distinguishing semantic descriptions, interpreted as psychological hypotheses that go beyond claims about sentential truth conditions, and an experiment that tells against an attractive idea: 'most' is understood in terms of one-to-one correspondence. Adults evaluated 'Most of the dots are yellow' as true or false on many trials in which yellow dots and blue dots were displayed for 200 ms. Displays manipulated the ease of using a 'one-to-one with remainder' strategy, and a strategy of using the Approximate Number System to compare (approximations of) cardinalities. Interpreting such data requires care in thinking about how meaning is related to verification. But the results suggest that 'most' is understood in terms of cardinality comparison, even when counting is impossible.
In just a few years, children achieve a stable state of linguistic competence, making them effectively adults with respect to: understanding novel sentences, discerning relations of paraphrase and entailment, acceptability judgments, etc. One familiar account of the language acquisition process treats it as an induction problem of the sort that arises in any domain where the knowledge achieved is logically underdetermined by experience. This view highlights the cues that are available in the input to children, as well as children's skills in extracting relevant information and forming generalizations on the basis of the data they receive. Nativists, on the other hand, contend that language-learners project beyond their experience in ways that the input does not even suggest. Instead of viewing language acquisition as a special case of theory induction, nativists posit a Universal Grammar, with innately specified linguistic principles of grammar formation. The nature versus nurture debate continues, as various poverty-of-stimulus arguments are challenged or supported by developments in linguistic theory and by findings from psycholinguistic investigations of child language. In light of some recent challenges to nativism, we rehearse old poverty-of-stimulus arguments, and supplement them by drawing on more recent work in linguistic theory and studies of child language.
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect “the innate schematism of mind that is applied to the data of experience” and that “might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge” (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various “poverty of stimulus” (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire.
Theories of content purport to explain, among other things, in virtue of what beliefs have the truth conditions they do have. The desire for such a theory has many sources, but prominent among them are two puzzling facts that are notoriously difficult to explain: beliefs can be false, and there are normative constraints on the formation of beliefs. If we knew in virtue of what beliefs had truth conditions, we would be better positioned to explain how it is possible for an agent to believe that which is not the case. Moreover, we do not say merely of such an agent that he believes that p when p is not the case. We say the agent made a mistake, and often criticize him accordingly; we think agents ought not have false beliefs, and that such beliefs should be changed; etc. An adequate theory of content would, presumably, reveal the source of these normative facts about the mental lives of agents. Indeed, it is typically taken to be an adequacy constraint on a theory of content that it help explain the possibility of error and the "normativity" of content. Teleological theories of content promise to do just this.
Paul Pietroski presents an original philosophical theory of actions and their mental causes. We often act for reasons: we deliberate and choose among options, based on our beliefs and desires. However, bodily motions always have biochemical causes, so it can seem that thinking and acting are biochemical processes. Pietroski argues that thoughts and deeds are in fact distinct from, though dependent on, underlying biochemical processes within persons.
I had heard it said that Chomsky’s conception of language is at odds with the truth-conditional program in semantics. Some of my friends said it so often that the point—or at least a point—finally sank in.
I argue that linguistic meanings are instructions to build monadic concepts that lie between lexicalizable concepts and truth-evaluable judgments. In acquiring words, humans use concepts of various adicities to introduce concepts that can be fetched and systematically combined via certain conjunctive operations, which require monadic inputs. These concepts do not have Tarskian satisfaction conditions. But they provide bases for refinements and elaborations that can yield truth-evaluable judgments. Constructing mental sentences that are true or false requires cognitive work, not just an exercise of basic linguistic capacities.
This paper proposes an Interface Transparency Thesis concerning how linguistic meanings are related to the cognitive systems that are used to evaluate sentences for truth/falsity: a declarative sentence S is semantically associated with a canonical procedure for determining whether S is true; while this procedure need not be used as a verification strategy, competent speakers are biased towards strategies that directly reflect canonical specifications of truth conditions. Evidence in favor of this hypothesis comes from a psycholinguistic experiment examining adult judgments concerning ‘Most of the dots are blue’. This sentence is true if and only if the number of blue dots exceeds the number of nonblue dots. But this leaves unsettled, e.g., how the second cardinality is specified for purposes of understanding and/or verification: via the nonblue things, given a restriction to the dots, as in ‘|{x: Dot(x) & ~Blue(x)}|’; via the blue things, given the same restriction, and subtraction from the number of dots, as in ‘|{x: Dot(x)}| – |{x: Dot(x) & Blue(x)}|’; or in some other way. Psycholinguistic evidence and psychophysical modeling support the second hypothesis.
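For concreteness, here is a minimal sketch, not from the paper itself, of the two verification procedures just described; the function names and the toy display are hypothetical, and the point is only that the two procedures are truth-conditionally equivalent while being computationally distinct.

```python
# Two sketched procedures for verifying 'Most of the dots are blue'.

def most_via_nonblue(dots):
    # |{x: Dot(x) & Blue(x)}| > |{x: Dot(x) & ~Blue(x)}|
    blue = sum(1 for d in dots if d == 'blue')
    nonblue = sum(1 for d in dots if d != 'blue')
    return blue > nonblue

def most_via_subtraction(dots):
    # |{x: Dot(x) & Blue(x)}| > |{x: Dot(x)}| - |{x: Dot(x) & Blue(x)}|
    blue = sum(1 for d in dots if d == 'blue')
    return blue > len(dots) - blue

display = ['blue', 'blue', 'yellow', 'blue', 'red']
# Same verdict on every display, reached by different computations:
assert most_via_nonblue(display) == most_via_subtraction(display)
```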
In this comment on Yli-Vakkuri and Hawthorne's illuminating book, Narrow Content, I address some issues related to externalist conceptions of linguistic meaning.
A sentence like 'every circle is blue' might be understood in terms of individuals and their properties or in terms of a relation between groups. Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with 'each', 'every', or 'all' combined with a predicate. We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with 'every' or 'all', as opposed to 'each'. We argue that 'every' and 'all' are understood in second-order terms that encourage group representation, while 'each' is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.
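For readers who want the contrast spelled out, the two renderings at issue can be stated as follows; this is a standard formalization, not the paper's own notation.

```latex
% First-order: 'every circle is blue' quantifies over individual circles
\forall x\,(\mathrm{Circle}(x) \rightarrow \mathrm{Blue}(x))

% Second-order: the sentence relates the groups themselves
\{x : \mathrm{Circle}(x)\} \subseteq \{x : \mathrm{Blue}(x)\}
```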
We propose that the generalizations of linguistic theory serve to ascribe beliefs to humans. Ordinary speakers would explicitly (and sincerely) deny having these rather esoteric beliefs about language--e.g., the belief that an anaphor must be bound in its governing category. Such ascriptions can also seem problematic in light of certain theoretical considerations having to do with concept possession, revisability, and so on. Nonetheless, we argue that ordinary speakers believe the propositions expressed by certain sentences of linguistic theory, and that linguistics can therefore teach us something about belief as well as language. Rather than insisting that ordinary speakers lack the linguistic beliefs in question, philosophers should try to show how these empirically motivated belief ascriptions can be correct. We argue that Stalnaker's (1984) "pragmatic" account--according to which beliefs are dispositions, and propositions are sets of possible worlds--does just this. Moreover, our construal of explanation in linguistics motivates (and helps provide) responses to two difficulties for the pragmatic account of belief: the phenomenon of opacity, and the so-called problem of deduction.
In a recent paper, Bar-On and Risjord (henceforth, 'B&R') contend that Davidson provides no good argument for his (in)famous claim that "there is no such thing as a language." And according to B&R, if Davidson had established his "no language" thesis, he would thereby have provided a decisive reason for abandoning the project he has long advocated--viz., that of trying to provide theories of meaning for natural languages by providing recursive theories of truth for such languages. For he would have shown that there are no languages to provide truth (or meaning) theories of. Davidson thus seems to be in the odd position of arguing badly for a claim that would undermine his own work.
The event analysis of action sentences seems to be at odds with plausible (Davidsonian) views about how to count actions. If Booth pulled a certain trigger, and thereby shot Lincoln, there is good reason for identifying Booth's action of pulling the trigger with his action of shooting Lincoln; but given truth conditions of certain sentences involving adjuncts, the event analysis requires that the pulling and the shooting be distinct events. So I propose that event sortals like 'shooting' and 'pulling' are true of complex events that have actions (and various effects of actions) as parts. Combining this view with some facts about so-called causative verbs, I then argue that paradigmatic actions are best viewed as tryings, where tryings are taken to be intentionally characterized events that typically cause peripheral bodily motions. The proposal turns on a certain conception of what it is to be the Agent of an event; and I conclude by elaborating this conception in the context of some recent discussions about the relation of thematic roles to grammatical categories.
The philosophical problem of mental causation concerns a clash between commonsense and scientific views about the causation of human behaviour. On the one hand, common sense suggests that our actions are caused by our mental states—our thoughts, intentions, beliefs and so on. On the other hand, neuroscience assumes that all bodily movements are caused by neurochemical events. It is implausible to suppose that our actions are causally overdetermined in the same way that the ringing of a bell may be overdetermined by two hammers striking it at the same time. So how are we to reconcile these two views about the causal origins of human behaviour? One philosophical doctrine effects a nice reconciliation. Neuralism, or the token-identity theory, states that every particular mental event is a neurophysiological event and that every action is a physically specifiable bodily movement. If these identities hold, there is no problem of causal overdetermination: the apparently different causal pathways to the behaviour are actually one and the same pathway viewed from different perspectives. This attractively simple view is enjoying a recent revival in fortunes.
In the normal course of events, children manifest linguistic competence equivalent to that of adults in just a few years. Children can produce and understand novel sentences, they can judge that certain strings of words are true or false, and so on. Yet experience appears to dramatically underdetermine the competence children so rapidly achieve, even given optimistic assumptions about children’s nonlinguistic capacities to extract information and form generalizations on the basis of statistical regularities in the input. These considerations underlie various (more specific) poverty of stimulus arguments for the innate specification of linguistic principles. But in our view, certain features of nativist arguments have not yet been fully appreciated. We focus here on three (related) kinds of poverty of stimulus argument, each of which has been supported by the findings of psycholinguistic investigations of child language.
Davidsonian analyses of action reports like ‘Alvin chased Theodore around a tree’ are often viewed as supporting the hypothesis that sentences of a human language H have truth conditions that can be specified by a Tarski-style theory of truth for H. But in my view, simple cases of adverbial modification add to the reasons for rejecting this hypothesis, even though Davidson rightly diagnosed many implications involving adverbs as cases of conjunct-reduction in the scope of an existential quantifier. I think the puzzles in this vicinity reflect “framing effects,” which reveal the implausibility of certain assumptions about how linguistic meaning is related to truth and logical form. We need to replace these assumptions with alternatives, instead of positing implausible values of event-variables or implausible relativizations of truth to linguistic descriptions of actual events.
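The conjunct-reduction pattern that Davidson diagnosed can be illustrated with a textbook event analysis; the rendering below is a standard illustration, not a quotation from the paper.

```latex
% 'Alvin chased Theodore quickly'
\exists e\,[\mathrm{Chase}(e, \mathrm{Alvin}, \mathrm{Theodore}) \wedge \mathrm{Quick}(e)]

% Dropping a conjunct within the scope of the existential quantifier
% yields 'Alvin chased Theodore':
\exists e\,[\mathrm{Chase}(e, \mathrm{Alvin}, \mathrm{Theodore})]
```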
Many concepts, which can be constituents of thoughts, are somehow indicated with words that can be constituents of sentences. But this assumption is compatible with many hypotheses about the concepts lexicalized, linguistic meanings, and the relevant forms of composition. On one familiar view, lexical items simply label the concepts they lexicalize, and composition of lexical meanings mirrors composition of the labeled concepts, which exhibit diverse adicities. If a phrase must instead be understood as an instruction to conjoin monadic concepts that correspond to the constituents, lexicalization must be a process in which non-monadic concepts are used to introduce monadic analogues. But given such analogues, along with some thematic concepts, conjunctions can mimic the effect of saturating polyadic concepts. The lexical items efface conceptual adicity distinctions, making it possible to treat a recursive combination of expressions as a sign of monadic predicate conjunction.
Chomsky’s (1995, 2000a) Minimalist Program (MP) invites a perspective on semantics that is distinctive and attractive. In section one, I discuss a general idea that many theorists should find congenial: the spoken or signed languages that human children naturally acquire and use—henceforth, human languages—are biologically implemented procedures that generate expressions, whose meanings are recursively combinable instructions to build concepts that reflect a minimal interface between the Human Faculty of Language (HFL) and other cognitive systems. In sections two and three, I develop this picture in the spirit of MP, in part by asking how much of the standard Frege-Tarski apparatus is needed in order to provide adequate and illuminating descriptions of the “concept assembly instructions” that human languages can generate. I’ll suggest that we can make do with relatively little, by treating all phrasal meanings as instructions to assemble number-neutral concepts that are monadic and conjunctive. But the goal is not to legislate what counts as minimal in semantics. Rather, by pursuing one line of Minimalist thought, I hope to show how such thinking can be fruitful.
How can a speaker explain that P without explaining the fact that P, or explain the fact that P without explaining that P, even when it is true (and so a fact) that P? Or in formal mode: what is the semantic contribution of 'explain' such that 'She explained that P' can be true, while 'She explained the fact that P' is false (or vice versa), even when 'P' is true? The proposed answer is that 'explained' is a semantically monadic predicate, satisfied by events of explaining. But 'the fact that P' (a determiner phrase) and 'that P' (a complementizer phrase) get associated with different thematic roles, corresponding to the distinction between a thing explained and the content of a speech act.
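On this proposal, the two reports receive distinct logical forms, roughly as sketched below; the formalization and the role labels 'Content' and 'Theme' are illustrative, not the paper's official notation.

```latex
% 'She explained that P': the complementizer phrase supplies the
% content of the speech act
\exists e\,[\mathrm{Explaining}(e) \wedge \mathrm{Agent}(e, \mathrm{she}) \wedge \mathrm{Content}(e, \mathrm{that}\ P)]

% 'She explained the fact that P': the determiner phrase supplies
% the thing explained
\exists e\,[\mathrm{Explaining}(e) \wedge \mathrm{Agent}(e, \mathrm{she}) \wedge \mathrm{Theme}(e, \mathrm{the\ fact\ that}\ P)]
```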
Words indicate concepts, which have various adicities. But words do not, in general, inherit the adicities of the indicated concepts. Lots of evidence suggests that when a concept is lexicalized, it is linked to an analytically related monadic concept that can be conjoined with others. For example, the dyadic concept CHASE(_,_) might be linked to CHASE(_), a concept that applies to certain events. Drawing on a wide range of extant work, and familiar facts, I argue that the (open class) lexical items of a natural spoken language include neither names nor polyadic predicates. The paper ends with some speculations about the value of a language faculty that would impose uniform monadic analyses on all concepts, including the singular and relational concepts that we share with other animals.
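The way conjunction plus thematic concepts can mimic saturation of a polyadic concept can be sketched in standard neo-Davidsonian notation; this is an illustration of the idea, not the paper's own formalism.

```latex
% Saturating the dyadic concept with two arguments:
\mathrm{CHASE}(x, y)

% Conjoining the monadic analogue CHASE(_) with thematic concepts
% and closing existentially mimics that saturation:
\exists e\,[\mathrm{CHASE}(e) \wedge \mathrm{AGENT}(e, x) \wedge \mathrm{PATIENT}(e, y)]
```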
Nativists inspired by Chomsky are apt to provide arguments with the following general form: languages exhibit interesting generalizations that are not suggested by casual (or even intensive) examination of what people actually say; correspondingly, adults (i.e., just about anyone above the age of four) know much more about language than they could plausibly have learned on the basis of their experience; so absent an alternative account of the relevant generalizations and speakers' (tacit) knowledge of them, one should conclude that there are substantive "universal" principles of human grammar and, as a result of human biology, children can only acquire languages that conform to these principles. According to Pullum and Scholz, linguists need not suppose that children are innately endowed with "specific contingent facts about natural languages." But Pullum and Scholz don't consider the kinds of facts that really impress nativists. Nor do they offer any plausible acquisition scenarios that would culminate in the acquisition of languages that exhibit the kinds of rich and interrelated generalizations that are exhibited by natural languages. As we stress, good poverty-of-stimulus arguments are based on specific principles -- confirmed by drawing on (negative and crosslinguistic) data unavailable to children -- that help explain a range of independently established linguistic phenomena. If subsequent psycholinguistic experiments show that very young children already know such principles, that strengthens the case for nativism; and if further investigation shows that children sometimes "try out" constructions that are unattested in the local language, but only if such constructions are attested in other human languages, then the case for nativism is made stronger still. We illustrate these points by considering an apparently disparate -- but upon closer inspection, interestingly related -- cluster of phenomena involving: negative polarity items, the interpretation of 'or', binding theory, and displays of Romance and Germanic constructions in child-English.
For any sentence of a natural language, we can ask the following questions: what is its meaning; what is its syntactic structure; and how is its meaning related to its syntactic structure? Attending to these questions, as they apply to sentences that provide evidence for Davidsonian event analyses, suggests that we reconsider some traditional views about how the syntax of a natural language sentence is related to its meaning.
This paper presents a slightly modified version of the compositional semantics proposed in Events and Semantic Architecture (OUP 2005). Some readers may find this shorter version, which ignores issues about vagueness and causal constructions, easier to digest. The emphasis is on the treatments of plurality and quantification, and I assume at least some familiarity with more standard approaches.
We can use sentences to present arguments, some of which are valid. This suggests that premises and conclusions, like sentences, have structure. This in turn raises questions about how logical structure is related to grammar, and how grammatical structure is related to thought and truth.
Here's one way this chapter could go. After defining the terms 'innate' and 'idea', we say whether Chomsky thinks any ideas are innate -- and if so, which ones. Unfortunately, we don't have any theoretically interesting definitions to offer; and, so far as we know, Chomsky has never said that any ideas are innate. Since saying that would make for a very short chapter, we propose to do something else. Our aim is to locate Chomsky, as he locates himself, in a rationalist tradition where talk of innate ideas has often been used to express the following view: the general character of human thought is due largely to human nature.
The general topic of "Mind and World", the written version of John McDowell's 1991 John Locke Lectures, is how `concepts mediate the relation between minds and the world'. And one of the main aims is `to suggest that Kant should still have a central place in our discussion of the way thought bears on reality' (1).1 In particular, McDowell urges us to adopt a thesis that he finds in Kant, or perhaps in Strawson's Kant: the content of experience is conceptualized; (...) _what_ we experience is always the kind of thing that we could also believe. When an agent has a veridical experience, she `takes in, for instance sees, _that things are thus and so_' (9). McDowell's argument for this thesis is indirect, but potentially powerful. He discusses a tension concerning the roles of experience and conceptual capacities in thought, and he claims that the only adequate resolution involves granting that experiences have conceptualized content. The tension, elaborated below, can be expressed roughly as follows: judgments must be somehow constrained by features of the external environment, else judgments would be utterly divorced from the world they purport to be about; yet our judgments must be somehow free of external control, else we could give no sense to the idea that we are responsible for our judgments. (shrink)
The meaning of a noun phrase like ‘brown cow’, or ‘cow that ate grass’, is somehow conjunctive. But conjunctive in what sense? Are the meanings of other phrases—e.g., ‘ate quickly’, ‘ate grass’, and ‘at noon’—similarly conjunctive? I suggest a possible answer, in the context of a broader conception of natural language semantics. But my main aim is to highlight some underdiscussed questions and some implications of our ignorance.