One of the goals of a certain brand of philosopher has been to give an account of language and linguistic phenomena by means of showing how sentences are to be translated into a "logically perspicuous notation" (or an "ideal language", to use passé terminology). The usual reason given by such philosophers for this activity is that such a notational system will somehow illustrate the "logical form" of these sentences. There are many candidates for this notational system: (almost) ordinary first-order predicate logic (see Quine), higher-order predicate logic (see Parsons [1968, 1970]), intensional logic (see Montague [1969, 1970a, 1970b, 1971]), and transformational grammar (see Harman), to mention some of the more popular ones. I do not propose to discuss the general question of the correctness of this approach to the philosophy of language, nor do I wish to adjudicate among the notational systems mentioned here. Rather, I want to focus on one problem which must be faced by all such systems, a problem that must be discussed before one decides upon a notational system and tries to demonstrate that it in fact can account for all linguistic phenomena. The general problem is to determine what we shall allow as linguistic data; in this paper I shall restrict my attention to this general problem as it appears when we try to account for certain words with non-singular reference, in particular, the words that are classified by the count/mass and sortal/non-sortal distinctions.
In (1991), Meinwald initiated a major change of direction in the study of Plato’s Parmenides and the Third Man Argument. On her conception of the Parmenides, Plato’s language systematically distinguishes two types or kinds of predication, namely, predications of the kind ‘x is F pros ta alla’ and ‘x is F pros heauto’. Intuitively speaking, the former is the common, everyday variety of predication, which holds when x is any object (perceptible object or Form) and F is a property which x exemplifies or instantiates in the traditional sense. The latter is a special mode of predication which holds when x is a Form and F is a property which is, in some sense, part of the nature of that Form. Meinwald (1991, p. 75, footnote 18) traces the discovery of this distinction in Plato’s work to Frede (1967), who marks the distinction between pros allo and kath’ hauto predications by placing subscripts on the copula ‘is’.
Simple mass nouns are words like ‘water’, ‘furniture’ and ‘gold’. We can form complex mass noun phrases such as ‘dirty water’, ‘leaded gold’ and ‘green grass’. I do not propose to discuss the problems in giving a characterization of the words that are mass versus those that are not. For the purposes of this paper I shall make the following decrees: (a) nothing that is not a noun or noun phrase can be mass, (b) no abstract noun phrases are considered mass, (c) words like ‘thing’, ‘entity’ and ‘object’ are not mass, (d) I shall not consider such words as ‘stuff’, ‘substance’ or ‘matter’, (e) measures on mass nouns (like ‘gallon of gasoline’, ‘blade of grass’, etc.) are not considered, (f) plurals of count terms are not considered mass. Within these limitations, we can say generally that mass noun phrases are those phrases to which ‘much’ can be prefixed, but to which ‘many’ cannot be prefixed, without anomaly. Semantically, such phrases usually have the property of collectiveness: they are true of any sum of things of which they are true; and of divisiveness: they are true of any part (down to a certain limit) of things of which they are true. All of this, however, is only ‘generally speaking’; I shall mostly use only the simple examples given above and ignore the problems in giving a complete characterization of mass nouns. In the paper I want to discuss some problems involved in casting English sentences containing mass nouns into some artificial language; but in order to do this we should have some anchoring framework on which to justify or reject a given proposal. The problem of finding an adequate language can be viewed as a case of translation (from English to the artificial language), where the translation relation must meet certain requirements. I shall suggest five such requirements; others could be added.
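The collectiveness and divisiveness properties just mentioned can be given a toy model. The sketch below is my own illustration, not the paper's formalism: portions are modeled as nonempty finite sets, sums as unions, and parts as nonempty subsets.

```python
# A toy mereology sketch (my own, not the paper's): a mass predicate like
# 'water' should be collective (true of sums of things it is true of) and
# divisive (true of parts of things it is true of, down to some limit).
def water(portion):
    # assumption for illustration: water-atoms are the strings 'w1', 'w2', ...
    return len(portion) > 0 and all(a.startswith('w') for a in portion)

a = frozenset({'w1', 'w2'})
b = frozenset({'w3'})
assert water(a) and water(b)
assert water(a | b)                      # collectiveness: true of the sum
assert all(water(frozenset(p)) for p in [{'w1'}, {'w2'}])  # divisiveness
assert not water(frozenset({'d1'}))      # a non-water atom fails the predicate
```

The "down to a certain limit" caveat shows up here as the restriction to nonempty subsets: below the atoms, the predicate no longer applies.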
(although the FOF, unlike the CNF, is still a theorem). The correct version of Problem 62 is (following the format of Pelletier, 1986) given by its natural FOF, negated conclusion, and CNF; the natural FOF is (∀x)[(Pa ∧ (Px ⊃ Pf(x))) ⊃ Pf(f(x))], whose clausal form is (¬Pa ∨ Px ∨ Pf(f(x))) ∧ (¬Pa ∨ ¬Pf(x) ∨ Pf(f(x))).
Different researchers use "the philosophy of automated theorem proving" to cover different concepts, indeed, different levels of concepts. Some would count such issues as how to efficiently index databases as part of the philosophy of automated theorem proving. Others wonder about whether formulas should be represented as strings or as trees or as lists, and call this part of the philosophy of automated theorem proving. Yet others concern themselves with what kind of search should be embodied in any automated theorem prover, or to what degree any automated theorem prover should resemble Prolog. Still others debate whether natural deduction or semantic tableaux or resolution is "better", and call this a part of the philosophy of automated theorem proving. Some people wonder whether automated theorem proving should be "human oriented" or "machine oriented": sometimes arguing about whether the internal proof methods should be "human-like" or not, sometimes arguing about whether the generated proof should be output in a form understandable by people, and sometimes arguing about the desirability of human intervention in the process of constructing a proof. There are also those who ask such questions as whether we should even be concerned with completeness or with soundness of a system, or perhaps we should instead look at very efficient (but incomplete) subsystems or look at methods of generating models which might nevertheless validate invalid arguments. And all of these have been viewed as issues in the philosophy of automated theorem proving. Here, I would like to step back from such implementation issues and ask: "What do we really think we are doing when we write an automated theorem prover?"
My reflections are perhaps idiosyncratic, but I do think that they put the different researchers' efforts into a broader perspective, and give us some kind of handle on which directions we ourselves might wish to pursue when constructing (or extending) an automated theorem proving system. A logic is defined to be (i) a vocabulary and formation rules (which tell us what strings of symbols are well-formed formulas in the logic), and (ii) a definition of 'proof' in that system (which tells us the conditions under which an arrangement of formulas in the system constitutes a proof). Historically speaking, definitions of 'proof' have been given in various different manners: the most common have been Hilbert-style (axiomatic), Gentzen-style (consecution, or sequent), Fitch-style (natural deduction), and Beth-style (tableaux).
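Clause (i) of this definition, the formation rules, can be illustrated with a toy example. The sketch below is my own (the mini-language of atoms, negation, and parenthesized conjunction is invented for illustration): it decides which strings of symbols are well-formed formulas.

```python
# A toy illustration (mine, with an invented mini-language) of clause (i):
# formation rules deciding which strings are well-formed formulas.
ATOMS = {'p', 'q', 'r'}          # the vocabulary's atomic sentence letters

def wff(s):
    """Well-formed: an atom, a negation ~A, or a parenthesized conjunction (A&B)."""
    if s in ATOMS:
        return True
    if s.startswith('~'):
        return wff(s[1:])
    if s.startswith('(') and s.endswith(')'):
        body = s[1:-1]
        depth = 0
        for i, ch in enumerate(body):          # locate the main connective
            if ch == '(':
                depth += 1
            elif ch == ')':
                depth -= 1
            elif ch == '&' and depth == 0:
                return wff(body[:i]) and wff(body[i + 1:])
    return False

assert wff('(p&~q)') and wff('~~p')
assert not wff('p&q')            # no outer parentheses: not well-formed here
```

Clause (ii) would then be a second, independent definition over arrangements of such formulas; nothing in the formation rules themselves mentions proof.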
Some utterances of sentences such as ‘Every student failed the midterm exam’ and ‘There is no beer’ are widely held to be true in a conversation despite the facts that not every student in the world failed the midterm exam and that there is, in fact, some beer somewhere. For instance, the speaker might be talking about some particular course, or about his refrigerator. Stanley and Szabó (in Mind and Language v. 15, 2000) consider many different approaches to how contextual information might give meaning to these ‘restricted quantifier domains’, and find all of them but one wanting. The present paper argues that their considerations against one of these other theories, considerations that turn on notions of compositionality, are incorrect.
A generic statement is a type of generalization that is made by asserting that a "kind" has a certain property. For example we might hear that marshmallows are sweet. Here, we are talking about the "kind" marshmallow and assert that individual instances of this kind have the property of being sweet. Almost all of our common sense knowledge about the everyday world is put in terms of generic statements. What can make these generic sentences be true even when there are exceptions? A mass term is one that does not "divide its reference": the word water is a mass term; the word dog is a count term. In a certain vicinity, one can count and identify how many dogs there are, but it doesn't make sense to do that for water--there just is water present. The philosophical literature is rife with examples concerning how a thing can be composed of a mass, such as a statue being composed of clay. Both generic statements and mass terms have led philosophers, linguists, semanticists, and logicians to search for theories to accommodate these phenomena and relationships. The contributors to this interdisciplinary volume study the nature and use of generics and mass terms. Noted researchers in the psychology of language use material from the investigation of human performance and child-language learning to broaden the range of options open for formal semanticists in the construction of their theories, and to give credence to some of their earlier postulations--for instance, concerning different types of predications that are available for true generics and for the role of object recognition in the development of count vs. mass terms. Relevant data are also provided by investigating the ways children learn these sorts of linguistic items: children can learn how to use generic statements correctly at an early age, and children are adept at individuating objects and distinguishing them from the stuff of which they are made also at an early age.
Natural deduction is the type of logic most familiar to current philosophers, and indeed is all that many modern philosophers know about logic. Yet natural deduction is a fairly recent innovation in logic, dating from Gentzen and Jaśkowski in 1934. This article traces the development of natural deduction from the view that these founders embraced to the widespread acceptance of the method in the 1960s. I focus especially on the different choices made by writers of elementary textbooks, the standard conduits of the method to a generation of philosophers, with an eye to determining what the 'essential characteristics' of natural deduction are.
In an interesting experimental study, Bonini et al. (1999) present partial support for truth-gap theories of vagueness. We say this despite their claim to find theoretical and empirical reasons to dismiss gap theories and despite the fact that they favor an alternative, epistemic account, which they call ‘vagueness as ignorance’. We present yet more experimental evidence that supports gap theories, and argue for a semantic/pragmatic alternative that unifies the gappy supervaluationary approach with its glutty relative, the subvaluationary approach.
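The contrast between the gappy and glutty approaches can be sketched schematically. The example below is mine, not the authors' experimental materials: a vague predicate ('tall', over invented height thresholds) is modeled as a set of admissible precisifications, with supervaluationist truth as truth on all of them and subvaluationist truth as truth on some.

```python
# A schematic sketch (my own toy data): each precisification is a classical
# extension of 'tall', here a set of heights in cm counted as tall.
precisifications = [
    {170, 175, 180},
    {175, 180},
    {180},
]

def supertrue(h):   # gap theory: true iff true on every precisification
    return all(h in p for p in precisifications)

def subtrue(h):     # glut theory: true iff true on some precisification
    return any(h in p for p in precisifications)

assert supertrue(180) and subtrue(180)       # clearly tall on either account
assert not supertrue(175) and subtrue(175)   # borderline: a gap for the
                                             # supervaluationist, a glut for
                                             # the subvaluationist
assert not subtrue(160)                      # clearly not tall
```

The borderline case is exactly where the two theories diverge: 'tall(175)' is neither supertrue nor superfalse, but both subtrue and subfalse.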
In an attempt to address the theoretical gap between linguistics and philosophy, a group of semanticists, calling itself the Generic Group, has worked to develop a common view of genericity. Their research has resulted in this book, which consists of a substantive introduction and eleven original articles on important aspects of the interpretation of generic expressions. The introduction provides a clear overview of the issues and synthesizes the major analytical approaches to them. Taken together, the papers that follow reflect the current state of the art in the semantics of generics, and afford insight into various generic phenomena.
1: Linguistic and Epistemological Background. 1.1: Generic Reference vs. Generic Predication. 1.2: Why are there any Generic Sentences at all? 1.3: Generics and Exceptions, Two Bad Attitudes. 1.4: Exceptions and Generics, Some Other Attitudes. 1.5: Generics and Intensionality. 1.6: Goals of an Analysis of Generic Sentences. 1.7: A Little Notation. 1.8: Generics vs. Explicit Statements of Regularities.
previous theories and the relevance of those criticisms to the new accounts. Additionally, we have included a new section at the end, which gives some directions to literature outside of formal semantics in which the notion of mass has been employed. Here we look at work on mass expressions in psycholinguistics and computational linguistics, and we discuss some research in the history of philosophy and in metaphysics that makes use of the notion of mass.
The Principle of Semantic Compositionality (sometimes called Frege's Principle) is the principle that the meaning of a (syntactically complex) whole is a function only of the meanings of its (syntactic) parts together with the manner in which these parts were combined. This principle has been extremely influential throughout the history of formal semantics; it has had a tremendous impact upon modern linguistics ever since Montague Grammars became known; and it has more recently shown up as a guiding principle for a certain direction in cognitive science. The Principle is vague or underspecified at a number of points (such as what meaning is, what counts as a part, what counts as a syntactic complex, and what counts as combination), but this has not stopped some people from viewing The Principle as obviously true, true almost by definition. And it has not stopped other people from viewing The Principle as false, almost pernicious in its effect. And some of these latter theorists think that it is an empirically false principle while others think of it as a methodologically wrong-headed way to proceed.
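A minimal sketch of The Principle, using an invented toy grammar of my own rather than any natural-language fragment: meanings are computed bottom-up, using nothing but the meanings of the parts and the mode of combination.

```python
# A toy compositional semantics (my own invented grammar): the meaning of a
# complex is a function only of its parts' meanings and how they combine.
LEXICON = {'two': 2, 'three': 3}             # meanings of the atomic parts

def meaning(tree):
    """Compute meaning bottom-up from parts and mode of combination alone."""
    if isinstance(tree, str):                # an atomic part: look it up
        return LEXICON[tree]
    mode, left, right = tree                 # the 'manner of combination'
    if mode == 'plus':
        return meaning(left) + meaning(right)
    if mode == 'times':
        return meaning(left) * meaning(right)
    raise ValueError(mode)

assert meaning(('plus', 'two', ('times', 'three', 'three'))) == 11
```

Note what the underspecification in the abstract amounts to here: every contested notion (meaning, part, combination) had to be fixed by stipulation before the function could be written at all.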
Gentzen’s and Jaśkowski’s formulations of natural deduction are logically equivalent in the normal sense of those words. However, Gentzen’s formulation more straightforwardly lends itself both to a normalization theorem and to a theory of “meaning” for connectives. The present paper investigates cases where Jaśkowski’s formulation seems better suited. These cases range from the phenomenology and epistemology of proof construction to the ways to incorporate novel logical connectives into the language. We close with a demonstration of this latter aspect by considering a Sheffer function for intuitionistic logic.
This paper discusses the general problem of translation functions between logics, given in axiomatic form, and in particular, the problem of determining when two such logics are "synonymous" or "translationally equivalent." We discuss a proposed formal definition of translational equivalence, show why it is reasonable, and also discuss its relation to earlier definitions in the literature. We also give a simple criterion for showing that two modal logics are not translationally equivalent, and apply this to well-known examples. Some philosophical morals are drawn concerning the possibility of having two logical systems that are "empirically distinct" but are both translationally equivalent to a common logic.
In this essay I will consider two theses that are associated with Frege, and will investigate the extent to which Frege really believed them. Much of what I have to say will come as no surprise to scholars of the historical Frege. But Frege is not only a historical figure; he also occupies a site on the philosophical landscape that has allowed his doctrines to seep into the subconscious water table. And scholars in a wide variety of different scholarly establishments then sip from these doctrines. I believe that some Frege-interested philosophers at various of these establishments might find my conclusions surprising. Some of these philosophical establishments have arisen from an educational milieu in which Frege is associated with some specific doctrine at the expense of not even being aware of other milieux where other specific doctrines are given sole prominence. The two theses which I will discuss illustrate this point. Each of them is called Frege's Principle, but by philosophers from different milieux. By calling them milieux I do not want to convey the idea that they are each located at some specific socio-politico-geographico-temporal location. Rather, it is a matter of their each being located at different places on the intellectual landscape. For this reason one might (and I sometimes will) call them (interpretative) traditions.
Default reasoning occurs whenever the truth of the evidence available to the reasoner does not guarantee the truth of the conclusion being drawn. Despite this, one is entitled to draw the conclusion “by default” on the grounds that we have no information which would make us doubt that the inference should be drawn. It is the type of conclusion we draw in the ordinary world and ordinary situations in which we find ourselves. Formally speaking, ‘nonmonotonic reasoning’ refers to argumentation in which one uses certain information to reach a conclusion, but where it is possible that adding some further information to those very same premises could make one want to retract the original conclusion. It is easily seen that the informal notion of default reasoning manifests a type of nonmonotonic reasoning. Generally speaking, default statements are said to be true about the class of objects they describe, despite the acknowledged existence of “exceptional instances” of the class. In the absence of explicit information that an object is one of the exceptions we are enjoined to apply the default statement to the object. But further information may later tell us that the object is in fact one of the exceptions. So this is one of the points where nonmonotonicity resides in default reasoning.
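The nonmonotonicity described here can be made vivid with a toy knowledge base. The sketch below is my own illustration (the 'birds fly' default is the stock example, not this paper's formal system): adding information to the very same premises retracts a previously drawn conclusion.

```python
# A toy default reasoner (my own illustration): apply the default 'birds fly'
# unless the knowledge base explicitly marks the individual as exceptional.
def flies(kb, individual):
    if ('bird', individual) not in kb:
        return False                          # no evidence it is a bird at all
    if ('exception', individual) in kb:       # explicit exception blocks default
        return False
    return True                               # conclusion drawn "by default"

kb = {('bird', 'tweety')}
assert flies(kb, 'tweety')                    # default conclusion: tweety flies

kb.add(('exception', 'tweety'))               # further information arrives...
assert not flies(kb, 'tweety')                # ...and the conclusion is retracted
```

Classical consequence could never behave this way: adding a premise can only enlarge, never shrink, the set of classical conclusions. That contrast is exactly what 'nonmonotonic' marks.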
This paper investigates the strange case of an argument that was directed against a positivist verification principle. We find an early occurrence of the argument in a talk by the phenomenologist Roman Ingarden at the 1934 International Congress of Philosophy in Prague, where Carnap and Neurath were present and contributed short rejoinders. We discuss the underlying presuppositions of the argument, and we evaluate whether the attempts by Carnap actually succeed in answering this argument. We think they don’t, and offer instead a few sociological thoughts about why the argument seems to have disappeared from the profession’s evaluation of the positivist criterion of verifiability.
Strawson described ‘descriptive metaphysics’, Bach described ‘natural language metaphysics’, Sapir and Whorf describe, well, Sapir-Whorfianism. And there are other views concerning the relation between correct semantic analysis of linguistic phenomena and the “reality” that is supposed to be thereby described. I think some considerations from the analyses of the mass-count distinction can shed some light on that very dark topic.
In 1934 a most singular event occurred. Two papers were published on a topic that had (apparently) never before been written about, the authors had never been in contact with one another, and they had (apparently) no common intellectual background that would otherwise account for their mutual interest in this topic. These two papers formed the basis for a movement in logic which is by now the most common way of teaching elementary logic by far, and indeed is perhaps all that is known in any detail about logic by a number of philosophers (especially in North America). This manner of proceeding in logic is called ‘natural deduction’. And in its own way the instigation of this style of logical proof is as important to the history of logic as the discovery of resolution by Robinson in 1965, or the discovery of the logistical method by Frege in 1879, or even the discovery of the syllogistic by Aristotle in the fourth century BC.
Average‐NPs, such as the one in the title of this paper, have been claimed to be ‘linguistically identical’ to any other definite‐NPs but at the same time to be ‘semantically inconsistent’ with these other definite‐NPs. To some this is an ironclad proof of the irrelevance of semantics to linguistics. We argue that both of the initial claims are wrong: average‐NPs are not ‘linguistically identical’ to other definite‐NPs but instead show a number of interesting divergences, and we provide a plausible semantic account for them that is not ‘semantically inconsistent’ with the account afforded other definite‐NPs but in fact blends quite nicely with one standard account of the semantics for NPs.
This volume showcases an interplay between leading philosophical and linguistic semanticists on the one side, and leading cognitive and developmental psychologists on the other side. The topic is a class of outstanding questions in the semantic and logical theories of generic statements and statements that employ mass terms, approached by looking to the cognitive abilities of speakers and of child language-learners.
The logic of paradox, LP, is a first-order, three-valued logic that has been advocated by Graham Priest as an appropriate way to represent the possibility of acceptable contradictory statements. Second-order LP is that logic augmented with quantification over predicates. As with classical second-order logic, there are different ways to give the semantic interpretation of sentences of the logic. The different ways give rise to different logical advantages and disadvantages, and we canvass several of these, concluding that it will be extremely difficult to appeal to second-order LP for the purposes that its proponents advocate, until some deep, intricate, and hitherto unarticulated metaphysical advances are made.
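For concreteness, the propositional core of LP can be sketched as follows. This is the standard presentation (the numeric encoding is my own convenience): three values, with both the true value and the 'both' value designated, and strong-Kleene-style tables for the connectives.

```python
# Propositional LP in miniature: values T (true), B (both), F (false),
# with T and B designated. Encoded numerically so that F < B < T.
T, B, F = 1.0, 0.5, 0.0

def neg(x): return 1.0 - x            # negation swaps T and F, fixes B
def conj(x, y): return min(x, y)      # conjunction: the minimum
def disj(x, y): return max(x, y)      # disjunction: the maximum
def designated(x): return x >= 0.5    # a sentence is accepted iff T or B

# The characteristic feature: a contradiction p & ~p can be designated...
p = B
assert designated(conj(p, neg(p)))
# ...yet explosion fails: accepting p & ~p does not force accepting any q.
q = F
assert not designated(q)
```

The paper's topic, second-order LP, adds predicate quantification on top of (the first-order extension of) these tables; the interpretive choices it canvasses concern that quantification, not the tables themselves.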
Fuzzy logics are systems of logic with infinitely many truth values. Such logics have been claimed to have an extremely wide range of applications in linguistics, computer technology, psychology, etc. In this note, we canvass the known results concerning infinitely many valued logics; make some suggestions for alterations of the known systems in order to accommodate what modern devotees of fuzzy logic claim to desire; and we prove some theorems to the effect that there can be no fuzzy logic which will do what its advocates want. Finally, we suggest ways to accommodate these desires in finitely many valued logics.
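For reference, the most familiar infinitely-many-valued system, Łukasiewicz logic, can be sketched as follows (a standard semantics, offered only as background, not as the paper's specific target): truth values are reals in [0, 1], with 1 the sole designated value.

```python
# Standard Łukasiewicz infinite-valued connectives on [0, 1].
def neg(x): return 1.0 - x
def conj(x, y): return min(x, y)
def disj(x, y): return max(x, y)
def imp(x, y): return min(1.0, 1.0 - x + y)   # the Łukasiewicz conditional

# Excluded middle is not a tautology: p v ~p can take value 0.5.
p = 0.5
assert disj(p, neg(p)) == 0.5

# But p -> p always takes the designated value 1, whatever p's value.
# (The sample points here are binary fractions, so the arithmetic is exact.)
assert all(imp(v, v) == 1.0 for v in (0.0, 0.25, 0.5, 1.0))
```

Sentences like 'p v ~p', true in every finitely-valued classical setting but undesignated here, are the kind of datum around which limitative results about fuzzy logics are typically built.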