This volume on the semantic complexity of natural language explores the question of why some sentences are more difficult than others. In doing so, it lays the groundwork for extending semantic theory with computational and cognitive aspects by combining linguistics and logic with computation and cognition.

Quantifier expressions occur whenever we describe the world and communicate about it. Generalized quantifier theory is therefore one of the basic tools of linguistics today, studying the possible meanings and the inferential power of quantifier expressions by logical means. The classic version was developed in the 1980s, at the interface of linguistics, mathematics, and philosophy. Whereas earlier advances in "classic" generalized quantifier theory mainly focused on logical questions and their applications to linguistics, this volume adds a computational component, the third pillar of language use and logical activity. This book is essential reading for researchers in linguistics, philosophy, cognitive science, logic, AI, and computer science.
We examine the verification of simple quantifiers in natural language from a computational-model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies and provides evidence for the claim that human linguistic abilities are constrained by computational complexity.
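To make the automata distinction concrete, here is a minimal Python sketch (my illustration, not the study's materials; all names are mine). Aristotelian quantifiers such as "every" and "some" are verifiable by two-state finite automata, while a proportional quantifier like "most" needs unbounded memory, simulated here by a counter standing in for a push-down store:

```python
# Sketch: each element of the restrictor set is coded as 1 if it also
# belongs to the scope set, 0 otherwise.

def verify_every(bits):
    # Two-state finite automaton: reject as soon as a 0 is seen.
    state = "accept"
    for b in bits:
        if b == 0:
            state = "reject"
    return state == "accept"

def verify_some(bits):
    # Two-state finite automaton: accept as soon as a 1 is seen.
    state = "reject"
    for b in bits:
        if b == 1:
            state = "accept"
    return state == "accept"

def verify_most(bits):
    # No finite automaton suffices: a single counter simulates the
    # push-down store (push on 1, pop on 0).
    counter = 0
    for b in bits:
        counter += 1 if b == 1 else -1
    return counter > 0

# "Most As are B" in a model with |A| = 5 and |A ∩ B| = 3:
print(verify_most([1, 0, 1, 1, 0]))  # True
```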
One of the great successes of the application of generalized quantifiers to natural language has been the ability to formulate robust semantic universals. When such a universal is attested, the question arises as to its source. In this paper, we explore the hypothesis that many semantic universals arise because expressions satisfying the universal are easier to learn than those that do not. While the idea that learnability explains universals is not new, explicit accounts of learning that can make good on this hypothesis are few and far between. We propose such a model of learning: back-propagation through a recurrent neural network. In particular, we discuss the universals of monotonicity, quantity, and conservativity, and perform computational experiments training such a network to verify quantifiers. Our results explain monotonicity and quantity quite well. We suggest that conservativity may have a different source than the other universals.
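To pin the three universals down operationally, the following Python sketch (mine, not the paper's model; the universe size and all names are illustrative assumptions) represents a quantifier as a function of two finite sets and brute-force checks conservativity, upward monotonicity in the second argument, and quantity (invariance under permutations of the universe):

```python
from itertools import combinations, permutations

U = list(range(4))  # a small universe suffices for brute-force checks

def subsets(s):
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def most(A, B):
    return len(A & B) > len(A - B)

def is_conservative(Q):
    # Q(A, B) <=> Q(A, A ∩ B) for all A, B over U.
    return all(Q(A, B) == Q(A, A & B)
               for A in subsets(U) for B in subsets(U))

def is_upward_monotone(Q):
    # If Q(A, B) holds and B ⊆ B', then Q(A, B') holds.
    return all(not Q(A, B) or Q(A, Bp)
               for A in subsets(U) for B in subsets(U)
               for Bp in subsets(U) if B <= Bp)

def satisfies_quantity(Q):
    # Truth depends only on cardinalities: it is preserved under
    # every permutation of the universe.
    def image(S, p):
        return frozenset(p[x] for x in S)
    return all(Q(A, B) == Q(image(A, p), image(B, p))
               for perm in permutations(U)
               for p in [dict(zip(U, perm))]
               for A in subsets(U) for B in subsets(U))

print(is_conservative(most),
      is_upward_monotone(most),
      satisfies_quantity(most))  # True True True
```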
In this dissertation we study the complexity of generalized quantifiers in natural language. Our perspective is interdisciplinary: we combine philosophical insights with theoretical computer science, experimental cognitive science, and linguistic theories.

In Chapter 1 we argue for identifying a part of meaning, the so-called referential meaning (model-checking), with algorithms. Moreover, we discuss the influence of computational complexity theory on cognitive tasks. We give some arguments for treating as cognitively tractable only those problems which can be computed in polynomial time. Additionally, we suggest that plausible semantic theories of the everyday fragment of natural language can be formulated in the existential fragment of second-order logic.

In Chapter 2 we give an overview of the basic notions of generalized quantifier theory, computability theory, and descriptive complexity theory.

In Chapter 3 we prove that PTIME quantifiers are closed under iteration, cumulation, and resumption. Next, we discuss the NP-completeness of branching quantifiers. Finally, we show that some Ramsey quantifiers define NP-complete classes of finite models while others stay in PTIME. We also give a sufficient condition for a Ramsey quantifier to be computable in polynomial time.

In Chapter 4 we investigate the computational complexity of polyadic lifts expressing various readings of reciprocal sentences with quantified antecedents. We show a dichotomy between these readings: the strong reciprocal reading can create NP-complete constructions, while the weak and the intermediate reciprocal readings do not. Additionally, we argue that this difference should be acknowledged in the Strong Meaning Hypothesis.

In Chapter 5 we study the definability and complexity of the type-shifting approach to collective quantification in natural language. We show that under reasonable complexity assumptions it is not general enough to cover the semantics of all collective quantifiers in natural language. The type-shifting approach cannot lead outside second-order logic, and arguably some collective quantifiers are not expressible in second-order logic. As a result, we argue that algebraic (many-sorted) formalisms dealing with collectivity are more plausible than the type-shifting approach. Moreover, we suggest that some collective quantifiers might not be realized in everyday language due to their high computational complexity. Additionally, we introduce the so-called second-order generalized quantifiers to the study of collective semantics.

In Chapter 6 we study the statement known as Hintikka's thesis: that the semantics of sentences like "Most boys and most girls hate each other" is not expressible by linear formulae, and one needs to use branching quantification. We discuss possible readings of such sentences and conclude that they are expressible by linear formulae, contrary to what Hintikka claims. Next, we present empirical evidence confirming our theoretical prediction that these sentences are sometimes interpreted by people as having the conjunctional reading.

In Chapter 7 we discuss a computational semantics for monadic quantifiers in natural language. We recall that it can be expressed in terms of finite-state and push-down automata. Then we present and criticize the neurological research building on this model. The discussion leads to a new experimental set-up which provides empirical evidence confirming the complexity predictions of the computational model. We show that the differences in reaction time needed for comprehension of sentences with monadic quantifiers are consistent with the complexity differences predicted by the model.

In Chapter 8 we discuss some general open questions and possible directions for future research, e.g., using different measures of complexity, involving game theory, and so on.

In general, our research explores, from different perspectives, the advantages of identifying meaning with algorithms and applying computational complexity analysis to semantic issues. It shows the fruitfulness of such an abstract computational approach for linguistics and cognitive science.
We study the computational complexity of polyadic quantifiers in natural language. This type of quantification is widely used in formal semantics to model the meaning of multi-quantifier sentences. First, we show that the standard constructions that turn simple determiners into complex quantifiers, namely Boolean operations, iteration, cumulation, and resumption, are tractable. Then, we provide insight into the branching operation, which yields intractable natural language multi-quantifier expressions. Next, we focus on a linguistic case study: we use computational complexity results to investigate semantic distinctions between quantified reciprocal sentences, and we show a computational dichotomy between different readings of reciprocity. Finally, we venture into philosophical speculation on meaning, ambiguity, and computational complexity. In particular, we investigate the possibility of revising the Strong Meaning Hypothesis with complexity aspects to better account for meaning shifts in the domain of multi-quantifier sentences. The paper not only contributes to the field of formal semantics but also illustrates how the tools of computational complexity theory might be successfully used in linguistics and philosophy with an eye towards cognitive science.
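As a hedged illustration of the tractable lifts (my sketch, not the paper's formalism; quantifiers are simplified to functions from a success count and a domain size to truth values, and all names are mine), iteration and cumulation reduce to counting passes over the relation, which is visibly polynomial:

```python
def every(k, n): return k == n
def some(k, n):  return k >= 1
def most(k, n):  return k > n - k

def iteration(Q1, Q2, dom1, dom2, R):
    # Q1 x Q2 y R(x, y): count the x's whose R-row satisfies Q2.
    good = sum(1 for x in dom1
               if Q2(sum(1 for y in dom2 if (x, y) in R), len(dom2)))
    return Q1(good, len(dom1))

def cumulation(Q1, Q2, dom1, dom2, R):
    # Q1 x ∃y R(x, y) and Q2 y ∃x R(x, y).
    xs = sum(1 for x in dom1 if any((x, y) in R for y in dom2))
    ys = sum(1 for y in dom2 if any((x, y) in R for x in dom1))
    return Q1(xs, len(dom1)) and Q2(ys, len(dom2))

boys, girls = {"b1", "b2", "b3"}, {"g1", "g2"}
knows = {("b1", "g1"), ("b2", "g1"), ("b2", "g2")}
print(iteration(most, some, boys, girls, knows))    # "Most boys know some girl": True
print(cumulation(some, every, boys, girls, knows))  # cumulative reading: True
```

Branching, by contrast, requires searching for witness sets rather than counting, which is where the intractability discussed in the paper enters.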
We consider the notion of everyday language. We claim that everyday language is semantically bounded by the properties expressible in the existential fragment of second-order logic. Two arguments for this thesis are formulated. Firstly, we show that the so-called Barwise test of negation normality works properly only when our main thesis is assumed. Secondly, we discuss the argument from practical computability for finite universes. Everyday language sentences are directly or indirectly verifiable. We show that in both cases they are bounded by second-order existential properties. Moreover, there are known examples of everyday language sentences which are the most difficult in this class (NPTIME-complete).
We discuss the thesis formulated by Hintikka (1973) that certain natural language sentences require non-linear quantification to express their meaning. We investigate sentences with combinations of quantifiers similar to Hintikka's examples and propose a novel alternative reading expressible by linear formulae. This interpretation is based on linguistic and logical observations. We report on our experiments showing that people tend to interpret sentences similar to the Hintikka sentence in a way consistent with our interpretation.
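A small Python sketch (my reconstruction under simplifying assumptions: "hate each other" is coded as a set of mutual boy-girl pairs, and all names are illustrative) contrasts the branching reading Hintikka proposed with the linear, conjunctional reading argued for here, for "Most boys and most girls hate each other":

```python
from itertools import combinations

def most(k, n): return k > n - k

def conjunctional(boys, girls, R):
    # Linear reading: most boys hate most girls, and most girls hate
    # most boys.
    bg = sum(1 for b in boys
             if most(sum(1 for g in girls if (b, g) in R), len(girls)))
    gb = sum(1 for g in girls
             if most(sum(1 for b in boys if (b, g) in R), len(boys)))
    return most(bg, len(boys)) and most(gb, len(girls))

def branching(boys, girls, R):
    # Branching reading: there are witness sets A ⊆ boys, B ⊆ girls,
    # each a strict majority, with A × B ⊆ R. Note the brute-force
    # search over witness sets, the source of the combinatorial cost.
    majA = [set(c) for r in range(len(boys) // 2 + 1, len(boys) + 1)
            for c in combinations(boys, r)]
    majB = [set(c) for r in range(len(girls) // 2 + 1, len(girls) + 1)
            for c in combinations(girls, r)]
    return any(all((b, g) in R for b in A for g in B)
               for A in majA for B in majB)

boys, girls = {"b1", "b2", "b3"}, {"g1", "g2", "g3"}
hate = {("b1", "g1"), ("b1", "g2"), ("b2", "g1"), ("b2", "g2")}
print(conjunctional(boys, girls, hate), branching(boys, girls, hate))
```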
We analyse the computational complexity of comparing informational structures. Intuitively, we study the complexity of deciding queries such as the following: Is Alice’s epistemic information strictly coarser than Bob’s? Do Alice and Bob have the same knowledge about each other’s knowledge? Is it possible to manipulate Alice in such a way that she will have the same beliefs as Bob? The results show that these problems lie on both sides of the border between tractability (P) and intractability (NP-hard). In particular, we investigate the impact of assuming information structures to be partition-based (rather than arbitrary relational structures) on the complexity of various problems. We focus on the tractability of concrete epistemic tasks and not on the epistemic logics describing them.
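For the partition-based case, the first query above has a transparently polynomial check. A minimal sketch (mine; partitions are modeled as lists of disjoint blocks, and all names are assumptions):

```python
def refines(P, Q):
    # Every block of partition P fits inside some block of partition Q.
    return all(any(block <= other for other in Q) for block in P)

def strictly_coarser(alice, bob):
    # Alice's information is strictly coarser than Bob's: Bob's
    # partition refines Alice's, but not conversely.
    return refines(bob, alice) and not refines(alice, bob)

# Partitions of the state space {1, 2, 3, 4}:
alice = [frozenset({1, 2}), frozenset({3, 4})]
bob = [frozenset({1}), frozenset({2}), frozenset({3, 4})]
print(strictly_coarser(alice, bob))  # True
```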
We compared the processing of natural language quantifiers in a group of patients with schizophrenia and a healthy control group. In both groups, the difficulty of the quantifiers was consistent with computational predictions, and patients with schizophrenia took more time to solve the problems. However, they were significantly less accurate only with proportional quantifiers, such as "more than half". This can be explained by noting that, according to the complexity perspective, only proportional quantifiers require working memory engagement.
The paper presents experimental evidence on differences in sentence-picture verification under additional memory load between parity and proportional quantifiers. We asked subjects to memorize strings of 4 or 6 digits, then to decide whether a quantifier sentence is true of a given picture, and finally to recall the initially given string of digits. The results show that: (a) proportional quantifiers are more difficult than parity quantifiers with respect to reaction time and accuracy; (b) maintaining either 4 or 6 elements in working memory has the same effect on the processing of parity quantifiers; (c) in the case of proportional quantifiers, however, subjects performed better in the verification tasks under the 6-digit load condition; and (d) even though the 4-digit strings were recalled better after judging parity quantifiers, there is no difference between the quantifier types in the 6-digit condition. We briefly outline two alternative explanations for the observed phenomena, rooted in the computational model of quantifier verification and in different theories of working memory.
We discuss the paper by McMillan et al. (2005) devoted to studying brain activity during the comprehension of sentences with generalized quantifiers. According to the authors, their results verify a particular computational model of natural language quantifier comprehension posited by several linguists and logicians (e.g., see van Benthem, 1986). We challenge this claim by invoking the computational difference between first-order quantifiers and divisibility quantifiers (e.g., see Mostowski, 1998). Moreover, we suggest other studies on quantifier comprehension that could shed more light on the role of working memory in processing quantifiers.
We overview logical and computational explanations of the notion of tractability as applied in cognitive science. We start by introducing the basics of the mathematical theories of complexity: computability theory, computational complexity theory, and descriptive complexity theory. Computational philosophy of mind often identifies mental algorithms with computable functions. However, with the development of programming practice it has become apparent that for some computable problems finding effective algorithms is hardly possible. Some problems need too many computational resources, e.g., time or memory, to be practically computable. Computational complexity theory is concerned with the amount of resources required for the execution of algorithms and, hence, with the inherent difficulty of computational problems. An important goal of computational complexity theory is to categorize computational problems via complexity classes and, in particular, to identify efficiently solvable problems and draw a line between tractability and intractability.

We survey how complexity can be used to study the computational plausibility of cognitive theories. We especially emphasize the methodological and mathematical assumptions behind applying complexity theory in cognitive science. We pay special attention to examples of applying the logical and computational complexity toolbox in different domains of cognitive science, focusing mostly on theoretical and experimental research in psycholinguistics and social cognition.
The paper presents a study examining the role of working memory in quantifier verification. We created situations similar to the span task in order to compare numerical quantifiers of low and high rank, parity quantifiers, and proportional quantifiers. The results enrich and support the data obtained previously, as well as the predictions drawn from a computational model.
We discuss the thesis formulated by Hintikka that certain natural language sentences require non-linear quantification to express their meaning. We investigate sentences with combinations of quantifiers similar to Hintikka's examples and propose a novel alternative reading expressible by linear formulae. This interpretation is based on linguistic and logical observations. We report on our experiments showing that people tend to interpret sentences similar to the Hintikka sentence in a way consistent with our interpretation.
The paper presents two case studies of multi-agent information exchange involving generalized quantifiers. We focus on scenarios in which agents successfully converge to knowledge on the basis of information about the knowledge of others: the so-called Muddy Children puzzle and the Top Hat puzzle. We investigate the relationship between certain invariance properties of quantifiers and the successful convergence to knowledge in such situations. We generalize the scenarios to account for public announcements with arbitrary quantifiers. We show that the Muddy Children puzzle is solvable for any number of agents if and only if the quantifier in the announcement is positively active (satisfies a version of the variety condition). In order to obtain the characterization result, we propose a new concise logical modeling of the puzzle based on the number-triangle representation of generalized quantifiers. In a similar vein, we also study the Top Hat puzzle. We observe that in this case an announcement needs to satisfy stronger conditions in order to guarantee solvability. Hence, we introduce a new property, called bounded thickness, and show that the solvability of the Top Hat puzzle for an arbitrary number of agents is equivalent to the announcement being 1-thick.
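The classical dynamics that the paper generalizes can be simulated directly. In the sketch below (mine, not the paper's number-triangle modeling; all names are illustrative), worlds are bit-vectors recording who is muddy, the father's announcement is a quantifier applied to the number of muddy children, and each round eliminates the worlds incompatible with the observed pattern of "I know"/"I don't know" answers; the `announcement` parameter is the slot the paper fills with arbitrary generalized quantifiers:

```python
from itertools import product

def at_least_one(k, n):  # the classical announcement
    return k >= 1

def muddy_children(actual, announcement):
    n = len(actual)
    worlds = {w for w in product((0, 1), repeat=n)
              if announcement(sum(w), n)}

    def knows(i, w, ws):
        # Child i knows her state at w iff all worlds in ws agreeing
        # with w outside coordinate i give i the same state.
        vals = {v[i] for v in ws
                if all(v[j] == w[j] for j in range(n) if j != i)}
        return len(vals) == 1

    for round_no in range(1, n + 1):
        pattern = tuple(knows(i, actual, worlds) for i in range(n))
        if all(pattern[i] for i in range(n) if actual[i]):
            return round_no  # every muddy child now knows
        # Everyone observes who announced knowledge; keep only worlds
        # that would have produced the same pattern of answers.
        worlds = {w for w in worlds
                  if tuple(knows(i, w, worlds) for i in range(n)) == pattern}
    return None

# Three children, two muddy: the muddy ones know in round 2.
print(muddy_children((1, 1, 0), at_least_one))  # 2
```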
We consider collective quantification in natural language. For many years the common strategy in formalizing collective quantification has been to define the meanings of collective determiners, quantifying over collections, using certain type-shifting operations. These type-shifting operations, i.e., lifts, define the collective interpretations of determiners systematically from the standard meanings of quantifiers. All the lifts considered in the literature turn out to be definable in second-order logic. We argue that second-order definable quantifiers are probably not expressive enough to formalize all collective quantification in natural language.
We study a generalization of the Muddy Children puzzle, allowing public announcements with arbitrary generalized quantifiers. We propose a new concise logical modeling of the puzzle based on the number-triangle representation of quantifiers. Our general aim is to discuss the possibility of epistemic modeling tailored to specific informational dynamics. Moreover, we show that the puzzle is solvable for any number of agents if and only if the quantifier in the announcement is positively active (satisfies a form of the variety condition).
Szymanik (2007) suggested that the distinction between first-order and higher-order quantifiers does not coincide with the computational resources required to compute the meanings of quantifiers. The cognitive difficulty of quantifier processing might be better assessed on the basis of the complexity of the minimal corresponding automata. For example, both logical and numerical quantifiers are first-order. However, computational devices recognizing logical quantifiers have a fixed number of states, while the number of states in automata corresponding to numerical quantifiers grows with the rank of the quantifier. This observation partially explains the differences in processing between those two types of quantifiers (Troiani et al. 2009) and links them to the computational model. Taking this perspective, we suggest an experimental setting extending those of McMillan et al. (2005) and Troiani et al. (2009).
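The state-count observation is easy to make concrete. A sketch (mine, not the cited studies' materials): the minimal acceptor for "at least k" has k + 1 states and thus grows with the rank k, while logical quantifiers such as "some" (= "at least 1") keep a fixed two-state automaton:

```python
def at_least_k_dfa(k):
    # States 0..k count the 1s seen so far, saturating at the
    # accepting state k. Inputs: 1 = element is in the scope set.
    states = list(range(k + 1))
    step = lambda state, symbol: min(state + symbol, k)
    return states, step, k  # state set, transition, accepting state

def run(dfa, bits):
    states, step, accept = dfa
    state = 0
    for b in bits:
        state = step(state, b)
    return state == accept

print(len(at_least_k_dfa(1)[0]))  # 2 states: "some"
print(len(at_least_k_dfa(7)[0]))  # 8 states: "at least 7"
print(run(at_least_k_dfa(3), [1, 0, 1, 1]))  # True
```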
Natural language sentences that talk about two or more sets of entities can be assigned various readings. The ones in which the sets are independent of one another are particularly challenging from the formal point of view. In this paper we call them ‘Independent Set (IS) readings’. Cumulative and collective readings are paradigmatic examples of IS readings. Most approaches aiming at representing the meaning of IS readings implement some kind of maximality condition on the witness sets involved. Two kinds of maximization have been proposed in the literature: ‘Local’ and ‘Global’ maximization. In this paper, we present an online questionnaire whose results appear to support Local maximization. The latter seems to capture the proper interplay between the semantics and the pragmatics of multi-quantifier sentences, provided that witness sets are selected on pragmatic grounds.
We discuss Hintikka’s Thesis [Hintikka 1973] that there exist natural language sentences which require non-linear quantification to express their logical form.
Theory of mind refers to the human capacity for reasoning about others’ mental states based on observations of their actions and of unfolding events. This type of reasoning is notorious in the cognitive science literature for its presumed computational intractability. A possible reason could be that it may involve higher-order thinking. To investigate this, we formalize theory-of-mind reasoning as the updating of beliefs about beliefs using dynamic epistemic logic, as this formalism allows us to parameterize the ‘order of thinking’. We prove that theory-of-mind reasoning, so formalized, is indeed intractable. Using parameterized complexity we prove, however, that the ‘order parameter’ is not a source of intractability. We furthermore consider a set of alternative parameters and investigate which of them are sources of intractability. We discuss the implications of these results for the understanding of theory of mind.
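A toy sketch of the ‘order of thinking’ as recursion depth (my encoding, not the paper's dynamic epistemic formalization; the coin scenario, agent names, and relations are illustrative assumptions):

```python
def believes(agent, phi, world, R):
    # Agent believes phi at world iff phi holds in every world the
    # agent considers possible there.
    return all(phi(v) for v in R[agent][world])

# Two worlds: the coin shows heads in w0 and tails in w1.
heads = lambda w: w == "w0"
R = {
    "alice": {"w0": {"w0"}, "w1": {"w1"}},              # Alice sees the coin
    "bob":   {"w0": {"w0", "w1"}, "w1": {"w0", "w1"}},  # Bob does not
}

# First order: Alice believes heads at w0.
print(believes("alice", heads, "w0", R))  # True
# Second order: Bob believes that Alice believes heads.
print(believes("bob", lambda w: believes("alice", heads, w, R), "w0", R))  # False
```

Each extra order adds one level of nesting; the parameterized-complexity result says this depth, by itself, is not what makes the problem hard.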
The problem of the computational complexity of semantics for some natural language constructions, considered in [M. Mostowski, D. Wojtyniak 2004], motivates an interest in the complexity of Ramsey quantifiers in finite models. In general, a sentence with a Ramsey quantifier R of the form Rx,y H(x,y) is interpreted as ∃A (A is big relative to the universe ∧ A² ⊆ H). In the paper cited, the problem of the complexity of the Hintikka sentence is reduced to the problem of the computational complexity of the Ramsey quantifier for which the phrase "A is big relative to the universe" is interpreted as containing at least one representative of each equivalence class, for some given equivalence relation. In this work we consider quantifiers R_f, for which "A is big relative to the universe" means card(A) > f(n), where n is the size of the universe. Following [Blass, Gurevich 1986], we call R mighty if Rx,y H(x,y) defines an NP-complete class of finite models. Similarly, we say that R_f is NP-hard if the corresponding class is NP-hard. We prove the following theorems.
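The model-checking problem for such R_f has an obvious brute-force form; a sketch (mine, with illustrative names) showing why, for f(n) around n/2, the problem amounts to finding a large clique:

```python
from itertools import combinations

def ramsey(universe, H, f):
    # Is there a set A with card(A) > f(n) and A × A ⊆ H?
    n = len(universe)
    target = f(n) + 1
    return any(all((a, b) in H for a in A for b in A)
               for A in combinations(universe, target))

U = [0, 1, 2, 3]
# H contains all pairs (including loops) over {0, 1, 2}: a triangle.
H = {(a, b) for a in (0, 1, 2) for b in (0, 1, 2)}
print(ramsey(U, H, lambda n: n // 2))  # True: A = {0, 1, 2}, and 3 > 2
```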
In three experiments, we investigated the computational complexity of German reciprocal sentences with different quantificational antecedents. Building upon the tractable cognition thesis (van Rooij, 2008) and its application to the verification of quantifiers (Szymanik, 2010), we predicted complexity differences among these sentences. Reciprocals with all-antecedents are expected to preferably receive a strong interpretation (Dalrymple et al., 1998), but reciprocals with proportional or numerical quantifier antecedents should be interpreted weakly. Experiment 1, in which participants completed pictures according to their preferred interpretation, provides evidence for these predictions. Experiment 2 was a picture verification task. Its results show that the strong interpretation was in fact possible for tractable "all but one"-reciprocals, but not for "exactly n". The last experiment manipulated the monotonicity of the quantifier antecedents.
The quantifying determiners "most" and "more than half" are standardly assumed to have the same truth-conditional meaning. Much work builds on this assumption in studying how the two quantifiers are mentally encoded and processed. There is, however, empirical evidence that "most" is sometimes interpreted as 'significantly more than half'. Is this difference between "most" and "more than half" a pragmatic effect, or is the standard assumption that the two quantifiers are truth-conditionally equivalent wrong? We report two experiments which demonstrate that "most" preserves the 'significantly more than half' interpretation in negative environments, which we argue speaks in favor of there being a difference between the two quantifiers at the level of truth conditions.
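The two candidate truth conditions are easy to state side by side; in this toy sketch, the margin value is purely my illustrative assumption, standing in for a contextual threshold:

```python
def more_than_half(ab, a):
    # |A ∩ B| > |A| / 2: the bare-majority truth condition.
    return ab > a / 2

def most_strengthened(ab, a, margin=0.15):
    # 'Significantly more than half': majority by a contextual margin.
    return ab > a * (0.5 + margin)

# 52 of 100 As are B: a bare majority.
print(more_than_half(52, 100))     # True
print(most_strengthened(52, 100))  # False under this margin
```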
Among the readings available for natural language sentences, those where two or more sets of entities are independent of one another are particularly challenging from both a theoretical and an empirical point of view. Such readings are termed here ‘Independent Set (IS) readings’. Standard examples are the well-known collective and cumulative readings. Robaldo (2011) proposes a logical framework that can properly represent the meaning of IS readings in terms of a set-Skolemization of the witness sets. One of the main assumptions of Robaldo's logical framework, drawn from Schwarzschild (1996), is that pragmatics plays a crucial role in the identification of such witness sets. These are first identified on pragmatic grounds; then logical clauses are asserted over them in order to trigger the appropriate inferences. In this paper, we present the results of an experimental analysis that appears to confirm Robaldo's hypotheses concerning the pragmatic identification of witness sets.
The explanatory power of logic is vast, and it has therefore proved a valuable tool for many disciplines, including the building blocks of cognitive science, such as philosophy, computer science, mathematics, artificial intelligence, and linguistics. Logic has a great track record of providing interesting insights by means of formalization, and as such it is very useful in disambiguating psychological theories. Logically formalized cognitive theories are not only the source of unequivocal experimental hypotheses, but they also lend themselves naturally to computational modeling. Most importantly, modern logic has at its service a rich variety of tools to assess and compare such psychological theories. This toolbox can be utilized to evaluate cognitive models along the following dimensions: logical relationships, for example, incompatibility or identity of models; explanatory power, for example, what can be expressed by means of a model; and computational plausibility.
We study the definability of second-order generalized quantifiers. We show that the question of whether a second-order generalized quantifier $Q_1$ is definable in terms of another quantifier $Q_2$, the base logic being monadic second-order logic, reduces to the question of whether a quantifier $Q^{\star}_1$ is definable in $\mathrm{FO}(Q^{\star}_2, <, +, \times)$ for certain first-order quantifiers $Q^{\star}_1$ and $Q^{\star}_2$. We use our characterization to show new definability and non-definability results for second-order generalized quantifiers. In particular, we show that the monadic second-order majority quantifier $\mathrm{Most}^1$ is not definable in second-order logic.
This paper surveys applications of logical methods in the cognitive sciences. Special attention is paid to non-monotonic logics and complexity theory. We argue that these particular tools have been useful in clarifying the debate between symbolic and connectionist models of cognition.
We study the computational complexity of reciprocal sentences with quantified antecedents. We observe a computational dichotomy between different interpretations of reciprocity, and shed some light on the status of the so-called Strong Meaning Hypothesis.
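A sketch of where the dichotomy comes from (mine, not the paper's definitions; names are illustrative). For a fixed antecedent set, both readings are cheap to verify, but the strong reading demands mutual relations between all pairs, so once the antecedent is itself quantified ("most of the pirates"), verifying it means searching for a large clique, while the weak reading survives as a polynomial scan:

```python
def strong_reciprocal(A, R):
    # Every two distinct members of A stand in R to each other.
    return all((x, y) in R and (y, x) in R
               for x in A for y in A if x != y)

def weak_reciprocal(A, R):
    # Every member of A participates in R with some other member.
    return all(any((x, y) in R or (y, x) in R for y in A if y != x)
               for x in A)

A = {"a", "b", "c"}
R = {("a", "b"), ("b", "a"), ("b", "c")}
print(strong_reciprocal(A, R), weak_reciprocal(A, R))  # False True
```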
We compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies, and provides evidence for the claim that human linguistic abilities are constrained by computational complexity.
The examination of quantifiers plays an essential role in modern linguistic theories. One of the most important issues in this respect was raised by Jaakko Hintikka, who proposed the following thesis: certain natural language sentences require essentially non-linear quantification to adequately express their logical form.
One of the interesting problems in the theory of language is that of describing and explaining the mechanisms responsible for our ability to understand sentences. A description of the mechanism of linguistic competence, which we can refer to as semantic competence, is necessary for understanding the phenomenon of language. For to use a language is not only to use a certain vocabulary and grammatical rules, but above all to associate certain meanings with certain expressions. For example, when I say "rana", it is the intended meaning that decides whether I am using Polish or Latin, i.e., whether what I had in mind was a wound or a frog.