Open peer commentary on the target article “How and Why the Brain Lays the Foundations for a Conscious Self” by Martin V. Butz. Excerpt: In this commentary on Martin V. Butz’s target article I am especially concerned with his remarks about language (§33, §§71–79, §91) and modularity (§32, §41, §48, §81, §§94–98). In that context, I would like to bring into discussion my own work on computational models of self-monitoring (cf. Neumann 1998, 2004). In this work I explore the idea of an anticipatory drive as a substantial control device for modelling high-level complex language processes such as self-monitoring and adaptive language use. My work is grounded in computational linguistics and, as such, uses a mathematical and computational methodology. Nevertheless, it might provide some interesting aspects and perspectives for constructivism in general, and for the model proposed in Butz’s article in particular.
This book constitutes the thoroughly refereed post-proceedings of the Second International Conference on Logical Aspects of Computational Linguistics, LACL '97, held in Nancy, France in September 1997. The 10 revised full papers presented were carefully selected during two rounds of reviewing. Also included are two comprehensive invited papers. Among the topics covered are type theory, various types of grammars, linear logic, parsing, type-directed natural language processing, proof-theoretic aspects, concatenation logics, and mathematical languages.
This paper reports a procedure which I employed with two computational research instruments, the Index Thomisticus and its companion St. Thomas CD-ROM, in order to research the Thomistic axiom, ‘whatever is received is received according to the mode of the receiver.’ My procedure extends the lexicological methods developed by the pioneering creator of the Index, Roberto Busa, from single terms to a proposition. More importantly, the paper shows how the emerging results of the lexicological searches guided my formation of a philosophical thesis about the axiom’s import for Aquinas’s existential metaphysics.
This book constitutes the thoroughly refereed post-proceedings of the Third International Conference on Logical Aspects of Computational Linguistics, LACL'98, held in Grenoble, France, in December 1998. The 15 revised full papers presented together with one invited paper were carefully reviewed and selected during two rounds of refereeing from 33 submissions and 19 conference presentations. Among the topics covered are various types of grammars, categorical inference, automated reasoning, constraint handling, logical forms, dialogue semantics, unification, and proofs.
Computational techniques comparing co-occurrences of city names in texts allow the relative longitudes and latitudes of cities to be estimated algorithmically. However, these techniques have not been applied to estimate the provenance of artifacts with unknown origins. Here, we estimate the geographic origin of artifacts from the Indus Valley Civilization, applying methods commonly used in cognitive science to the Indus script. We show that these methods can accurately predict the relative locations of archeological sites on the basis of artifacts of known provenance, and we further apply these techniques to determine the most probable excavation sites of four sealings of unknown provenance. These findings suggest that inscription statistics reflect historical interactions among locations in the Indus Valley region, and they illustrate how computational methods can help localize inscribed archeological artifacts of unknown origin. The success of this method offers opportunities for the cognitive sciences in general and for computational anthropology specifically.
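The core idea above — assigning an artifact of unknown origin to the known site whose inscriptions it most resembles — can be illustrated with a minimal sketch. The sign inventories, site names, and the use of Jaccard similarity here are toy assumptions for illustration, not the paper's actual data or method.

```python
# Hedged sketch: provenance estimation from inscription statistics.
# An unknown artifact is assigned to the known site whose sign inventory
# overlaps with it most (Jaccard similarity); all names are illustrative.

def jaccard(a, b):
    """Set-overlap similarity between two sign inventories."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

known_sites = {
    "SiteA": ["fish", "jar", "arrow"],
    "SiteB": ["comb", "wheel", "jar"],
}
unknown_artifact = ["fish", "arrow", "jar"]

# Pick the site with the highest similarity to the unknown artifact.
best = max(known_sites, key=lambda s: jaccard(known_sites[s], unknown_artifact))
print(best)  # SiteA
```

The same scheme generalizes to graded co-occurrence counts by swapping the similarity function.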
There is currently much interest in bringing together the tradition of categorial grammar, and especially the Lambek calculus, with the recent paradigm of linear logic to which it has strong ties. One active research area is designing non-commutative versions of linear logic (Abrusci, 1995; Retoré, 1993) which can be sensitive to word order while retaining the hypothetical reasoning capabilities of standard (commutative) linear logic (Dalrymple et al., 1995). Some connections between the Lambek calculus and computations in groups have long been known (van Benthem, 1986) but no serious attempt has been made to base a theory of linguistic processing solely on group structure. This paper presents such a model, and demonstrates the connection between linguistic processing and the classical algebraic notions of non-commutative free group, conjugacy, and group presentations. A grammar in this model, or G-grammar, is a collection of lexical expressions which are products of logical forms, phonological forms, and inverses of those. Phrasal descriptions are obtained by forming products of lexical expressions and by cancelling contiguous elements which are inverses of each other. A G-grammar provides a symmetrical specification of the relation between a logical form and a phonological string that is neutral between parsing and generation modes. We show how the G-grammar can be oriented for each of the modes by reformulating the lexical expressions as rewriting rules adapted to parsing or generation, which then have strong decidability properties (inherent reversibility). We give examples showing the value of conjugacy for handling long-distance movement and quantifier scoping both in parsing and generation.
The paper argues that by moving from the free monoid over a vocabulary V (standard in formal language theory) to the free group over V, deep affinities between linguistic phenomena and classical algebra come to the surface, and that the consequences of tapping the mathematical connections thus established can be considerable.
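The central mechanism of the free-group move — cancelling contiguous inverse pairs in a product of lexical expressions — can be sketched in a few lines. The representation (generator symbol plus exponent ±1) and the function names are illustrative assumptions, not the papers' notation.

```python
# Hedged sketch of free-group reduction over a vocabulary V: a word is a
# product of generators and their inverses; contiguous inverse pairs cancel,
# as in G-grammar phrasal descriptions. Representation is illustrative.

def inv(x):
    """Invert a single generator: ('a', 1) <-> ('a', -1)."""
    sym, exp = x
    return (sym, -exp)

def reduce_word(word):
    """Stack-based cancellation of adjacent inverse pairs (confluent for free groups)."""
    out = []
    for x in word:
        if out and out[-1] == inv(x):
            out.pop()          # contiguous inverses cancel
        else:
            out.append(x)
    return out

# Example: a * b * b^-1 * a^-1 * c reduces to c
w = [("a", 1), ("b", 1), ("b", -1), ("a", -1), ("c", 1)]
print(reduce_word(w))  # [('c', 1)]
```

In the free monoid no such cancellation exists; the inverses are exactly what the shift from monoid to group buys.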
Some recent studies in computational linguistics have aimed to take advantage of various cues presented by punctuation marks. This short survey is intended to summarise these research efforts and, additionally, to outline a current perspective on the usage and functions of punctuation marks. We conclude by presenting an information-based framework for punctuation, influenced by treatments of several related phenomena in computational linguistics.
Narrative passages told from a character's perspective convey the character's thoughts and perceptions. We present a discourse process that recognizes characters' thoughts and perceptions in third-person narrative. An effect of perspective on reference in narrative is addressed: references in passages told from the perspective of a character reflect the character's beliefs. An algorithm that uses the results of our discourse process to understand references with respect to an appropriate set of beliefs is presented.
The recent trend in cognitive robotics experiments on language learning, symbol grounding, and related issues necessarily entails a reduction of sensorimotor aspects from those provided by a human body to those that can be realized in machines, limiting robotic models of symbol grounding in this respect. Here, we argue that there is a need for modeling work in this domain to explicitly take into account the richer human embodiment even for concrete concepts that prima facie relate merely to simple actions, and illustrate this using distributional methods from computational linguistics which allow us to investigate grounding of concepts based on their actual usage. We also argue that these techniques have applications in theories and models of grounding, particularly in machine implementations thereof. Similarly, considering the grounding of concepts in human terms may be of benefit to future work in computational linguistics, in particular in going beyond “grounding” concepts in the textual modality alone. Overall, we highlight the potential for a mutually beneficial relationship between the two fields.
Human rights discourse has been likened to a global lingua franca, and in more ways than one, the analogy seems apt. Human rights discourse is a language that is used by all yet belongs uniquely to no particular place. It crosses not only the borders between nation-states, but also the divide between national law and international law: it appears in national constitutions and international treaties alike. But is it possible to conceive of human rights as a global language or lingua franca not just in a figurative or metaphorical sense, but in a literal or linguistic sense as a legal dialect defined by distinctive patterns of word choice and usage? Does there exist a global language of human rights that transcends not only national borders, but also the divide between domestic and international law? Empirical analysis suggests that the answer is yes, but this global language comes in at least two variants or dialects. New techniques for performing automated content analysis enable us to analyze the bulk of all national constitutions over the last two centuries, together with the world’s leading regional and international human rights instruments, for patterns of linguistic similarity and to evaluate how much language, if any, they share in common. Specifically, we employ a technique known as topic modeling that disassembles texts into recurring verbal patterns. The results highlight the existence of two species or dialects of rights talk—the universalist dialect and the positive-rights dialect—both of which are global in reach and rising in popularity. The universalist dialect is generic in content and draws heavily on the type of language found in international and regional human rights instruments. It appears in particularly large doses in the constitutions of transitional states, developing states, and states that have been heavily exposed to the influence of the international community.
The positive-rights dialect, by contrast, is characterized by its substantive emphasis on positive rights of a social or economic variety, and by its prevalence in lengthier constitutions and constitutions from outside the common law world, especially those of the Spanish-speaking world. Both dialects of rights talk are truly transnational, in the sense that they appear simultaneously in national, regional, and international legal instruments and transcend the distinction between domestic and international law. Their existence attests to the blurring of the boundary between constitutional law and international law.
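The underlying measurement task — quantifying how much language two legal texts share — can be sketched with a simple bag-of-words cosine similarity, a deliberately simpler stand-in for the topic-modeling pipeline the study describes. The toy texts and the whitespace tokenizer are assumptions for illustration only.

```python
# Hedged sketch: linguistic similarity between legal texts via bag-of-words
# cosine similarity (a simpler stand-in for topic modeling). Texts are toy.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between word-count vectors of two texts."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

udhr = "everyone has the right to life liberty and security of person"
constitution = "every citizen has the right to life and to personal liberty"
unrelated = "the annual budget shall be presented to parliament in march"

# A rights clause resembles the international instrument far more than
# an unrelated procedural clause does.
print(cosine(udhr, constitution) > cosine(udhr, unrelated))  # True
```

Topic modeling goes further by decomposing such count vectors into shared latent "verbal patterns," but the raw similarity signal it exploits is of this kind.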
The article presents proofs of the context-freeness of a family of type-logical grammars, namely all grammars that are based on a uni- or multimodal logic of pure residuation, possibly enriched with the structural rules of Permutation and Expansion for binary modes.
We combine state-of-the-art techniques from computational linguistics and theorem proving to build an engine for playing text adventures, computer games with which the player interacts purely through natural language. The system employs a parser for dependency grammar and a generation system based on TAG, and has components for resolving and generating referring expressions. Most of these modules make heavy use of inferences offered by a modern theorem prover for description logic. Our game engine solves some problems inherent in classical text adventures, and is an interesting test case for the interaction between natural language processing and inference.
The open-ended character of natural languages calls for the hypothesis that humans are endowed with a recursive procedure generating sentences which are hierarchically organized. Structural relations such as c-command, expressed on hierarchical sentential representations, determine all sorts of formal and interpretive properties of sentences. The relevant computational principles are well beyond the reach of conscious introspection, so that studying such properties requires the formulation of precise formal hypotheses, and empirically testing them. This article illustrates all these aspects of linguistic research through the discussion of non-coreference effects. The article argues in favor of the formal linguistic approach based on hierarchical structures, and against alternatives based on vague notions of “analogical generalization” and/or exploiting mere linear order. In the final part, the issue of cross-linguistic invariance and variation of non-coreference effects is addressed. Keywords: Linguistic Knowledge; Morphosyntactic Properties; Unconscious Computations; Coreference; Linguistic Representations.
The rejection of behaviorism in the 1950s and 1960s led to the view, due mainly to Noam Chomsky, that language must be studied by looking at the mind and not just at behavior. It is an understatement to say that Chomskyan linguistics dominates the field. Despite being the overwhelming majority view, it has not gone unchallenged, and the challenges have focused on different aspects of the theory. What is almost universally accepted, however, is Chomsky’s view that understanding language demands a theory that posits mental states that represent rules of language. Call this claim, following Cowie (1999), Representationalism or (R). According to (R), “[e]xplaining language mastery and acquisition requires the postulation of contentful mental states and processes involving their manipulation” (Cowie, 1999, p. 154). Although (R) is nothing more than the general assumption on which cognitive psychology is founded applied to the case of language, even it has had its detractors. Critics have argued that linguistic competence should not in fact be thought of as based on the possession of a body of linguistic knowledge but should be thought of, rather, as a kind of skill. This is an important challenge because one might be inclined to think that no recognizable form of Chomskyan linguistics could withstand the falsification of (R). In this paper we attempt to show that in fact (R) could be false without doing much damage to Chomskyan linguistics at all. Indeed, it is possible that the Chomskyan position could be made more coherent by adopting the view we will sketch. Our claim, therefore, is that critics of (R) might be right, but that this does not obviously make them serious critics of the Chomskyan program.
Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures—ambiguity and distinctiveness—derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena.
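An information-theoretic ambiguity measure of the kind described can be illustrated as the entropy of a distribution over a sentence's candidate interpretations. The function name and the example probabilities are illustrative assumptions, not the paper's exact definitions.

```python
# Hedged sketch of an entropy-style "ambiguity" measure: a sentence that
# supports two readings with near-equal probability (a pun) carries more
# interpretive uncertainty than one with a single dominant reading.
# Probabilities here are made up for illustration.
import math

def ambiguity(probs):
    """Shannon entropy (in bits) of a distribution over interpretations."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(ambiguity([0.5, 0.5]))    # 1.0  -> two readings, maximally ambiguous
print(ambiguity([0.95, 0.05]))  # ~0.29 -> one reading dominates
```

A distinctiveness-style measure would additionally compare how differently the two readings distribute over the sentence's words, but entropy alone already separates pun-like from plain sentences in this toy setting.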
This article uses a 36-million word corpus of news reporting on Hurricane Katrina in the United States to explore how computer-based methods can help researchers to investigate the construction of newsworthiness. It makes use of Bednarek and Caple’s discursive approach to the analysis of news values, and is both exploratory and evaluative in nature. One aim is to test and evaluate the integration of corpus techniques in applying discursive news values analysis. We employ and evaluate corpus techniques that have not been tested previously in relation to the large-scale analysis of news values. These techniques include tagged lemma frequencies, collocation, key part-of-speech tags and key semantic tags. A secondary aim is to gain insights into how a specific happening – Hurricane Katrina – was linguistically constructed as newsworthy in major American news media outlets, thus also making a contribution to ecolinguistics.
This paper presents a study of the effect of working memory load on the interpretation of pronouns in different discourse contexts: stories with and without a topic shift. We discuss a computational model (in ACT-R, Anderson, 2007) to explain how referring expressions are acquired and used. On the basis of simulations of this model, it is predicted that WM constraints only affect adults' pronoun resolution in stories with a topic shift, but not in stories without a topic shift. This latter prediction was tested in an experiment. The results of this experiment confirm that WM load reduces adults' sensitivity to discourse cues signaling a topic shift, thus influencing their interpretation of subsequent pronouns.
Linguistic Issues in Language Technology (LiLT) is an open-access journal that focuses on the relationships between linguistic insights and language technology. In conjunction with machine learning and statistical techniques, deeper and more sophisticated models of language and speech are needed to make significant progress in both existing and newly emerging areas of computational language analysis. The vast quantity of electronically accessible natural language data (text and speech, annotated and unannotated, formal and informal) provides unprecedented opportunities for data-intensive analysis of linguistic phenomena, which can in turn enrich computational methods. Taking an eclectic view on methodology, LiLT provides a forum for this work. In this volume, contributors offer new perspectives on semantic representations for textual inference.
Lexical semantics has become a major research area within computational linguistics, drawing from psycholinguistics, knowledge representation, computer algorithms and architecture. Research programmes whose goal is the definition of large lexicons are asking what the appropriate representation structure is for different facets of lexical information. Among these facets, semantic information is probably the most complex and the least explored. Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerised lexicons for the automatic treatment of natural language, with applications to machine translation, automatic indexing, database front-ends, and knowledge extraction, among other things. It focuses on semantic issues, as seen by linguists, psychologists, and computer scientists. Besides describing academic research, it also covers ongoing industrial projects.
The Cambridge Handbook of Computational Cognitive Sciences is a comprehensive reference for this rapidly developing and highly interdisciplinary field. Written with both newcomers and experts in mind, it provides an accessible introduction to paradigms, methodologies, approaches, and models, with ample detail and illustrated by examples. It should appeal to researchers and students working within the computational cognitive sciences, as well as those working in adjacent fields including philosophy, psychology, linguistics, anthropology, education, neuroscience, artificial intelligence, computer science, and more.
This book deals with a major problem in the study of language: the problem of reference. The ease with which we refer to things in conversation is deceptive. Upon closer scrutiny, it turns out that we hardly ever tell each other explicitly what object we mean, although we expect our interlocutor to discern it. Amichai Kronfeld provides an answer to two questions associated with this: how do we successfully refer, and how can a computer be programmed to achieve this? Beginning with the major theories of reference, Dr Kronfeld provides a consistent philosophical view which is a synthesis of Frege's and Russell's semantic insights with Grice's and Searle's pragmatic theories. This leads to a set of guiding principles, which are then applied to a computational model of referring. The discussion is made accessible to readers from a number of backgrounds: in particular, students and researchers in the areas of computational linguistics, artificial intelligence and the philosophy of language will want to read this book.
Written by world-leading experts, this book draws together a number of important strands in contemporary approaches to the philosophical and scientific questions that emerge when dealing with the issues of computing, information, cognition and the conceptual issues that arise at their intersections. It discovers and develops the connections at the borders and in the interstices of disciplines and debates. This volume presents a range of essays that deal with the currently vigorous concerns of the philosophy of information, ontology creation and control, bioinformation and biosemiotics, computational and post-computation approaches to the philosophy of cognitive science, computational linguistics, ethics, and education.
Sentence comprehension - the way we process and understand spoken and written language - is a central and important area of research within psycholinguistics. This book explores the contribution of computational linguistics to the field, showing how computational models of sentence processing can help scientists in their investigation of human cognitive processes. It presents the leading computational model of retrieval processes in sentence processing, the Lewis and Vasishth cue-based retrieval model, and develops a principled methodology for parameter estimation and model comparison/evaluation using benchmark data, to enable researchers to test their own models of retrieval against the present model. It also provides readers with an overview of the last 20 years of research on the topic of retrieval processes in sentence comprehension, along with source code that allows researchers to extend the model and carry out new research. Comprehensive in its scope, this book is essential reading for researchers in cognitive science.
This volume brings together a collection of papers covering a wide range of topics in computer and cognitive science. Topics included are: the foundational relevance of logic to computer science, with particular reference to tense logic, constructive logic, and Horn clause logic; logic as the theoretical underpinnings of the engineering discipline of expert systems; a discussion of the evolution of computational linguistics into functionally distinct task levels; and current issues in the implementation of speech act theory.
This book provides a sustained and penetrating critique of a wide range of views in modern cognitive science and philosophy of the mind, from Turing's famous test for intelligence in machines to recent work in computational linguistic theory. While discussing many of the key arguments and topics, the authors also develop a distinctive analytic approach. Drawing on the methods of conceptual analysis first elaborated by Wittgenstein and Ryle, the authors seek to show that these methods still have a great deal to offer in the field of cognitive theory and the philosophy of mind, providing a powerful alternative to many of the positions put forward in the contemporary literature. Among the many issues discussed in the book are the following: the Cartesian roots of modern conceptions of mind; Searle's 'Chinese Room' thought experiment; Fodor's 'language of thought' hypothesis; the place of 'folk psychology' in cognitivist thought; and the question of whether any machine may be said to 'think' or 'understand' in the ordinary senses of these words. Wide ranging, up-to-date and forcefully argued, this book represents a major intervention in contemporary debates about the status of cognitive science and the nature of mind. It will be of particular interest to students and scholars in philosophy, psychology, linguistics and computing sciences.
We compare our model of unsupervised learning of linguistic structures, ADIOS [1, 2, 3], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive Languages). The representations learned by our algorithm are truly emergent from the (unannotated) corpus data, whereas those found in published works on cognitive and construction grammars and on TAGs are hand-tailored. Thus, our results complement and extend both the computational and the more linguistically oriented research into language acquisition. We conclude by suggesting how empirical and formal study of language can be best integrated.
Modelling with Words is an emerging modelling methodology closely related to the paradigm of Computing with Words introduced by Lotfi Zadeh. This book is an authoritative collection of key contributions to the new concept of Modelling with Words. A wide range of issues in systems modelling and analysis is presented, extending from conceptual graphs and fuzzy quantifiers to humanist computing and self-organizing maps. Among the core issues investigated are: balancing predictive accuracy and high-level transparency in learning; scaling linguistic algorithms to high-dimensional data problems; integrating linguistic expert knowledge with knowledge derived from data; identifying sound and useful inference rules; and integrating fuzzy and probabilistic uncertainty in data modelling.
The notions of argument and argumentation have become increasingly ubiquitous in Artificial Intelligence research, with various applications and interpretations. Less attention, however, has been devoted specifically to rhetorical argument. The work presented in this paper aims at bridging this gap, by proposing a framework for characterising rhetorical argumentation, based on Perelman and Olbrechts-Tyteca's New Rhetoric. The paper provides an overview of the state of the art of computational work based on, or dealing with, rhetorical aspects of argumentation, before presenting the characterisation proposed, corroborated by walked-through examples.