The purpose of this paper is twofold: (i) we argue that formal semantics may have faltered due to its failure to distinguish between two fundamentally different types of concepts, namely ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types; and (ii) we show that accounting for this difference amounts to a new formal semantics: one that integrates lexical and compositional semantics in one coherent framework, and one where formal semantics is embedded in a strongly-typed ontology that reflects our commonsense knowledge of the world and the way we talk about it in ordinary language. We then show how, in such a framework, a number of challenges in the semantics of natural language are adequately and systematically treated.
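To make the type/predicate distinction concrete, here is a minimal sketch in Haskell (the names Human, Artifact, articulate, and made are our illustrative assumptions, not drawn from the paper): ontological concepts become types, while logical concepts become predicates over objects of those types, so that category mistakes are ruled out by the type system rather than assigned a truth value.

```haskell
-- Ontological concepts as types in a strongly-typed ontology.
data Human    = Human String deriving Show
data Artifact = Artifact String deriving Show

-- Logical concepts as predicates over typed objects.
-- Applying 'articulate' to an Artifact is ill-typed, not false;
-- the type system rules such sentences out before evaluation.
articulate :: Human -> Bool
articulate _ = True               -- stub truth-conditions, for illustration

-- A relation between objects of two different ontological types.
made :: Human -> Artifact -> Bool
made _ _ = True                   -- stub

main :: IO ()
main = do
  let sara  = Human "Sara"
      table = Artifact "table"
  print (articulate sara)         -- well-typed
  print (made sara table)         -- well-typed
  -- print (articulate table)     -- rejected at compile time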
Whether it was John Searle’s Chinese Room argument (Searle, 1980) or Roger Penrose’s argument for the non-computable nature of a mathematician’s insight – an argument based on Gödel’s incompleteness theorem (Penrose, 1989) – we have always had skeptics who questioned the possibility of realizing strong Artificial Intelligence (AI), or what has become known as Artificial General Intelligence (AGI). But this new book by Landgrebe and Smith (henceforth, L&S) is perhaps the strongest argument ever made against strong AI. It is a very extensive review of what building a mind essentially amounts to, drawing on insights and results from biology, physics, linguistics, computability, philosophy, and mathematics.
Over two decades ago a "quiet revolution" saw knowledge-based approaches in natural language processing (NLP) overwhelmingly replaced by quantitative (e.g., statistical, corpus-based, machine learning) methods. Although it is our firm belief that purely quantitative approaches cannot be the only paradigm for NLP, dissatisfaction with purely engineering approaches to the construction of large knowledge bases for NLP is somewhat justified. In this paper we hope to demonstrate that both trends are partly misguided and that the time has come to enrich logical semantics with an ontological structure that reflects our commonsense view of the world and the way we talk about it in ordinary language. It will be demonstrated that, assuming such an ontological structure, a number of challenges in the semantics of natural language (e.g., metonymy, intensionality, copredication, nominal compounds) can be properly and uniformly addressed.
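As an illustration of how such an ontological structure might bear on one of these phenomena, consider copredication ("the book is heavy and interesting"). The following is a minimal Haskell sketch under assumptions of ours, not the paper's formalism: a hypothetical Book type carries both a physical and an informational aspect, and each predicate selects the aspect of the appropriate ontological type.

```haskell
-- Copredication sketch: 'Book' as a pairing of two ontological aspects.
data PhysObj = PhysObj { weightKg :: Double } deriving Show
data Info    = Info    { topic :: String }    deriving Show

data Book = Book { phys :: PhysObj, content :: Info } deriving Show

heavy :: PhysObj -> Bool            -- a predicate of physical objects
heavy o = weightKg o > 2.0

interesting :: Info -> Bool         -- a predicate of informational content
interesting i = not (null (topic i))

-- "The book is heavy and interesting": each conjunct applies to the
-- aspect of the matching ontological type, so the sentence type-checks.
heavyAndInteresting :: Book -> Bool
heavyAndInteresting b = heavy (phys b) && interesting (content b)

main :: IO ()
main = print (heavyAndInteresting (Book (PhysObj 2.5) (Info "semantics")))
```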
We argue that logical semantics might have faltered due to its failure to distinguish between two fundamentally different types of concepts: ontological concepts, which should be types in a strongly-typed ontology, and logical concepts, which are predicates corresponding to properties of, and relations between, objects of various ontological types. We then show that accounting for this difference amounts to the integration of lexical and compositional semantics in one coherent framework, and to an embedding in our logical semantics of a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. We show that in such a framework a number of challenges in natural language semantics can be adequately and systematically treated.
We argue for a compositional semantics grounded in a strongly-typed ontology that reflects our commonsense view of the world and the way we talk about it. Assuming such a structure, we show that the semantics of various natural language phenomena may become nearly trivial.
The purpose of this paper is twofold: (i) we argue that the structure of commonsense knowledge must be discovered, rather than invented; and (ii) we argue that natural language, which is the best known theory of our (shared) commonsense knowledge, should itself be used as a guide to discovering the structure of commonsense knowledge. In addition to suggesting a systematic method for the discovery of the structure of commonsense knowledge, the approach we propose seems also to provide an explanation for a number of phenomena in natural language, such as metaphor, intensionality, and the semantics of nominal compounds. Admittedly, our ultimate goal is quite ambitious: it is no less than the systematic ‘discovery’ of a well-typed ontology of commonsense knowledge, and the subsequent formulation of the long-awaited goal of a meaning algebra.
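One way a well-typed ontology could explain nominal compounds, sketched in Haskell below, is that the implicit relation between the two nouns is suggested by their ontological types. All type names and relation-selection rules here are our illustrative assumptions, not the paper's proposal.

```haskell
-- Nominal-compound sketch: the implicit relation between the head noun
-- and its modifier is read off their ontological types (toy rules).
data OntoType = Substance | Animal | Dwelling deriving (Eq, Show)

implicitRelation :: OntoType -> OntoType -> String
implicitRelation Substance Dwelling = "made-of"   -- "brick house"
implicitRelation Animal    Dwelling = "for"       -- "dog house"
implicitRelation _         _        = "unknown"

main :: IO ()
main = do
  putStrLn ("brick house: " ++ implicitRelation Substance Dwelling)
  putStrLn ("dog house:   " ++ implicitRelation Animal Dwelling)
```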
Despite overwhelming evidence suggesting that quantifier scope is a phenomenon that must be treated at the pragmatic level, most computational treatments of scope ambiguities have thus far been collections of syntactically motivated preference rules. This might be due in part to the prevailing wisdom that a commonsense inferencing strategy would require the storage of, and reasoning with, a vast amount of background knowledge. In this paper we hope to demonstrate that the challenge in developing a commonsense inferencing strategy lies in the discovery of the relevant commonsense data and in a proper formulation of the inferencing strategy itself, and that a massive amount of background knowledge is not always required. In particular, we present a very effective procedure for resolving quantifier scope ambiguities at the pragmatic level using simple quantitative data that is readily available in most database environments.
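The abstract does not spell out the procedure, but one plausible instantiation of "simple quantitative data" is a cardinality check over a stored relation. The Haskell sketch below (the toy data and the helper preferWideScope are our illustrative assumptions, not necessarily the paper's exact procedure) prefers the wide-scope existential reading of "Every student read a book" exactly when the database shows all students related to one and the same book.

```haskell
import qualified Data.Set as Set

-- Toy (student, book) pairs standing in for a stored database relation.
readRel :: [(String, String)]
readRel = [("ann", "Emma"), ("bob", "Emma"), ("cam", "Emma")]

-- Prefer the wide-scope existential reading ("one book for all students")
-- exactly when every student is related to the same single book.
preferWideScope :: [(String, String)] -> Bool
preferWideScope rel = Set.size (Set.fromList (map snd rel)) == 1

main :: IO ()
main = putStrLn $
  if preferWideScope readRel
    then "preferred: EXISTS b . FORALL s . read(s, b)"
    else "preferred: FORALL s . EXISTS b . read(s, b)"
```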