The Turing Test is one of the most disputed topics in artificial intelligence, philosophy of mind, and cognitive science. This paper is a review of the past 50 years of the Turing Test. Philosophical debates, practical developments, and repercussions in related disciplines are all covered. We discuss Turing's ideas in detail and present the important comments that have been made on them. Within this context, behaviorism, consciousness, the 'other minds' problem, and similar topics in philosophy of mind are discussed. We also cover the sociological and psychological aspects of the Turing Test. Finally, we look at the current situation and analyze programs that have been developed with the aim of passing the Turing Test. We conclude that the Turing Test has been, and will continue to be, an influential and controversial topic. (This paper was reprinted in: The Turing Test: The Elusive Standard of Artificial Intelligence, J. H. Moor, ed., Kluwer Academic, pp. 23-78, 2003.)
The work described in this report is motivated by the desire to test the expressive possibilities of action language C+. The Causal Calculator (CCalc) is a system that answers queries about action domains described in a fragment of that language. The Zoo World and the Traffic World have been proposed by Erik Sandewall in his Logic Modelling Workshop, an environment for communicating axiomatizations of action domains of nontrivial size. The Zoo World consists of several cages and the exterior, gates between them, and animals of several species, including humans. Actions in this domain include moving within and between cages, opening and closing gates, and mounting and riding animals. The Traffic World includes vehicles moving continuously between road crossings subject to a number of restrictions, such as speed limits and keeping a fixed safety distance from other vehicles on the road. We show how to represent the two domains in the input language of CCalc, and how to use CCalc to test these representations.
The importance of contextual reasoning is emphasized by various researchers in AI. (A partial list includes John McCarthy and his group, R. V. Guha, Yoav Shoham, Giuseppe Attardi and Maria Simi, and Fausto Giunchiglia and his group.) Here, we survey the problem of formalizing context and explore what is needed for an acceptable account of this abstract notion.
At the heart of natural language processing is the understanding of context-dependent meanings. This paper presents a preliminary model of formal contexts based on situation theory. It also gives a worked-out example to show the use of contexts in lifting, i.e., how propositions holding in a particular context transform when they are moved to another context. This is useful in NLP applications where preserving meaning is a desideratum.
Strawson proposed in the early seventies an attractive threefold distinction regarding how context bears on the meaning of 'what is said' when a sentence is uttered. The proposed scheme is somewhat crude and, being aware of this aspect, Strawson himself raised various points to make it more adequate. In this paper, we review the scheme of Strawson, note his concerns, and add some of our own. However, our main point is to defend the essence of Strawson's approach and to recommend it as a starting point for research into intended meaning and context.
This paper argues that in addition to the familiar approach using formal contexts, there is now a need in artificial intelligence to study contexts as social constructs. As a successful example of the latter approach, I draw attention to 'interpretation' (in the sense of literary theory), viz. the reconstruction of the intended meaning of a literary text that takes into account the context in which the author assumed the reader would place the text. An important contribution here comes from Wendell Harris, enumerating the seven crucial dimensions of context: knowledge of reality, knowledge of language, and the authorial, generic, collective, specific, and textual dimensions. Finally, two recent approaches to interpretation, due to Jon Barwise and Jerry Hobbs, are analyzed as useful attempts which also come to grips with the notion of context. It must be noted that there has been a considerable body of contributions connecting linguistic structure with social context. For example, anthropological linguistics, from Bronislaw Malinowski onwards, has underlined the cultural context of discourse as essential to meaning. This viewpoint became prominent with the emergence of the ethnography of speaking in anthropology. Thus, conversation analysis represents a consistent formal effort to contribute to an analysis of the nature of context. While this paper emphasizes and reviews the literary theory approach, it makes various contacts with works of the latter kind (e.g., the landmark contributions of Erving Goffman, John Gumperz, William Hanks, John Heritage, Dell Hymes, et al.) in order to deliver a more balanced and complete study of the dimensions of context.
Papers in this special issue were written upon invitation. They were then subjected to the usual refereeing process of the Journal of Pragmatics. While we have attempted to cover almost all important areas in which context is employed as a conceptual apparatus, our coverage is clearly limited in scope. Accordingly, instead of a general updated overview of the use of context in every conceivable specific field (let's say the state-of-the-art of interdisciplinary research on context: a colossal/impossible enterprise!), we will offer the readers of this special issue an introduction to the problems involved in the study of context by way of eight papers encompassing eight different areas. These are (in alphabetical order): artificial intelligence, bilingualism, child development, cognitive science, conversation analysis, neuroscience, philosophy of language, and pragmatics. Our goal is to propose a multi-faceted view of context, and to stimulate further investigations of this fertile topic, which permeates our lives.
In traditional linguistic accounts of context, one thinks of the immediate features of a speech situation, that is, a situation in which an expression is uttered. Thus, features such as time, location, speaker, hearer, and preceding discourse are all parts of context. But context is a wider and more transcendental notion than these accounts imply. For one thing, context is a relational concept: it relates social actions to their surroundings, social actions to one another, individual actors to their surroundings, and the set of individual actors and their social actions to their surroundings.
Situated semantics can be regarded as an attempt at placing situational context (context of situation) at the center of all discussions of meaning. Situation theory is a theory of information content that takes context very seriously. Individuals, properties, relations, and spatiotemporal locations are basic constructs of situation theory. Individuals are conceived as invariants; having properties and standing in relations, they tend to persist in time and space. An anchoring function binds the location parameters to appropriate objects present in the grounding situation. Anchoring plays a major role in the working of constraints, which include nomic constraints, conventional constraints, and conditional constraints. Situation semantics develops a theory of meaning that is based on relations between situations. Situation semantics provides a fundamental framework for realistic semantics. The ideas emerging from research into situation semantics have been combined with linguistic work and have led to numerous useful proposals.
Situation semantics is an information-based approach to natural language semantics. Formulated by Jon Barwise and John Perry in their influential book Situations and Attitudes (1983), it is built upon the notion of a 'situation' --- a limited part of the real world that a cognitive agent can individuate and has access to. A situation represents a lump of information in terms of a collection of facts. It is through the actualist ontology of situations that the meaning of natural language utterances can be elucidated.
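The picture of a situation as a limited collection of facts lends itself to a simple data-structure reading. The following Python sketch is illustrative only: the names Infon and Situation, and the supports relation written here, are assumptions of this sketch, not Barwise and Perry's formal apparatus.

```python
# A minimal, assumed rendering of the situation-theoretic picture:
# a situation is a bounded collection of facts (infons) and either
# supports a given item of information or stays silent about it.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Infon:
    relation: str   # e.g. "sees"
    args: tuple     # the individuals standing in the relation
    polarity: int = 1  # 1: the fact holds; 0: it is denied

@dataclass
class Situation:
    facts: set = field(default_factory=set)

    def supports(self, infon):
        # read "s.supports(sigma)" as the situation s supporting infon sigma
        return infon in self.facts

s = Situation({Infon("sees", ("jon", "jerry"))})
print(s.supports(Infon("sees", ("jon", "jerry"))))  # True
print(s.supports(Infon("sees", ("jerry", "jon"))))  # False
```

The second query illustrates the partiality that the entry emphasizes: unlike a total world, a situation carries only a lump of information and says nothing about what lies outside it.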
Prof. Varol Akman, who chairs the Philosophy Department newly opened this year at Bilkent University and whose main research interest is artificial intelligence, discusses the relationship between artificial intelligence and philosophy, and the features of the Philosophy Department. (An interview, originally in Turkish, published in Bilkent Magazine about the then-new Bilkent Philosophy Department.)
In a paper published in 1992, Dennis Kurzon shows that silence does not necessarily mean lack of power: the silent response to a question may well be aiming at gaining control of a situation, viz. exercising power. I would like to extend Kurzon's analysis and argue that at times silence may mean derision or ridicule.
The papers that make up this special issue do not take idealized abstractions of context as their point of departure but rather start with the actual phenomena under study and later generalize. We agree that, more often than not, giving a formal model and providing a theory of a loaded notion – such as context – can lead to important insights. Thus, precise models of context and accompanying theories are useful. However, given the widely different fields, methodologies, and worldviews within which people study, an approach that starts with the root phenomena/problems and only attempts normalization and generalization post hoc might be a more productive way to proceed.
Planning --- devising a plan of action to reach a given goal --- is one of the most fundamental problems in artificial intelligence. This paper reviews the logicist approach to planning in robotics. (Originally in Turkish.)
Commonsense reasoning about the physical world, as exemplified by "Iron sinks in water" or "If a ball is dropped it gains speed," will be indispensable in future programs. We argue that to make such predictions (namely, envisioning), programs should use abstract entities (such as the gravitational field), principles (such as the principle of superposition), and laws (such as the conservation of energy) of physics for representation and reasoning. These arguments are in accord with a recent study in physics instruction where expert problem solving is related to the construction of physical representations that contain fictitious, imagined entities such as forces and momenta (Larkin 1983). We give several examples showing the power of physical representations.
Context and the indexical 'I'. Varol Akman, 2002. 1st North American Summer School in Logic, Language, and Information (NASSLLI) Workshop on Cognition: Formal Models and Experimental Results, John Perry (Organizer), CSLI, Stanford, CA.
John Perry argued that the clearest case of an indexical that relies only on the narrow context is 'I,' whose designation depends on the agent and nothing else. In this presentation, I give some examples which show that this view, while essentially correct, may have problems in some rare divergent cases.
We extend causal theories and study actions in domains involving multiple agents. Causal theories, invented by Yoav Shoham, are based on a temporal nonmonotonic logic and have computationally tractable aspects. Since Shoham's formalism does not provide an adequate mechanism for representing simultaneous actions and specifying their consequences, we introduce the notion of counteractions while preserving the efficiency and model-theoretic properties of causal theories.
Formalizing commonsense knowledge for reasoning about time has long been a central issue in AI. It has been recognized that the existing formalisms do not provide satisfactory solutions to some fundamental problems, notably the frame problem. Moreover, it has turned out that the inferences drawn do not always coincide with those one had intended when one wrote the axioms. These issues call for a well-defined formalism and useful computational utilities for reasoning about time and change. Yoav Shoham of Stanford University introduced in his 1986 Yale doctoral thesis an appealing temporal nonmonotonic logic and identified a class of theories, causal theories, which have computationally simple model-theoretic properties. This paper is a study towards building upon Shoham's work on causal theories. We concentrate on improving computational aspects of causal theories while preserving their model-theoretic properties.
Analogy-making is finding analogies between different situations. In this paper, we provide a new model of computational analogy-making which uses Situation Theory as its formal background. Situation Theory is a semantic and logical theory which provides a naturalistic way to represent relations in situations. The system described in this paper is aimed at solving analogy problems involving basic geometric figures in a chessboard-like environment.
This is a review of Survey of the State of the Art in Human Language Technology [editorial board: Ronald Cole (editor-in-chief), Joseph Mariani, Hans Uszkoreit, Annie Zaenen, Victor Zue; managing editors: Giovanni Battista Varile and Antonio Zampolli], Cambridge University Press (Studies in Natural Language Processing) and Giardini Editori e Stampatori, Pisa (Linguistica Computazionale, volumes XII-XIII), 1997.
Future robots should have common sense about the world in order to handle the problems they will encounter. A large part of this commonsense knowledge must be naive physics knowledge, since carrying out even the simplest everyday chores requires familiarity with the laws of physics. But how should one start codifying this knowledge? What kind of skills should be elicited from the experts (each and every one of us)? This paper attempts to provide some hints by studying the mental models of force and motion.
Anaphora resolution is one of the most active research areas in natural language processing. This study examines focusing as a tool for the resolution of pronouns, a kind of anaphora. Focusing is a discourse phenomenon, like anaphora. Candy Sidner formalized focusing in her 1979 MIT PhD thesis and devised several algorithms to resolve definite anaphora, including pronouns. She presented her theory in a computational framework but did not, in general, implement the algorithms. Her algorithms for focusing and pronoun resolution are implemented in this paper. This implementation provides a better comprehension of the theory from both a conceptual and a computational point of view. The resulting program is tested on different discourse segments, and an evaluation and analysis of the experiments are presented together with statistical results.
This is a review of Gul A. Agha’s Actors: A Model of Concurrent Computation in Distributed Systems (The MIT Press, Cambridge, MA, 1987), a part of the MIT Press Series in Artificial Intelligence, edited by Patrick Winston, Michael Brady, and Daniel Bobrow.
In Strawson's Entity and Identity, there are two essays (Chapters 11 and 12) which study the notion of context. In these essays, Strawson advances a threefold distinction regarding how context bears on the meaning of 'what is said' when a sentence is uttered. In this paper, we (i) review Strawson's original scheme and summarize his improvements to it, and (ii) add our own improvements to make it even more thoroughgoing. We also show that unless it is elaborated with several considerations (mostly based on our work regarding context as a social construct and contextualizing as a form of social action) it cannot function as a realistic initiative towards building common sense models of how intended meaning is achieved.
Having been influenced by John Perry's 1997 article, "Indexicals and Demonstratives," in this paper I take a closer look at contexts for indexicals, more specifically the indexical "I." (N.B. The adjective in the title is not misspelt; it is used in the sense of the leading brand of premium vodka.).
We describe an experimental approach toward implementing a commonsense "microtheory" for buying and selling. Our prototype system characterizes how intelligent agents hold items and money, how they buy and sell items, and the way money and items are transferred. The ontology of the system includes money (cash, check, credit card), agents (people, organizations), items (movable, real estate, service), barter, and the notions of transfer, loan, buying by installments, profit, and loss.
A logical formalization of emotions is considered to be tricky because they appear to have no strict types, reasons, and consequences. On the other hand, such a formalization is crucial for commonsense reasoning. Here, the so-called "object directedness" of emotions is studied by using Helen Nissenbaum's influential ideas.
Ken Forbus's Qualitative Process Theory (QPT) is a popular theory for reasoning about the physical aspects of the daily world. Qualitative Process Theory Using Linguistic Variables by Bruce D'Ambrosio (Springer-Verlag, New York, 1989) is an attempt to fill some gaps in QPT.
The papers you will find in this special issue of JoLLI develop the letter and spirit of Turing's original contributions. They do not lazily fall back into the same old sofa, but follow – or question – the inspiring ideas of a great man in the search for new, more precise conclusions. It is refreshing to know that the fertile landscape created by Alan Turing remains a source of novel ideas.
The success of set theory as a foundation for mathematics inspires its use in artificial intelligence, particularly in commonsense reasoning. In this survey, we briefly review classical set theory from an AI perspective, and then consider alternative set theories. Desirable properties of a possible commonsense set theory are investigated, treating different aspects like cumulative hierarchy, self-reference, cardinality, etc. Assorted examples from the ground-breaking research on the subject are also given.
Review of "Artificial Intelligence: An MIT Perspective, Volume 1: Expert Problem Solving, Natural Language Understanding, Intelligent Computer Coaches, Representation and Learning," Patrick Henry Winston & Richard Henry Brown (eds.), The MIT Press, Cambridge, MA, 1979.
The merits of set theory as a foundational tool in mathematics stimulate its use in various areas of artificial intelligence, in particular intelligent information systems. In this paper, a study of various nonstandard treatments of set theory from this perspective is offered. Applications of these alternative set theories to information or knowledge management are surveyed.
Review of "Artificial Intelligence: An MIT Perspective, Volume 2: Understanding Vision, Manipulation, Computer Design, Symbol Manipulation," Patrick Henry Winston & Richard Henry Brown (eds.), The MIT Press, Cambridge, MA, 2nd printing, 1980.
This is a brief reply to Herbert A. Simon's fine paper "Literary Criticism: A Cognitive Approach," Stanford Humanities Review, Special Supplement (Bridging the Gap: Where Cognitive Science Meets Literary Criticism), vol. 4, no. 1, pp. 1-26, Spring 1994.
Sir John Lyons's Linguistic Semantics: An Introduction (Cambridge, UK: Cambridge University Press, 1995) is a tolerable addition to the list of half a dozen or so impressive titles he has produced on linguistic subjects over the years. This book was initially planned to be a second edition of his Language, Meaning and Context (Lyons, 1981). However, in the end it turned out to be a successor and replacement. For it is, in the author's words, a very different book compared to the 1981 volume: it is much longer, treats topics missing in the earlier volume, and is written in a different style.
This paper investigates an alternative set theory (due to Peter Aczel) called Hyperset Theory. Aczel uses a graphical representation for sets and thereby allows the representation of non-well-founded sets. A program, called HYPERSOLVER, which can solve systems of equations defined in terms of sets in the universe of this new theory is presented. This may be a useful tool for commonsense reasoning.
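The flavor of solving set equations in a non-well-founded universe can be conveyed with a small sketch. The Python code below is not HYPERSOLVER itself but an assumed, minimal rendering of the underlying idea: a system of flat set equations denotes a graph, and two variables denote the same hyperset exactly when they are bisimilar. Equality is decided by partition refinement down to the coarsest bisimulation.

```python
# Hypersets as systems of flat set equations: each variable maps to the
# list of variables it contains. Two variables denote the same hyperset
# iff they are bisimilar in the induced membership graph.

def bisimulation_classes(equations):
    """Refine from the coarsest partition: split classes until two
    variables share a class iff their members' classes coincide."""
    cls = {v: 0 for v in equations}  # start with everything in one class
    while True:
        # signature of v: the set of classes of v's members
        sig = {v: frozenset(cls[m] for m in members)
               for v, members in equations.items()}
        index, new_cls = {}, {}
        for v in equations:  # re-number classes by signature
            key = sig[v]
            if key not in index:
                index[key] = len(index)
            new_cls[v] = index[key]
        if new_cls == cls:   # stable partition = coarsest bisimulation
            return cls
        cls = new_cls

# omega = {omega}, and the two-step loop a = {b}, b = {a}: all solve x = {x}.
eqs = {"omega": ["omega"], "a": ["b"], "b": ["a"], "empty": []}
classes = bisimulation_classes(eqs)
print(classes["omega"] == classes["a"] == classes["b"])  # True
print(classes["empty"] == classes["omega"])              # False
```

Here omega, a, and b are bisimilar, so all three equations have the single hyperset Omega as their solution, while the empty set is separated from them at the first refinement step.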
In this special issue of Minds and Machines ("Situations and Artificial Intelligence") we take a close look at recent situation-theoretic research which has mostly originated within a philosophical framework but promises to have strong implications for Artificial Intelligence researchers. The seven papers which make up this special issue (three of the papers appear in Minds and Machines 9(1)) demonstrate the advantages of the situation-based approach towards problems with a definite AI flavor.
P.F. Strawson proposed in the early seventies a threefold distinction regarding how context bears on the meaning of "what is said" when a sentence is uttered. The proposal was somewhat tentative and, being aware of this aspect, Strawson himself raised various questions to make it more adequate. In this paper, we review Strawson's scheme, note his concerns, and add some of our own. We also defend its essence and recommend it as an insightful entry point into the interplay of intended meaning and context.
"Language has never been accessible to me in the way that it was for Sachs. I'm shut off from my own thoughts, trapped in a no-man's-land between feeling and articulation, and no matter how hard I try to express myself, I can rarely come up with more than a confused stammer. Sachs never had any of these difficulties. Words and things matched up for him, whereas for me they are constantly breaking apart, flying off in a hundred different directions. I (...) spend most of my time picking up the pieces and gluing them back together, but Sachs never had to stumble around like that, hunting through garbage dumps and trash bins, wondering if he hadn't fit the wrong pieces next to each other. His uncertainties were of a different order, but no matter how hard life became for him in other ways, words were never his problem." -Paul Auster, Leviathan. (shrink)
Deflationism, one of the influential philosophical doctrines of truth, holds that there is no property of truth, and that overt uses of the predicate "true" are redundant. However, the hypothetical examples used by theorists to exemplify deflationism are isolated sentences, offering little with which to examine what the predicate adds to meaning in context. We oppose the theory not on philosophical but on empirical grounds. We collect 7,610 occurrences of "it is true that" from 10 influential periodicals published in the United States. We classify and annotate these with respect to the positions of the coordinating and subordinating conjunctions they contain. In this way we investigate the contextual relationships between the proposition following "it is true that" and its surroundings. Overall, 34 different syntactic patterns are encountered. In some occurrences of "true", the predicate acts in the same manner as a performative verb does. These occurrences, having been observed in linguistically reliable media, constitute pragmatic counterexamples to deflationism.