There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the symbol grounding problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: iconic representations, which are analogs of the proximal sensory projections of distal objects and events, and categorical representations, which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their categorical representations. Higher-order symbolic representations, grounded in these elementary symbols, consist of symbol strings describing category membership relations. Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic module, however; the symbolic functions would emerge as an intrinsically dedicated symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
A provisional model is presented in which categorical perception (CP) provides our basic or elementary categories. In acquiring a category we learn to label or identify positive and negative instances from a sample of confusable alternatives. Two kinds of internal representation are built up in this learning by "acquaintance": (1) an iconic representation that subserves our similarity judgments and (2) an analog/digital feature-filter that picks out the invariant information allowing us to categorize the instances correctly. This second, categorical representation is associated with the category name. Category names then serve as the atomic symbols for a third representational system, the (3) symbolic representations that underlie language and that make it possible for us to learn by "description." Connectionism is one possible mechanism for learning the sensory invariants underlying categorization and naming. Among the implications of the model are (a) the "cognitive identity of (current) indiscriminables": Categories and their representations can only be provisional and approximate, relative to the alternatives encountered to date, rather than "exact." There is also (b) no such thing as an absolute "feature," only those features that are invariant within a particular context of confusable alternatives. Contrary to prevailing "prototype" views, however, (c) such provisionally invariant features must underlie successful categorization, and must be "sufficient" (at least in the "satisficing" sense) to subserve reliable performance with all-or-none, bounded categories, as in CP. Finally, the model brings out some basic limitations of the "symbol-manipulative" approach to modeling cognition, showing how (d) symbol meanings must be functionally grounded in nonsymbolic, "shape-preserving" representations -- iconic and categorical ones. Otherwise, all symbol interpretations are ungrounded and indeterminate. This amounts to a principled call for a psychophysical (rather than a neural) "bottom-up" approach to cognition.
Connectionism and computationalism are currently vying for hegemony in cognitive modeling. At first glance the opposition seems incoherent, because connectionism is itself computational, but the form of computationalism that has been the prime candidate for encoding the "language of thought" has been symbolic computationalism (Dietrich 1990; Fodor 1975; Harnad 1990c; Newell 1980; Pylyshyn 1984), whereas connectionism is nonsymbolic (Fodor & Pylyshyn 1988) or, as some have hopefully dubbed it, "subsymbolic" (Smolensky 1988). This paper will examine what is and is not a symbol system. A hybrid nonsymbolic/symbolic system will be sketched in which the meanings of the symbols are grounded bottom-up in the system's capacity to discriminate and identify the objects they refer to. Neural nets are one possible mechanism for learning the invariants in the analog sensory projection on which successful categorization is based. "Categorical perception" (Harnad 1987a), in which similarity space is "warped" in the service of categorization, turns out to be exhibited by both people and nets, and may mediate the constraints exerted by the analog world of objects on the formal world of symbols.
There are many entry points into the problem of categorization. Two particularly important ones are the so-called top-down and bottom-up approaches. Top-down approaches such as artificial intelligence begin with the symbolic names and descriptions for some categories already given; computer programs are written to manipulate the symbols. Cognitive modeling involves the further assumption that such symbol-interactions resemble the way our brains do categorization. An explicit expectation of the top-down approach is that it will eventually join with the bottom-up approach, which tries to model how the hardware of the brain works: sensory systems, motor systems and neural activity in general. The assumption is that the symbolic cognitive functions will be implemented in brain function and linked to the sense organs and the organs of movement in roughly the way a program is implemented in a computer, with its links to peripheral devices such as transducers and effectors.
Searle's celebrated Chinese Room Argument has shaken the foundations of Artificial Intelligence. Many refutations have been attempted, but none seem convincing. This paper is an attempt to sort out explicitly the assumptions and the logical, methodological and empirical points of disagreement. Searle is shown to have underestimated some features of computer modeling, but the heart of the issue turns out to be an empirical question about the scope and limits of the purely symbolic model of the mind. Nonsymbolic modeling turns out to be immune to the Chinese Room Argument. The issues discussed include the Total Turing Test, modularity, neural modeling, robotics, causality and the symbol-grounding problem.
2. Invariant Sensorimotor Features ("Affordances"). To say this is not to declare oneself a Gibsonian, whatever that means. It is merely to point out that what a sensorimotor system can do is determined by what can be extracted from its motor interactions with its sensory input. If you lack sonar sensors, then your sensorimotor system cannot do what a bat's can do, at least not without the help of instruments. Light stimulation affords color vision for those of us with the right sensory apparatus, but not for those of us who are color-blind. The geometric fact that, when we move, the "shadows" cast on our retina by nearby objects move faster than the shadows of further objects means that, for those of us with normal vision, our visual input affords depth perception. From more complicated facts of projective and solid geometry it follows that a 3-dimensional shape, such as, say, a boomerang, can be recognized as being the same shape, and the same size, even though the size and shape of its shadow on our retinas changes as we move in relation to it or it moves in relation to us. Its shape is said to be invariant under these sensorimotor transformations, and our visual systems can detect and extract that invariance, and translate it into a visual constancy. So we keep seeing a boomerang of the same shape and size even though the shape and size of its retinal shadows keep changing.
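The motion-parallax fact cited above can be made precise with one line of standard projective geometry (an illustrative aside, not part of the original passage): if an observer translates with speed $v$ and a stationary point lies at perpendicular distance $d$ from the path, with $x$ the remaining along-track distance, then the point's visual direction $\theta$ (measured from the direction of motion) changes at the rate

\[ \dot{\theta} \;=\; \frac{v\,d}{x^{2}+d^{2}} \;=\; \frac{v}{d}\,\sin^{2}\theta , \]

so at any given bearing the retinal angular velocity scales as $1/d$: the projections of nearer objects sweep faster across the retina, which is the depth cue the passage describes.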
Turing's celebrated 1950 paper proposes a very general methodological criterion for modelling mental function: total functional equivalence and indistinguishability. His criterion gives rise to a hierarchy of Turing Tests, from subtotal ("toy") fragments of our functions (t1), to total symbolic (pen-pal) function (T2 -- the standard Turing Test), to total external sensorimotor (robotic) function (T3), to total internal microfunction (T4), to total indistinguishability in every empirically discernible respect (T5). This is a "reverse-engineering" hierarchy of (decreasing) empirical underdetermination of the theory by the data. Level t1 is clearly too underdetermined, T2 is vulnerable to a counterexample (Searle's Chinese Room Argument), and T4 and T5 are arbitrarily overdetermined. Hence T3 is the appropriate target level for cognitive science. When it is reached, however, there will still remain more unanswerable questions than when Physics reaches its Grand Unified Theory of Everything (GUTE), because of the mind/body problem and the other-minds problem, both of which are inherent in this empirical domain, even though Turing hardly mentions them.
A robot that is functionally indistinguishable from us may or may not be a mindless Zombie. There will never be any way to know, yet its functional principles will be as close as we can ever get to explaining the mind.
It is important to understand that the Turing Test is not, nor was it intended to be, a trick; how well one can fool someone is not a measure of scientific progress. The TT is an empirical criterion: It sets AI's empirical goal to be to generate human-scale performance capacity. This goal will be met when the candidate's performance is totally indistinguishable from a human's. Until then, the TT simply represents what it is that AI must endeavor eventually to accomplish scientifically.
Computation is interpretable symbol manipulation. Symbols are objects that are manipulated on the basis of rules operating only on their shapes, which are arbitrary in relation to what they can be interpreted as meaning. Even if one accepts the Church/Turing Thesis that computation is unique, universal and very near omnipotent, not everything is a computer, because not everything can be given a systematic interpretation; and certainly everything can't be given every systematic interpretation. But even after computers and computation have been successfully distinguished from other kinds of things, mental states will not just be the implementations of the right symbol systems, because of the symbol grounding problem: The interpretation of a symbol system is not intrinsic to the system; it is projected onto it by the interpreter. This is not true of our thoughts. We must accordingly be more than just computers. My guess is that the meanings of our symbols are grounded in the substrate of our robotic capacity to interact with that real world of objects, events and states of affairs that our symbols are systematically interpretable as being about.
When certain formal symbol systems (e.g., computer programs) are implemented as dynamic physical symbol systems (e.g., when they are run on a computer) their activity can be interpreted at higher levels (e.g., binary code can be interpreted as LISP, LISP code can be interpreted as English, and English can be interpreted as a meaningful conversation). These higher levels of interpretability are called "virtual" systems. If such a virtual system is interpretable as if it had a mind, is such a "virtual mind" real? This is the question addressed in this "virtual" symposium, originally conducted electronically among four cognitive scientists: Donald Perlis, a computer scientist, argues that according to the computationalist thesis, virtual minds are real and hence Searle's Chinese Room Argument fails, because if Searle memorized and executed a program that could pass the Turing Test in Chinese he would have a second, virtual, Chinese-understanding mind of which he was unaware (as in multiple personality). Stevan Harnad, a psychologist, argues that Searle's Argument is valid, virtual minds are just hermeneutic overinterpretations, and symbols must be grounded in the real world of objects, not just the virtual world of interpretations. Computer scientist Patrick Hayes argues that Searle's Argument fails, but because Searle does not really implement the program: A real implementation must not be homuncular but mindless and mechanical, like a computer. Only then can it give rise to a mind at the virtual level. Philosopher Ned Block suggests that there is no reason a mindful implementation would not be a real one.
A "machine" is any causal physical system, hence we are machines, hence machines can be conscious. The question is: which kinds of machines can be conscious? Chances are that robots that can pass the Turing Test -- completely indistinguishable from us in their behavioral capacities -- can be conscious (i.e. feel), but we can never be sure (because of the "other-minds" problem). And we can never know HOW they have minds, because of the "mind/body" problem. We can only know how (...) they pass the Turing Test, but not how, why or whether that makes them feel. (shrink)
Cognition is thinking; it feels like something to think, and only those who can feel can think. There are also things that thinkers can do. We know neither how thinkers can think nor how they are able to do what they can do. We are waiting for cognitive science to discover how. Cognitive science does this by testing hypotheses about what processes can generate what doing (“know-how”). This is called the Turing Test. It cannot test whether a process can generate feeling, hence thinking -- only whether it can generate doing. The processes that generate thinking and know-how are “distributed” within the heads of thinkers, but not across thinkers’ heads. Hence there is no such thing as distributed cognition, only collaborative cognition. Email and the Web have spawned a new form of collaborative cognition that draws upon individual brains’ real-time interactive potential in ways that were not possible in oral, written or print interactions.
Scholars studying the origins and evolution of language are also interested in the general issue of the evolution of cognition. Language is not an isolated capability of the individual, but has intrinsic relationships with many other behavioral, cognitive, and social abilities. By understanding the mechanisms underlying the evolution of linguistic abilities, it is possible to understand the evolution of cognitive abilities. Cognitivism, one of the current approaches in psychology and cognitive science, proposes that symbol systems capture mental phenomena, and attributes cognitive validity to them. Therefore, in the same way that language is considered the prototype of cognitive abilities, a symbol system has become the prototype for studying language and cognitive systems. Symbol systems are advantageous as they are easily studied through computer simulation (a computer program is a symbol system itself), and this is why language is often studied using computational models.
Harnad's main argument can be roughly summarised as follows: due to Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, because the symbols are not grounded in the real world, hence without meaning. However, a symbol system that is connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the present debate, but is also a contribution to a long-lasting discussion about such questions as: Can a computer think? If yes, would this be solely by virtue of its program? Is the Turing Test appropriate for deciding whether a computer thinks?
What language allows us to do is to "steal" categories quickly and effortlessly through hearsay instead of having to earn them the hard way, through risky and time-consuming sensorimotor "toil" (trial-and-error learning, guided by corrective feedback from the consequences of miscategorisation). To make such linguistic "theft" possible, however, some, at least, of the denoting symbols of language must first be grounded in categories that have been earned through sensorimotor toil (or else in categories that have already been "prepared" for us through Darwinian theft by the genes of our ancestors); it cannot be linguistic theft all the way down. The symbols that denote categories must be grounded in the capacity to sort, label and interact with the proximal sensorimotor projections of their distal category-members in a way that coheres systematically with their semantic interpretations, both for individual symbols, and for symbols strung together to express truth-value-bearing propositions.
Many special problems crop up when evolutionary theory turns, quite naturally, to the question of the adaptive value and causal role of consciousness in human and nonhuman organisms. One problem is that -- unless we are to be dualists, treating it as an independent nonphysical force -- consciousness could not have had an independent adaptive function of its own, over and above whatever behavioral and physiological functions it "supervenes" on, because evolution is completely blind to the difference between a conscious organism and a functionally equivalent (Turing Indistinguishable) nonconscious "Zombie" organism: In other words, the Blind Watchmaker, a functionalist if ever there was one, is no more a mind reader than we are. Hence Turing-Indistinguishability = Darwin-Indistinguishability. It by no means follows from this, however, that human behavior is therefore to be explained only by the push-pull dynamics of Zombie determinism, as dictated by calculations of "inclusive fitness" and "evolutionarily stable strategies." We are conscious, and, more important, that consciousness is piggy-backing somehow on the vast complex of unobservable internal activity -- call it "cognition" -- that is really responsible for generating all of our behavioral capacities. Hence, except in the palpable presence of the irrational (e.g., our sexual urges) where distal Darwinian factors still have some proximal sway, it is as sensible to seek a Darwinian rather than a cognitive explanation for most of our current behavior as it is to seek a cosmological rather than an engineering explanation of an automobile's behavior. Let evolutionary theory explain what shaped our cognitive capacity (Steklis & Harnad 1976; Harnad 1996), but let cognitive theory explain our resulting behavior.
How many words—and which ones—are sufficient to define all other words? When dictionaries are analyzed as directed graphs with links from defining words to defined words, they reveal a latent structure. Recursively removing all words that are reachable by definition but that do not define any further words reduces the dictionary to a Kernel of about 10% of its size. This is still not the smallest number of words that can define all the rest. About 75% of the Kernel turns out to be its Core, a “Strongly Connected Subset” of words with a definitional path to and from any pair of its words and no word's definition depending on a word outside the set. But the Core cannot define all the rest of the dictionary. The 25% of the Kernel surrounding the Core consists of small strongly connected subsets of words: the Satellites. The size of the smallest set of words that can define all the rest—the graph's “minimum feedback vertex set” or MinSet—is about 1% of the dictionary, about 15% of the Kernel, and part-Core/part-Satellite. But every dictionary has a huge number of MinSets. The Core words are learned earlier, more frequent, and less concrete than the Satellites, which are in turn learned earlier, more frequent, but more concrete than the rest of the Dictionary. In principle, only one MinSet's words would need to be grounded through the sensorimotor capacity to recognize and categorize their referents. In a dual-code sensorimotor/symbolic model of the mental lexicon, the symbolic code could do all the rest through recombinatory definition.
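A minimal sketch of the graph operations just described, run on an invented eight-word toy dictionary (the word list, the use of networkx, and the treatment of the Core as the Kernel's largest strongly connected component are illustrative assumptions, not the paper's data or pipeline):

```python
import networkx as nx

# Toy dictionary: each word -> the words used in its definition (invented).
defs = {
    "animal": ["living", "thing"],
    "dog":    ["animal", "bark"],
    "bark":   ["sound", "dog"],
    "sound":  ["thing", "bark"],
    "living": ["thing"],
    "thing":  ["living"],
    "poodle": ["dog", "small"],
    "small":  ["thing"],
}
# Edges run from each defining word to the word it helps define.
G = nx.DiGraph([(d, w) for w, ds in defs.items() for d in ds])

# Kernel: recursively strip words that define nothing further (out-degree 0).
K = G.copy()
while True:
    sinks = [n for n in K if K.out_degree(n) == 0]
    if not sinks:
        break
    K.remove_nodes_from(sinks)

# Core: here approximated as the Kernel's largest strongly connected component.
core = max(nx.strongly_connected_components(K), key=len)

print("Kernel:", sorted(K))   # 6 of the 8 toy words survive
print("Core:  ", sorted(core))
# A MinSet would be a minimum feedback vertex set of G; computing it exactly
# is NP-hard in general, so it is omitted from this sketch.
```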
Peer Review and Copyright each have a double role: Formal refereeing protects (R1) the author from publishing and (R2) the reader from reading papers that are not of sufficient quality. Copyright protects the author from (C1) theft of text and (C2) theft of authorship. It has been suggested that in the electronic medium we can dispense with peer review, "publish" everything, and let browsing and commentary do the quality control. It has also been suggested that special safeguards and laws may be needed to enforce copyright on the Net. I will argue, based on 20 years of editing Behavioral and Brain Sciences, a refereed (paper) journal of peer commentary, 8 years of editing Psycoloquy, a refereed electronic journal of peer commentary, and 1 year of implementing CogPrints, an electronic archive of unrefereed preprints and refereed reprints in the cognitive sciences modeled on the Los Alamos Physics Eprint Archive, that (i) peer commentary is a supplement, not a substitute, for peer review, (ii) the authors of refereed papers, who get and seek no royalties from the sale of their texts, only want protection from theft of authorship on the Net, not from theft of text, which is a victimless crime, and hence (iii) the trade model (subscription, site license or pay-per-view) should be replaced by author page-charges to cover the much reduced cost of implementing peer review, editing and archiving on the Net, in exchange for making the learned serial corpus available for free for all forever.
Some of the features of animal and human categorical perception (CP) for color, pitch and speech are exhibited by neural net simulations of CP with one-dimensional inputs: When a backprop net is trained to discriminate and then categorize a set of stimuli, the second task is accomplished by "warping" the similarity space (compressing within-category distances and expanding between-category distances). This natural side-effect also occurs in humans and animals. Such CP categories, consisting of named, bounded regions of similarity space, may be the ground level out of which higher-order categories are constructed; nets are one possible candidate for the mechanism that learns the sensorimotor invariants that connect arbitrary names (elementary symbols?) to the nonarbitrary shapes of objects. This paper examines how and why such compression/expansion effects occur in neural nets.
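A tiny numerical illustration (not taken from the paper) of what "warping" the similarity space means: pushing a one-dimensional continuum through a squashing function centred on the category boundary compresses within-category distances and stretches the between-category gap. The boundary location and the sigmoid gain are arbitrary choices.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 8)                      # 8 stimuli; category boundary at 0.5
warped = 1.0 / (1.0 + np.exp(-12.0 * (x - 0.5)))  # boundary-centred sigmoid squashing

print("raw gaps:   ", np.round(np.diff(x), 3))
print("warped gaps:", np.round(np.diff(warped), 3))
# Neighbouring stimuli on the same side of the boundary end up closer together
# (within-category compression), while the pair straddling 0.5 is pushed apart
# (between-category separation) -- the CP signature described above.
```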
Both Artificial Life and Artificial Mind are branches of what Dennett has called "reverse engineering": Ordinary engineering attempts to build systems to meet certain functional specifications; reverse bioengineering attempts to understand how systems that have already been built by the Blind Watchmaker work. Computational modelling (virtual life) can capture the formal principles of life, perhaps predict and explain it completely, but it can no more be alive than a virtual forest fire can be hot. In itself, a computational model is just an ungrounded symbol system; no matter how closely it matches the properties of what is being modelled, it matches them only formally, with the mediation of an interpretation. Synthetic life is not open to this objection, but it is still an open question how close a functional equivalence is needed in order to capture life. Close enough to fool the Blind Watchmaker is probably close enough, but would that require molecular indistinguishability, and if so, do we really need to go that far?
Do scientists agree? It is not only unrealistic to suppose that they do, but probably just as unrealistic to think that they ought to. Agreement is for what is already established scientific history. The current and vital ongoing aspect of science consists of an active and often heated interaction of data, ideas and minds, in a process one might call "creative disagreement." The "scientific method" is largely derived from a reconstruction based on selective hindsight. What actually goes on has much less the flavor of a systematic method than of trial and error, conjecture, chance, competition and even dialectic.
Critique of Computationalism as merely projecting hermeneutics (i.e., meaning originating from the mind of an external interpreter) onto otherwise intrinsically meaningless symbols. Projecting an interpretation onto a symbol system results in its being reflected back, in a spuriously self-confirming way.
1.1 The predominant approach to cognitive modeling is still what has come to be called "computationalism" (Dietrich 1990, Harnad 1990b), the hypothesis that cognition is computation. The more recent rival approach is "connectionism" (Hanson & Burr 1990, McClelland & Rumelhart 1986), the hypothesis that cognition is a dynamic pattern of connections and activations in a "neural net." Are computationalism and connectionism really deeply different from one another, and if so, should they compete for cognitive hegemony, or should they collaborate? These questions will be addressed here, in the context of an obstacle that is faced by computationalism (as well as by connectionism if it is either computational or seeks cognitive hegemony on its own): The symbol grounding problem (Harnad 1990).
The Mind/Body Problem is about causation, not correlation. And its solution will require a mechanism in which the mental component somehow manages to play a causal role of its own, rather than just supervening superfluously on other, nonmental components that look, for all the world, as if they can do the full causal job perfectly well without it. Correlations confirm that M does indeed "supervene" on B, but causality is needed to show how/why M is not supererogatory; and that's the hard part.
William Gardner's proposal to establish a searchable, retrievable electronic archive is fine, as far as it goes. The potential role of electronic networks in scientific publication, however, goes far beyond providing searchable electronic archives for electronic journals. The whole process of scholarly communication is currently undergoing a revolution comparable to the one occasioned by the invention of printing. On the brink of intellectual perestroika is that vast PREPUBLICATION phase of scientific inquiry in which ideas and findings are discussed informally with colleagues, presented more formally in seminars, conferences and symposia, and distributed still more widely in the form of preprints and tech reports that have undergone various degrees of peer review. It has now become possible to do all of this in a remarkable new way that is not only incomparably more thorough and systematic in its distribution, potentially global in scale, and almost instantaneous in speed, but so unprecedentedly interactive that it will substantially restructure the pursuit of knowledge.
After people learn to sort objects into categories they see them differently. Members of the same category look more alike and members of different categories look more different. This phenomenon of within-category compression and between-category separation in similarity space is called categorical perception (CP). It is exhibited by human subjects, animals and neural net models. In backpropagation nets trained first to auto-associate 12 stimuli varying along a one-dimensional continuum and then to sort them into 3 categories, CP arises as a natural side-effect because of four factors: (1) Maximal interstimulus separation in hidden-unit space during auto-association learning, (2) movement toward linear separability during categorization learning, (3) inverse-distance repulsive force exerted by the between-category boundary, and (4) the modulating effects of input iconicity, especially in interpolating CP to untrained regions of the continuum. Once similarity space has been "warped" in this way, the compressed and separated "chunks" have symbolic labels which could then be combined into symbol strings that constitute propositions about objects. The meanings of such symbolic representations would be "grounded" in the system's capacity to pick out from their sensory projections the object categories that the propositions were about.
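A minimal sketch of the regime described above, assuming NumPy and scikit-learn (it is not the authors' simulation code): 12 stimuli along a continuum, coarse-coded as "thermometer" input vectors, are auto-associated and then sorted into 3 categories, and within- versus between-category distances in hidden-unit space are compared. For simplicity two separate one-hidden-layer nets are trained rather than continuing training of a single net, and all sizes and settings are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor, MLPClassifier

# 12 stimuli on a one-dimensional continuum, thermometer-coded so that
# stimulus i activates the first i+1 of 12 input lines (a crude "iconic" code).
X = np.tril(np.ones((12, 12)))
y = np.repeat([0, 1, 2], 4)          # 3 categories of 4 stimuli each

def hidden(net, X):
    """Hidden-unit activations of a one-hidden-layer tanh scikit-learn net."""
    return np.tanh(X @ net.coefs_[0] + net.intercepts_[0])

def warp(H, y):
    """Mean within-category vs. between-category pairwise hidden-unit distance."""
    d = np.linalg.norm(H[:, None] - H[None, :], axis=-1)
    same = y[:, None] == y[None, :]
    off_diag = ~np.eye(len(y), dtype=bool)
    return d[same & off_diag].mean(), d[~same].mean()

# Stage 1: auto-association (discrimination) -- reproduce the input pattern.
auto = MLPRegressor(hidden_layer_sizes=(8,), activation='tanh',
                    max_iter=5000, random_state=0).fit(X, X)
# Stage 2: categorization -- sort the same stimuli into the 3 labelled classes.
cat = MLPClassifier(hidden_layer_sizes=(8,), activation='tanh',
                    max_iter=5000, random_state=0).fit(X, y)

for name, net in [("auto-association", auto), ("categorization", cat)]:
    within, between = warp(hidden(net, X), y)
    print(f"{name:16s} within={within:.2f}  between={between:.2f}")
# CP-style warping would show up as relatively compressed within-category and
# expanded between-category distances after the categorization stage.
```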
Some of the papers in this special issue distribute cognition between what is going on inside individual cognizers' heads and their outside worlds; others distribute cognition among different individual cognizers. Turing's criterion for cognition was individual, autonomous input/output capacity. It is not clear that distributed cognition could pass the Turing Test.
"Symbol Grounding" is beginning to mean too many things to too many people. My own construal has always been simple: Cognition cannot be just computation, because computation is just the systematically interpretable manipulation of meaningless symbols, whereas the meanings of my thoughts don't depend on their interpretability or interpretation by someone else. On pain of infinite regress, then, symbol meanings must be grounded in something other than just their interpretability if they are to be candidates for what is going on (...) in our heads. Neural nets may be one way to ground the names of concrete objects and events in the capacity to categorize them (by learning the invariants in their sensorimotor projections). These grounded elementary symbols could then be combined into symbol strings expressing propositions about more abstract categories. Grounding does not equal meaning, however, and does not solve any philosophical problems. (shrink)
According to "computationalism" (Newell, 1980; Pylyshyn 1984; Dietrich 1990), mental states are computational states, so if one wishes to build a mind, one is actually looking for the right program to run on a digital computer. A computer program is a semantically interpretable formal symbol system consisting of rules for manipulating symbols on the basis of their shapes, which are arbitrary in relation to what they can be systematically interpreted as meaning. According to computationalism, every physical implementation of the right (...) symbol system will have mental states. (shrink)
This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.
Artificial life can take two forms: synthetic and virtual. In principle, the materials and properties of synthetic living systems could differ radically from those of natural living systems yet still resemble them enough to be really alive if they are grounded in the relevant causal interactions with the real world. Virtual (purely computational) "living" systems, in contrast, are just ungrounded symbol systems that are systematically interpretable as if they were alive; in reality they are no more alive than a virtual furnace is hot. Virtual systems are better viewed as "symbolic oracles" that can be used (interpreted) to predict and explain real systems, but not to instantiate them. The vitalistic overinterpretation of virtual life is related to the animistic overinterpretation of virtual minds and is probably based on an implicit (and possibly erroneous) intuition that living things have actual or potential mental lives.
The experimental analysis of naming behavior can tell us exactly the kinds of things Horne & Lowe (H & L) report here: (1) the conditions under which people and animals succeed or fail in naming things and (2) the conditions under which bidirectional associations are formed between inputs (objects, pictures of objects, seen or heard names of objects) and outputs (spoken names of objects, multimodal operations on objects). The "stimulus equivalence" that H & L single out is really just the reflexive, symmetric and transitive property of pairwise associations among the above. This is real and of some interest, but it unfortunately casts very little light on symbolization and language in general, and naming capacity in particular. The associative equivalence between name and object is trivial in relation to the real question, which is: How do we (or any system that can do it) manage to connect names to things correctly (Harnad 1987, 1990, 1992)? The experimental analysis of naming behavior begs this question entirely, simply taking it for granted that the connection is somehow successfully accomplished.
It is hypothesized that words originated as the names of perceptual categories and that two forms of representation underlying perceptual categorization -- iconic and categorical representations -- served to ground a third, symbolic, form of representation. The third form of representation made it possible to name and describe our environment, chiefly in terms of categories, their memberships, and their invariant features. Symbolic representations can be shared because they are intertranslatable. Both categorization and translation are approximate rather than exact, but the approximation can be made as close as we wish. This is the central property of that universal mechanism for sharing descriptions that we call natural language.
Libet, Gleason, Wright, & Pearl (1983) asked participants to report the moment at which they freely decided to initiate a pre-specified movement, based on the position of a red marker on a clock. Using event-related potentials (ERPs), Libet found that the subjective feeling of deciding to perform a voluntary action came after the onset of the motor “readiness potential,” RP). This counterintuitive conclusion poses a challenge for the philosophical notion of free will. Faced with these findings, Libet (1985) proposed that (...) conscious volitional control might operate as a selector and a controller of volitional processes rather than as an initiator of them. (shrink)
I am going to attempt to argue that, given certain premises, there are reasons, not only empirical, but also logical, for expecting a certain division of labor in the processing of information by the human brain. This division of labor consists specifically of a functional bifurcation into what may be called, to a first approximation, "verbal" and "nonverbal" modes of information-processing. That this dichotomy is not quite satisfactory, however, will be one of the principal conclusions of this chapter, for I shall attempt to show that metaphor, which in its most common guise is a literary, and hence a fortiori a "verbal" phenomenon, may in fact be more a function of the "nonverbal" than the "verbal" mode. (For alternative attempts to account for cognitive lateralization, see e.g. Bever, 1975; Wickelgren, 1975; Pendse, 1978.)