Does mind piggyback on robotic and symbolic capacity?

In H. Morowitz & J. Singer (eds.), The Mind, the Brain, and Complex Adaptive Systems. Addison Wesley (1995)

Abstract

Cognitive science is a form of "reverse engineering" (as Dennett has dubbed it). We are trying to explain the mind by building (or explaining the functional principles of) systems that have minds. A "Turing" hierarchy of empirical constraints can be applied to this task, from t1, toy models that capture only an arbitrary fragment of our performance capacity, to T2, the standard "pen-pal" Turing Test (total symbolic capacity), to T3, the Total Turing Test (total symbolic plus robotic capacity), to T4 (T3 plus internal [neuromolecular] indistinguishability). All scientific theories are underdetermined by data. What is the right level of empirical constraint for cognitive theory? I will argue that T2 is underconstrained (because of the Symbol Grounding Problem and Searle's Chinese Room Argument) and that T4 is overconstrained (because we don't know what neural data, if any, are relevant). T3 is the level at which we solve the "other minds" problem in everyday life, the one at which evolution operates (the Blind Watchmaker is no mind-reader either), and the one at which symbol systems can be grounded in the robotic capacity to name and manipulate the objects their symbols are about. I will illustrate this with a toy model for an important component of T3 -- categorization -- using neural nets that learn category invariance by "warping" similarity space the way it is warped in human categorical perception: within-category similarities are amplified and between-category similarities are attenuated. This analog "shape" constraint is the grounding inherited by the arbitrarily shaped symbol that names the category and by all the symbol combinations it enters into. No matter how tightly one constrains any such model, however, it will always be more underdetermined than normal scientific and engineering theory. This will remain the ineliminable legacy of the mind/body problem.
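
The "warping" effect described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual model: the 1-D stimulus continuum, the 1-8-1 backprop net, and all variable names below are illustrative assumptions. A small net is trained to sort a continuum into two categories, and pairwise distances in its hidden layer are compared with distances in input space; categorical-perception-style warping appears as within-category compression and between-category separation.

```python
# Minimal sketch (not the authors' original model): train a tiny backprop net
# to categorize points on a 1-D continuum, then compare within- vs.
# between-category distances in input space and in the hidden layer.
import numpy as np

rng = np.random.default_rng(0)

# Stimuli: 40 points on a 1-D continuum in [0, 1]; category boundary at 0.5.
x = np.linspace(0.0, 1.0, 40).reshape(-1, 1)
y = (x[:, 0] > 0.5).astype(float).reshape(-1, 1)
n = len(x)

# Tiny 1-8-1 sigmoid network trained by plain batch gradient descent.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(5000):
    h = sig(x @ W1 + b1)              # hidden representation of each stimulus
    p = sig(h @ W2 + b2)              # predicted category
    d2 = (p - y) * p * (1 - p)        # backprop of squared error
    d1 = (d2 @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d2) / n; b2 -= lr * d2.mean(0)
    W1 -= lr * (x.T @ d1) / n; b1 -= lr * d1.mean(0)

def mean_dist(reps, group_a, group_b):
    """Mean Euclidean distance between representation pairs across two groups."""
    return np.mean([np.linalg.norm(u - v) for u in reps[group_a] for v in reps[group_b]])

h = sig(x @ W1 + b1)
cat0 = y[:, 0] == 0.0
for name, reps in (("input space", x), ("hidden space", h)):
    within = (mean_dist(reps, cat0, cat0) + mean_dist(reps, ~cat0, ~cat0)) / 2.0
    between = mean_dist(reps, cat0, ~cat0)
    print(f"{name:12s} within-category: {within:.3f}  between-category: {between:.3f}")
```

On a typical run the hidden layer's ratio of within-category to between-category distance drops well below the corresponding input-space ratio, which is the similarity-space warping the abstract describes; exact numbers depend on the random seed and training settings.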

Links

PhilArchive

Author's Profile

Stevan Harnad
McGill University

Citations of this work

Minds, machines and Turing: The indistinguishability of indistinguishables. Stevan Harnad - 2000 - Journal of Logic, Language and Information 9 (4): 425-445.
