Grounding symbols in the analog world with neural nets

Think (misc) 2 (1):12-78 (1993)

Abstract

Harnad's main argument can be roughly summarised as follows: because of Searle's Chinese Room argument, symbol systems by themselves are insufficient to exhibit cognition, since their symbols are not grounded in the real world and are hence without meaning. However, a symbol system connected to the real world through transducers receiving sensory data, with neural nets translating these data into sensory categories, would not be subject to the Chinese Room argument. Harnad's article is not only the starting point for the present debate but also a contribution to a long-standing discussion of such questions as: Can a computer think? If so, would this be solely by virtue of its program? Is the Turing Test appropriate for deciding whether a computer thinks?

Author's Profile

Stevan Harnad
McGill University
