Abstract
The symbol grounding problem is concerned with the question of how the knowledge used in AI programs, expressed as tokens of one form or another, or simply symbols, could be grounded in the outside world. By grounding the symbols, it is meant that the system will know the actual objects, events, or states of affairs in the world to which each symbol refers and thus be worldly-wise. Solving this problem, it was hoped, would enable the program to understand its own actions and hence be truly intelligent. The problem became more acute after a challenge posed by Searle in his now famous Chinese Room Gedanken experiment. Searle argued that no AI program can be said to understand or have other cognitive states if all it does is formal symbol manipulation.