Abstract
John Searle has argued that the aim of strong AI to create a thinking computer is misguided. Searle's "Chinese Room Argument" purports to show that syntax does not suffice for semantics and that computer programs as such must fail to have intrinsic intentionality. But we are not mainly interested in the program itself; we are interested in the implementation of the program in some material. From the fact that computer programs are defined syntactically, it does not follow by necessity that their implementation cannot suffice for semantics. Perhaps our world is a world in which any implementation of the right computer program will create a system with intrinsic intentionality, in which case Searle's "Chinese Room Scenario" is empirically impossible. But perhaps our world is a world in which Searle's "Chinese Room Scenario" is empirically possible, and the silicon basis of modern-day computers is one kind of material unsuited to give rise to intrinsic intentionality. The metaphysical question turns out to be a question of what kind of world we are in, and I argue that in this respect we do not know our model address. The "Model Address Argument" does not ensure that strong AI will succeed, but it shows that Searle's challenge to the research program of strong AI fails in its objectives.