Abstract
In his most comprehensive book on the subject, Roger Penrose argues that there are aspects of human understanding which could not, in principle, be attained by any purely computational system. His central argument relies crucially on oft-cited theorems proven by Gödel and Turing. However, that key argument has been the subject of numerous trenchant critiques, which is unfortunate if one believes Penrose's conclusions to be plausible. In the present article, alternative arguments are offered in support of Penrose-like conclusions. It is argued here that a purely computational agent, lacking conscious awareness, would be incapable of possessing crucial concepts and of understanding certain kinds of geometrically based proofs. Specifically, it is argued that the acquisition of human-like concepts of countable and non-denumerable infinities, and human-like comprehension of a particular geometrically motivated proof, require conscious apprehension of the subject matter involved. This does not preclude the possibility that a computational agent might come to possess the requisite consciousness, but it is argued that if such consciousness does arise within the agent, it does so, at best, as an emergent, contingent side-effect of the underlying processes involved.