Abstract
I argue that consciousness is an aspect of an agent's intelligence, hence of its ability to deal adaptively with the world. In particular, it allows the agent to note and correct its errors, recognizing them as actions performed by the agent itself. This in turn requires a robust self-concept as part of the agent's world model; the appropriate notion of self here is a special one, allowing for a very strong kind of self-reference. It also requires the capability to come to see that world model as residing in the agent's belief base, while then representing the actual world as possibly different, i.e., forming a new world model. This suggests particular computational mechanisms by which consciousness occurs, mechanisms that conceivably could be discovered by neuroscientists, as well as built into artificial systems that may need such capabilities. Consciousness, then, is not an epiphenomenon at all, but rather a key part of the functional architecture of suitably intelligent agents, and hence as amenable to study as any other architectural feature. I also argue that ignorance of how subjective states could be essentially functional does not itself lend credibility to the view that such states are not essentially functional; the strong self-reference proposal here is one possible functional explanation of consciousness.
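To make the proposed architecture concrete, the following is a minimal sketch of an agent that (i) keeps a world model containing a token for itself, (ii) detects a mismatch between prediction and observation, (iii) attributes the error to its own mistaken belief rather than to the world, and (iv) reifies the old model as belief while forming a new world model. All class and method names here are hypothetical, invented purely for illustration; the paper itself specifies no implementation.

```python
# Illustrative sketch only: names and structure are assumptions, not the paper's method.
from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """The agent's current representation of the world (including itself)."""
    facts: dict = field(default_factory=dict)

    def predict(self, action: str):
        """Predicted outcome of an action under this model."""
        return self.facts.get(action)


@dataclass
class Agent:
    model: WorldModel
    belief_base: list = field(default_factory=list)

    def act_and_observe(self, action: str, observed):
        predicted = self.model.predict(action)
        if predicted != observed:
            # Strong self-reference: the agent records the error as *its own*,
            # i.e., as a fact about what "I" believed and did.
            self.belief_base.append(
                f"I believed {action!r} -> {predicted!r}; the world shows {observed!r}"
            )
            # Reify the old model as mere belief and form a new world model
            # representing the actual world as different from what was believed.
            self.model = WorldModel(facts={**self.model.facts, action: observed})


# Usage: the agent predicts wrongly, notes the error as its own, and revises.
agent = Agent(model=WorldModel(facts={"push_door": "opens"}))
agent.act_and_observe("push_door", "stays_shut")
print(agent.belief_base)   # the error, recorded as the agent's own mistaken belief
print(agent.model.facts)   # revised world model: {'push_door': 'stays_shut'}
```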