Any creature that must move around in its environment to find nutrients and mates, in order to survive and reproduce, faces the problem of sensorimotor control. A solution to this problem requires an on-board control mechanism that can shape the creature’s behaviour so as to render it “appropriate” to the conditions that obtain. There are at least three ways in which such a control mechanism can work, and Nature has exploited them all. The first and most basic way is for a creature to bump into the things in its environment, and then, depending on what has been encountered, seek to modify its behaviour accordingly. Such an approach is risky, however, since some things in the environment are distinctly unfriendly. A second and better way, therefore, is for a creature to exploit ambient forms of energy that carry information about the distal structure of the environment. This is an improvement on the first method since it enables the creature to respond to the surroundings without actually bumping into anything. Nonetheless, this second method also has its limitations, one of which is that the information conveyed by such ambient energy is often impoverished, ambiguous and intermittent.
When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches _vehicle_ and _process_ theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are _dissociable_, and on the other, by the _classical_ computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of _connectionism_; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: _phenomenal experience consists in the explicit representation of information in neurally realized PDP networks_.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks aren’t genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
It is commonplace for both philosophers and cognitive scientists to express their allegiance to the "unity of consciousness". This is the claim that a subject’s phenomenal consciousness, at any one moment in time, is a single thing. This view has had a major influence on computational theories of consciousness. In particular, what we call single-track theories dominate the literature, theories which contend that our conscious experience is the result of a single consciousness-making process or mechanism in the brain. We argue that the orthodox view is quite wrong: phenomenal experience is not a unity, in the sense of being a single thing at each instant. It is a multiplicity, an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. Consequently, cognitive science is in need of a multi-track theory of consciousness; a computational model that acknowledges both the manifold nature of experience, and its distributed neural basis.
We think the best prospect for a naturalistic explanation of phenomenal consciousness is to be found at the confluence of two influential ideas about the mind. The first is the _computational theory of mind_: the theory that treats human cognitive processes as disciplined operations over neurally realised representing vehicles. The second is the _representationalist theory of consciousness_: the theory that takes the phenomenal character of conscious experiences (the “what-it-is-likeness”) to be constituted by their representational content. Together these two theories suggest that phenomenal consciousness might be explicable in terms of the representational content of the neurally realised representing vehicles that are generated and manipulated in the course of cognition. The simplest and most elegant hypothesis that one might entertain in this regard is that conscious experiences are identical to (i.e., are one and the same as) the brain’s representing vehicles.
One of the principal tasks Dennett sets himself in Consciousness Explained is to demolish the Cartesian theater model of phenomenal consciousness, which in its contemporary garb takes the form of Cartesian materialism: the idea that conscious experience is a process of presentation realized in the physical materials of the brain. The now standard response to Dennett is that, in focusing on Cartesian materialism, he attacks an impossibly naive account of consciousness held by no one currently working in cognitive science or the philosophy of mind. Our response is quite different. We believe that, once properly formulated, Cartesian materialism is no straw man. Rather, it is an attractive hypothesis about the relationship between the computational architecture of the brain and phenomenal consciousness, and hence one that is worthy of further exploration. Consequently, our primary aim in this paper is to defend Cartesian materialism from Dennett’s assault. We do this by showing that Dennett’s argument against this position is founded on an implicit assumption (about the relationship between phenomenal experience and information coding in the brain), which while valid in the context of classical cognitive science, is not forced on connectionism.
In this paper we defend a position we call radical connectionism. Radical connectionism claims that cognition _never_ implicates an internal symbolic medium, not even when natural language plays a part in our thought processes. On the face of it, such a position renders the human capacity for abstract thought quite mysterious. However, we argue that connectionism is committed to an analog conception of neural computation, and that representation of the abstract is no more problematic for a system of analog vehicles than for a symbol system. Natural language is therefore not required as a representational medium for abstract thought. Since natural language is arguably not a representational medium _at all_, but a conventionally governed scheme of communicative signals, we suggest that the role of internalised (i.e., self-directed) language is best conceived in terms of the coordination and control of cognitive activities within the brain.
The connectionist vehicle theory of phenomenal experience in the target article identifies consciousness with the brain’s explicit representation of information in the form of stable patterns of neural activity. Commentators raise concerns about both the conceptual and empirical adequacy of this proposal. On the former front they worry about our reliance on vehicles, on representation, on stable patterns of activity, and on our identity claim. On the latter front their concerns range from the general plausibility of a vehicle theory to our specific attempts to deal with the dissociation studies. We address these concerns, and then finish by considering whether the vehicle theory we have defended has a coherent story to tell about the active, unified subject to whom conscious experiences belong.
In restricting his analysis to the causal relations of functionalism, on the one hand, and the neurophysiological realizers of biology, on the other, Palmer has overlooked an alternative conception of the relationship between color experience and the brain - one that liberalises the relation between mental phenomena and their physical implementation, without generating functionalism.
Dienes & Perner offer us a theory of explicit and implicit knowledge that promises to systematise a large and diverse body of research in cognitive psychology. Their advertised strategy is to unpack this distinction in terms of explicit and implicit representation. But when one digs deeper one finds the “Higher-Order Thought” theory of consciousness doing much of the work. This reduces both the plausibility and usefulness of their account. We think their strategy is broadly correct, but that consensus on the explicit/implicit knowledge distinction is still a fair way off.
One of the most striking manifestations of schizophrenia is thought insertion. People suffering from this delusion believe they are not the author of thoughts which they nevertheless own as experiences. It seems that a person’s sense of agency and their sense of the boundary between mind and world can come apart. Schizophrenia thus vividly demonstrates that self-awareness is a complex construction of the brain. This point is widely appreciated. What is not so widely appreciated is how radically schizophrenia challenges our assumptions about the nature of the self. Most theorists endorse the traditional doctrine of the unity of consciousness, according to which a normal human brain generates a single consciousness at any instant in time. In this paper we argue that phenomenal consciousness at each instant is actually a multiplicity: an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. We then consider how certain aspects of self might emerge from this manifold substrate, and speculate about the origin of thought insertion.
Green offers us two options: either connectionist models are literal models of brain activity or they are mere instruments, with little or no ontological significance. According to Green, only the first option renders connectionist models genuinely explanatory. I think there is a third possibility. Connectionist models are not literal models of brain activity, but neither are they mere instruments. They are abstract, _idealised_ models of the brain that are capable of providing genuine explanations of cognitive phenomena.
Consciousness is a pretty sexy topic right now, as the plethora of recent books on the subject demonstrates. Everyone is having a go at it: philosophers, psychologists, neuroscientists and physicists, to mention just a few. And for every discipline or sub-discipline that pretends to some insight on the matter we find not only a different explanatory strategy, but a different take on the explanandum – there is widespread disagreement about what a theory of consciousness should actually explain. However, one thing seems to be agreed by all concerned: consciousness, whatever it is, is deeply mysterious.
Martínez-Manrique contends that we overlook a possible nonconnectionist vehicle theory of consciousness. We argue that the position he develops is better understood as a hybrid vehicle/process theory. We assess this theory and in doing so clarify the commitments of both vehicle and process theories of consciousness.