Any creature that must move around in its environment to find nutrients and mates, in order to survive and reproduce, faces the problem of sensorimotor control. A solution to this problem requires an on-board control mechanism that can shape the creature’s behaviour so as to render it “appropriate” to the conditions that obtain. There are at least three ways in which such a control mechanism can work, and Nature has exploited them all. The first and most basic way is for a creature to bump into the things in its environment, and then, depending on what has been encountered, seek to modify its behaviour accordingly. Such an approach is risky, however, since some things in the environment are distinctly unfriendly. A second and better way, therefore, is for a creature to exploit ambient forms of energy that carry information about the distal structure of the environment. This is an improvement on the first method since it enables the creature to respond to the surroundings without actually bumping into anything. Nonetheless, this second method also has its limitations, one of which is that the information conveyed by such ambient energy is often impoverished, ambiguous and intermittent.
When cognitive scientists apply computational theory to the problem of phenomenal consciousness, as many of them have been doing recently, there are two fundamentally distinct approaches available. Either consciousness is to be explained in terms of the nature of the representational vehicles the brain deploys; or it is to be explained in terms of the computational processes defined over these vehicles. We call versions of these two approaches _vehicle_ and _process_ theories of consciousness, respectively. However, while there may be space for vehicle theories of consciousness in cognitive science, they are relatively rare. This is because of the influence exerted, on the one hand, by a large body of research which purports to show that the explicit representation of information in the brain and conscious experience are _dissociable_, and on the other, by the _classical_ computational theory of mind – the theory that takes human cognition to be a species of symbol manipulation. But two recent developments in cognitive science combine to suggest that a reappraisal of this situation is in order. First, a number of theorists have recently been highly critical of the experimental methodologies employed in the dissociation studies – so critical, in fact, that it’s no longer reasonable to assume that the dissociability of conscious experience and explicit representation has been adequately demonstrated. Second, classicism, as a theory of human cognition, is no longer as dominant in cognitive science as it once was. It now has a lively competitor in the form of _connectionism_; and connectionism, unlike classicism, does have the computational resources to support a robust vehicle theory of consciousness. In this paper we develop and defend this connectionist vehicle theory of consciousness. It takes the form of the following simple empirical hypothesis: _phenomenal experience consists in the explicit representation of information in neurally realized PDP networks_.
Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we examine what might be regarded as the “conventional” account of connectionist computation. We show why this account is inadequate and hence fosters the suspicion that connectionist networks aren’t genuinely computational. Lastly, we turn to the principal task of the paper: the development of a more robust portrait of connectionist computation. The basis of this portrait is an explanation of the representational capacities of connection weights, supported by an analysis of the weight configurations of a series of simulated neural networks.
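As a rough illustration of the kind of analysis mentioned in the final sentence (a minimal NumPy sketch, not the authors’ actual simulations), the following toy feedforward network learns XOR, after which its weight configuration can be printed and inspected as the place where the trained network’s representational capacities reside:

```python
# Illustrative toy only: a 2-4-1 sigmoid network trained on XOR by gradient
# descent. After training, what the network "knows" is carried by its
# connection weights, which can be printed and inspected.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1)                      # hidden activation pattern
    out = sigmoid(h @ W2)                    # output activation pattern
    d_out = (out - y) * out * (1 - out)      # backpropagated output error
    d_h = (d_out @ W2.T) * h * (1 - h)       # backpropagated hidden error
    W2 -= 1.0 * h.T @ d_out
    W1 -= 1.0 * X.T @ d_h

# Outputs should approach [0, 1, 1, 0] for most random initialisations.
print("outputs:", sigmoid(sigmoid(X @ W1) @ W2).ravel().round(2))
print("learned weight configuration:")
print("W1 =\n", W1.round(2))
print("W2 =\n", W2.round(2))
```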
Reformers urge that representation no longer earns its explanatory keep in cognitive science, and that it is time to discard this troublesome concept. In contrast, we hold that without representation cognitive science is utterly bereft of tools for explaining natural intelligence. In order to defend the latter position, we focus on the explanatory role of representation in computation. We examine how the methods of digital and analog computation are used to model a relatively simple target system, and show that representation plays an ineliminable explanatory role in both cases. We conclude that, to the extent that biological systems engage in computation, representation is destined to play an explanatory role in cognitive science.
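To make the digital half of this contrast concrete, here is a minimal sketch (not the paper’s own example; it assumes a mass-on-a-spring target system). The simulation counts as a model of the spring only under an interpretation on which its variables stand in for physical magnitudes; an analog model, such as a circuit whose voltage covaries with displacement, would trade on the same representational mapping while realising it with continuous quantities:

```python
# Illustrative toy only: a digital (discrete-step) model of a simple target
# system, a mass on a spring. The variables x, v and a are interpreted as
# representing displacement, velocity and acceleration; without that
# representational mapping the loop is just arithmetic on numbers.
import math

def simulate_spring(x0, v0, k, m, dt=0.001, steps=10000):
    """Step the represented state forward with semi-implicit Euler integration."""
    x, v = x0, v0
    for _ in range(steps):
        a = -(k / m) * x   # Hooke's law, stated over the represented magnitudes
        v += a * dt
        x += v * dt
    return x

t = 0.001 * 10000  # total represented time, in seconds
print("simulated displacement:", round(simulate_spring(1.0, 0.0, k=4.0, m=1.0), 3))
print("analytic  displacement:", round(math.cos(math.sqrt(4.0) * t), 3))
```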
It is commonplace for both philosophers and cognitive scientists to express their allegiance to the "unity of consciousness". This is the claim that a subject’s phenomenal consciousness, at any one moment in time, is a single thing. This view has had a major influence on computational theories of consciousness. In particular, the literature is dominated by what we call single-track theories: theories which contend that our conscious experience is the result of a single consciousness-making process or mechanism in the brain. We argue that the orthodox view is quite wrong: phenomenal experience is not a unity, in the sense of being a single thing at each instant. It is a multiplicity, an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. Consequently, cognitive science is in need of a multi-track theory of consciousness: a computational model that acknowledges both the manifold nature of experience, and its distributed neural basis.
In _Connectionism and the Philosophy of Psychology_, Horgan and Tienson (1996) argue that cognitive processes, pace classicism, are not governed by exceptionless, representation-level rules; they are instead the work of defeasible cognitive tendencies subserved by the non-linear dynamics of the brain’s neural networks. Many theorists are sympathetic to the dynamical characterisation of connectionism and the general (re)conception of cognition that it affords. But in all the excitement surrounding the connectionist revolution in cognitive science, it has largely gone unnoticed that connectionism adds to the traditional focus on computational processes a new focus on the vehicles of mental representation: the entities that carry content through the mind. Indeed, if Horgan and Tienson’s dynamical characterisation of connectionism is on the right track, then so intimate is the relationship between computational processes and representational vehicles that connectionist cognitive science is committed to a resemblance theory of mental content.
When it comes to applying computational theory to the problem of phenomenal consciousness, cognitive scientists appear to face a dilemma. The only strategy that seems to be available is one that explains consciousness in terms of special kinds of computational processes. But such theories, while they dominate the field, have counter-intuitive consequences; in particular, they force one to accept that phenomenal experience is composed of information processing effects. For cognitive scientists, therefore, it seems to come down to a choice between a counter-intuitive theory and no theory at all. We offer a way out of this dilemma. We argue that the computational theory of mind doesn't force cognitive scientists to explain consciousness in terms of computational processes, as there is an alternative strategy available: one that focuses on the representational vehicles that encode information in the brain. This alternative approach to consciousness allows us to do justice to the standard intuitions about phenomenal experience, yet remain within the confines of cognitive science.
We think the best prospect for a naturalistic explanation of phenomenal consciousness is to be found at the confluence of two influential ideas about the mind. The first is the _computational theory of mind_: the theory that treats human cognitive processes as disciplined operations over neurally realised representing vehicles. The second is the _representationalist theory of consciousness_: the theory that takes the phenomenal character of conscious experiences (the “what-it-is-likeness”) to be constituted by their representational content. Together these two theories suggest that phenomenal consciousness might be explicable in terms of the representational content of the neurally realised representing vehicles that are generated and manipulated in the course of cognition. The simplest and most elegant hypothesis that one might entertain in this regard is that conscious experiences are identical to (i.e., are one and the same as) the brain’s representing vehicles.
In this paper we defend a position we call radical connectionism. Radical connectionism claims that cognition _never_ implicates an internal symbolic medium, not even when natural language plays a part in our thought processes. On the face of it, such a position renders the human capacity for abstract thought quite mysterious. However, we argue that connectionism is committed to an analog conception of neural computation, and that representation of the abstract is no more problematic for a system of analog vehicles than for a symbol system. Natural language is therefore not required as a representational medium for abstract thought. Since natural language is arguably not a representational medium _at all_, but a conventionally governed scheme of communicative signals, we suggest that the role of internalised (i.e., self-directed) language is best conceived in terms of the coordination and control of cognitive activities within the brain.
Cognitive science is founded on the conjecture that natural intelligence can be explained in terms of computation. Yet, notoriously, there is no consensus among philosophers of cognitive science as to how computation should be characterised. While there are subtle differences between the various accounts of computation found in the literature, the largest fracture exists between those that unpack computation in semantic terms (and hence view computation as the processing of representations) and those, such as that defended by Chalmers (2011), that cleave towards a purely syntactic formulation (and hence view computation in terms of abstract functional organisation). It will be the main contention of this paper that this dispute arises because contemporary computer science is an amalgam of two different historical traditions, each of which has developed its own proprietary conception of computation. Once these historical trajectories have been properly delineated, and the motivations behind the associated conceptions of computation revealed, it becomes a little clearer which should form the foundation for cognitive science.
The connectionist vehicle theory of phenomenal experience in the target article identifies consciousness with the brain’s explicit representation of information in the form of stable patterns of neural activity. Commentators raise concerns about both the conceptual and empirical adequacy of this proposal. On the former front they worry about our reliance on vehicles, on representation, on stable patterns of activity, and on our identity claim. On the latter front their concerns range from the general plausibility of a vehicle theory to our specific attempts to deal with the dissociation studies. We address these concerns, and then finish by considering whether the vehicle theory we have defended has a coherent story to tell about the active, unified subject to whom conscious experiences belong.
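For readers coming to this reply without the target article, the following sketch (a toy Hopfield-style network in NumPy, offered only as an illustration and not as the target article’s model) shows what a “stable pattern of neural activity” amounts to computationally: the network settles from a corrupted cue into a fixed activation pattern.

```python
# Illustrative toy only: a small Hopfield-style network relaxes from a noisy
# cue into a stable pattern of activity -- the kind of stable activation
# pattern the vehicle theory identifies with explicit representation.
import numpy as np

# Two stored bipolar (+1/-1) patterns over 8 units.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
])

# Hebbian weight matrix with zero self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted version of the first stored pattern.
state = patterns[0].copy()
state[[1, 4]] *= -1

# Update until the activity pattern no longer changes, i.e. is stable.
for _ in range(20):
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):
        break
    state = new_state

print("settled pattern:  ", state)
print("first stored item:", patterns[0])
```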
The project of the paper is a critical examination of the "strong thesis of eliminative materialism" in the philosophy of mind: the claim that all the mental entities that constitute the framework of commonsense psychology are, in principle at least, eliminable from our ontology. The central conclusion reached is that the traditional formulation of this thesis is demonstrably untenable, as it rests on a mistaken view of the relationship between our psychological self-knowledge and language.
O'Regan & Noë (O&N) fail to address adequately the two most historically important reasons for seeking to explain visual experience in terms of internal representations. They are silent about the apparently inferential nature of perception, and mistaken about the significance of the phenomenology accompanying dreams, hallucinations, and mental imagery.
This commentary focuses on the one major ecumenical theme propounded in Andy Clark's _Being There_ that I find difficult to accept: Clark's advocacy, especially in the third and final part of the book, of the extended nature of the embedded, embodied mind.
In recent years, a number of contemporary proponents of psychoanalysis have sought to derive support for their conjectures about the _dynamic_ unconscious from the empirical evidence in favor of the _cognitive_ unconscious. It is our contention, however, that far from supporting the dynamic unconscious, recent work in cognitive science suggests that the time has come to dispense with this concept altogether. In this paper we defend this claim in two ways. First, we argue that any attempt to shore up the dynamic unconscious with the cognitive unconscious is bound to fail, simply because the latter, as it is understood in contemporary cognitive science, is incompatible with the former, as it is traditionally conceived by psychoanalytic theory. Second, we show how psychological phenomena traditionally cited as evidence for the operation of a dynamic unconscious can be accommodated more parsimoniously by other means.
In restricting his analysis to the causal relations of functionalism, on the one hand, and the neurophysiological realizers of biology, on the other, Palmer has overlooked an alternative conception of the relationship between color experience and the brain: one that liberalises the relation between mental phenomena and their physical implementation without collapsing into functionalism.
Dienes & Perner offer us a theory of explicit and implicit knowledge that promises to systematise a large and diverse body of research in cognitive psychology. Their advertised strategy is to unpack this distinction in terms of explicit and implicit representation. But when one digs deeper one finds the “Higher-Order Thought” theory of consciousness doing much of the work. This reduces both the plausibility and usefulness of their account. We think their strategy is broadly correct, but that consensus on the explicit/implicit knowledge distinction is still a fair way off.
One of the most striking manifestations of schizophrenia is thought insertion. People suffering from this delusion believe they are not the author of thoughts which they nevertheless own as experiences. It seems that a person’s sense of agency and their sense of the boundary between mind and world can come apart. Schizophrenia thus vividly demonstrates that self-awareness is a complex construction of the brain. This point is widely appreciated. What is not so widely appreciated is how radically schizophrenia challenges our assumptions about the nature of the self. Most theorists endorse the traditional doctrine of the unity of consciousness, according to which a normal human brain generates a single consciousness at any instant in time. In this paper we argue that phenomenal consciousness at each instant is actually a multiplicity: an aggregate of phenomenal elements, each of which is the product of a distinct consciousness-making mechanism in the brain. We then consider how certain aspects of self might emerge from this manifold substrate, and speculate about the origin of thought insertion.
The distinction at the heart of van Gelder’s target article is one between digital computers and dynamical systems. But this distinction conflates two more fundamental distinctions in cognitive science that should be kept apart. When this conflation is undone, it becomes apparent that the “computational hypothesis” (CH) is not as dominant in contemporary cognitive science as van Gelder contends; nor has the “dynamical hypothesis” (DH) been neglected.
We are sympathetic with the broad aims of Perruchet & Vinter's “mentalistic” framework. But it is implausible to claim, as they do, that human cognition can be understood without recourse to unconsciously represented information. In our view, this strategy forsakes the only available mechanistic understanding of intelligent behaviour. Our purpose here is to plot a course midway between the classical unconscious and Perruchet & Vinter's own noncomputational associationism.
Puccetti argues that Dennett's views on split brains are defective. First, we criticise Puccetti's argument. Then we distinguish persons, minds, consciousnesses, selves and personalities. Next we introduce the concepts of part-persons and part-consciousnesses, and apply them to clarifying the situation. Finally, we criticise Dennett for contributing to the confusion.
Kubovy and Epstein distinguish between systems that follow rules, and those that merely instantiate them. They regard compliance with the principles of kinematic geometry in apparent motion as a case of instantiation. There is, however, some reason to believe that the human visual system internalizes the principles of kinematic geometry, even if it does not explicitly represent them. We offer functional resemblance as a criterion for internal representation. [Kubovy & Epstein].
Stich begins his paper "What is a Theory of Mental Representation?" by noting that while there is a dizzying range of theories of mental representation in today's philosophical marketplace, there is very little self-conscious reflection about what a theory of mental representation is supposed to do. This is quite remarkable, he thinks, because if we bother to engage in such reflection, some very surprising conclusions begin to emerge. The most surprising conclusion of all, according to Stich, is that most of the philosophers in this field are undertaking work that is quite futile: "It is my contention that most of the players in this very crowded field have _no_ coherent project that could possibly be pursued successfully with the methods they are using." Stich readily admits that this is a startling conclusion; so startling, he thinks, that some may even take it as an indication that he has simply "failed to figure out what those who are searching for a theory of mental representation are up to". But it is a conclusion that he is willing to stand by, and he sets about defending it in the body of his paper.
Martínez-Manrique contends that we overlook a possible nonconnectionist vehicle theory of consciousness. We argue that the position he develops is better understood as a hybrid vehicle/process theory. We assess this theory and in doing so clarify the commitments of both vehicle and process theories of consciousness.
Carruthers presents evidence concerning the cross-modular integration of information in human subjects which appears to support the “cognitive conception of language.” According to this conception, language is not just a means of communication, but also a representational medium of thought. However, Carruthers overlooks the possibility that language, in both its communicative and cognitive roles, is a nonrepresentational system of conventional signals – that words are not a medium we think in, but a tool we think with. The evidence he cites is equivocal when it comes to choosing between the cognitive conception and this radical communicative conception of language.