An emerging class of theories concerning the functional structure of the brain takes the reuse of neural circuitry for various cognitive purposes to be a central organizational principle. According to these theories, it is quite common for neural circuits established for one purpose to be exapted (exploited, recycled, redeployed) during evolution or normal development and put to different uses, often without losing their original functions. Neural reuse theories thus differ from the usual understanding of the role of neural plasticity (which is, after all, a kind of reuse) in brain organization along the following lines: According to neural reuse, circuits can continue to acquire new uses after an initial or original function is established; the acquisition of new uses need not involve unusual circumstances such as injury or loss of established function; and the acquisition of a new use need not involve (much) local change to circuit structure (e.g., it might involve only the establishment of functional connections to new neural partners). Thus, neural reuse theories offer a distinct perspective on several topics of general interest, such as: the evolution and development of the brain, including (for instance) the evolutionary-developmental pathway supporting primate tool use and human language; the degree of modularity in brain organization; the degree of localization of cognitive function; and the cortical parcellation problem and the prospects (and proper methods to employ) for function-to-structure mapping. The idea also has some practical implications in the areas of rehabilitative medicine and machine interface design.
The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
The nature of cognition is being reconsidered. Instead of emphasizing formal operations on abstract symbols, the new approach foregrounds the fact that cognition is a situated activity, and suggests that thinking beings ought therefore to be considered first and foremost as acting beings. The essay reviews recent work in Embodied Cognition, provides a concise guide to its principles, attitudes, and goals, and identifies the physical grounding project as its central research focus.
In this paper, I summarize an emerging debate in the cognitive sciences over the right taxonomy for understanding cognition – the right theory of and vocabulary for describing the structure of the mind – and the proper role of neuroscientific evidence in specifying this taxonomy. In part because the discussion clearly entails a deep reconsideration of the supposed autonomy of psychology from neuroscience, this is a debate in which philosophers should be interested, with which they should be familiar, and to which they should contribute. Here, I outline some of the positions being advocated, and reflect on some of the possible implications of this work for both scientific and folk psychology.
To accept that cognition is embodied is to question many of the beliefs traditionally held by cognitive scientists. One key question regards the localization of cognitive faculties. Here we argue that for cognition to be embodied, and sometimes embedded, means that cognitive faculties cannot be localized to a brain area alone. We review recent research on neural reuse, the 1/f structure of human activity, tool use, group cognition, and social coordination dynamics that we believe demonstrates how the boundaries between the different areas of the brain, between the brain and body, and between the body and environment are not only blurred but indeterminate. In turn, we propose that cognition is supported by a nested structure of task-specific synergies, which are softly assembled from a variety of neural, bodily, and environmental components (including other individuals), and exhibit interaction-dominant dynamics.
Neural reuse is a form of neuroplasticity whereby neural elements originally developed for one purpose are put to multiple uses. A diverse behavioral repertoire is achieved by means of the creation of multiple, nested, and overlapping neural coalitions, in which each neural element is a member of multiple different coalitions and cooperates with a different set of partners at different times. Neural reuse has profound implications for how we think about our continuity with other species, for how we understand the similarities and differences between psychological processes, and for how best to pursue a unified science of the mind. After Phrenology: Neural Reuse and the Interactive Brain surveys the terrain and advocates for a series of reforms in psychology and cognitive neuroscience. The book argues that, among other things, we should capture brain function in a multidimensional manner, develop a new, action-oriented vocabulary for psychology, and recognize that higher-order cognitive processes are built from complex configurations of already evolved circuitry.
This essay introduces the massive redeployment hypothesis, an account of the functional organization of the brain that centrally features the fact that brain areas are typically employed to support numerous functions. The central contribution of the essay is to outline a middle course between strict localization on the one hand, and holism on the other, in such a way as to account for the supporting data on both sides of the argument. The massive redeployment hypothesis is supported by case studies of redeployment, and compared and contrasted with other theories of the localization of function.
The current essay introduces the guidance theory of representation, according to which the content and intentionality of representations can be accounted for in terms of the way they provide guidance for action. The guidance theory offers a way of fixing representational content that gives the causal and evolutionary history of the subject only an indirect role, and an account of representational error, based on failure of action, that does not rely on any such notions as proper functions, ideal conditions, or normal circumstances. Moreover, because the notion of error is defined in terms of failure of action, the guidance theory meets the “meta-epistemological requirement” that representational error should be potentially detectable by the representing system itself. In this essay, we offer a brief account of the biological origins of representation, a formal characterization of the guidance theory, and some examples of its use, and we show how the guidance theory handles some traditional problem cases for representation: the representation of fictional and abstract entities. Being both representational and action-grounded, the guidance theory may provide some common ground between embodied and cognitivist approaches to the study of the mind.
The massive redeployment hypothesis (MRH) is a theory about the functional topography of the human brain, offering a middle course between strict localization on the one hand and holism on the other. Central to MRH is the claim that cognitive evolution proceeded in a way analogous to component reuse in software engineering, whereby existing components, originally developed to serve some specific purpose, were used for new purposes and combined to support new capacities, without disrupting their participation in existing programs. If the evolution of cognition was indeed driven by such exaptation, then we should be able to make some specific empirical predictions regarding the resulting functional topography of the brain. This essay discusses three such predictions, and some of the evidence supporting them. Then, using this account as a background, the essay considers the implications of these findings for an account of the functional integration of cognitive operations. For instance, MRH suggests that in order to determine the functional role of a given brain area it is necessary to consider its participation across multiple task categories, and not just focus on one, as has been the typical practice in cognitive neuroscience. This change of methodology will motivate (perhaps even necessitate) the development of a new, domain-neutral vocabulary for characterizing the contribution of individual brain areas to larger functional complexes, and direct particular attention to the question of how these various area roles are integrated and coordinated to result in the observed cognitive effect. Finally, the details of the mix of cognitive functions a given area supports should tell us something interesting not just about the likely computational role of that area, but about the nature of and relations between the cognitive functions themselves. For instance, growing evidence of the role of “motor” areas like M1, SMA, and PMC in language processing, and of “language” areas like Broca’s area in motor control, offers the possibility of significantly reconceptualizing the nature both of language and of motor control.
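To make the software-engineering analogy concrete, here is a minimal Python sketch of component reuse; it is purely illustrative (not drawn from the paper), and the function names and the two toy "programs" are hypothetical.

```python
# Illustrative sketch of component reuse: one low-level routine serves two
# unrelated higher-level "programs" without being modified for either,
# analogous to a neural circuit participating in multiple task domains.

def pairwise_distance(a, b):
    """Generic comparison component, originally written for one purpose."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def recognize_face(probe, gallery):
    """One 'program': nearest-neighbor face matching built on the shared component."""
    return min(gallery, key=lambda item: pairwise_distance(probe, item["features"]))

def plan_reach(hand_position, targets):
    """A different 'program': simple motor planning that reuses the same component."""
    return min(targets, key=lambda t: pairwise_distance(hand_position, t))

if __name__ == "__main__":
    gallery = [{"name": "A", "features": [0.1, 0.9]},
               {"name": "B", "features": [0.8, 0.2]}]
    print(recognize_face([0.15, 0.85], gallery)["name"])      # reuse #1
    print(plan_reach([0.0, 0.0], [[1.0, 1.0], [0.2, 0.1]]))   # reuse #2
```

The point of the sketch is only that the shared routine keeps serving its original caller while acquiring a second use, which is the structural feature MRH attributes to brain areas.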
This paper lays out some of the empirical evidence for the importance of neural reuse—the reuse of existing (inherited and/or early-developing) neural circuitry for multiple behavioral purposes—in defining the overall functional structure of the brain. We then discuss in some detail one particular instance of such reuse: the involvement of a local neural circuit in finger awareness, number representation, and other diverse functions. Finally, we consider whether and how the notion of a developmental homology can help us understand the relationships between the cognitive functions that develop out of shared neural supports.
This paper is a summary and evaluation of work presented at the AAAI 2005 Fall Symposium on Machine Ethics, which brought together participants from the fields of Computer Science and Philosophy with the aim of clarifying the nature of this newly emerging field and discussing different approaches one could take toward realizing the ultimate goal of creating an ethical machine.
Recent findings in cognitive science suggest that the epistemic subject is more complex and epistemically porous than is generally pictured. Human knowers are open to the world via multiple channels, each operating for particular purposes and according to its own logic. These findings need to be understood and addressed by the philosophical community. The current essay argues that one consequence of the new findings is to invalidate certain arguments for epistemic anti-realism.
One of the most foundational and continually contested questions in the cognitive sciences is the degree to which the functional organization of the brain can be understood as modular. In its classic formulation, a module was defined as a cognitive sub-system with nine specific properties; the classic module is, among other things, domain specific, encapsulated, and implemented in dedicated neural substrates. Most of the examinations—and especially the criticisms—of the modularity thesis have focused on these properties individually, for instance by finding counterexamples in which otherwise good candidates for cognitive modules are shown to lack domain specificity or encapsulation. The current paper goes beyond the usual approach by asking what some of the broad architectural implications of the modularity thesis might be, and attempting to test for these. The evidence does not favor a modular architecture for the cortex. Moreover, the evidence suggests that the best way to approach the understanding of cognition is not by analyzing and modelling different functional domains in isolation from the others, but rather by looking for points of overlap in their neural implementations, and exploiting these to guide the analysis and decomposition of the functions in question. This has significant implications for the question of how to approach the design and implementation of intelligent artifacts in general, and language-using robots in particular.
Recent trends in the philosophy of mind and cognitive science can be fruitfully characterized as part of the ongoing attempt to come to grips with the very idea of Homo sapiens--an intelligent, evolved, biological agent--and its signature contribution is the emergence of a philosophical anthropology which, contra Descartes and his thinking thing, instead puts doing at the center of human being. Applying this agency-oriented line of thinking to the problem of representation, this paper introduces the Guidance Theory, according to which the content and intentionality of representations can be accounted for in terms of the way they provide guidance for action. We offer a brief account of the motivation for the theory, and a formal characterization.
This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. This basic strategy of self-guided learning is termed the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. This paper (a) argues that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) details the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describes specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outlines both short-term and long-term research agendas.
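As a rough, self-contained illustration of the notice-assess-guide strategy described above, the following Python sketch is ours rather than the authors' implementation; the expectation model, anomaly labels, and suggested repairs are hypothetical placeholders.

```python
# Minimal skeleton of a metacognitive loop: notice when something is amiss,
# assess the anomaly, and guide a response into place.

class MetacognitiveLoop:
    def __init__(self, expected_reward=1.0, tolerance=0.2):
        self.expected_reward = expected_reward   # the system's expectation (placeholder)
        self.tolerance = tolerance

    def note(self, observed_reward):
        """Notice: does the observation violate expectations?"""
        return abs(observed_reward - self.expected_reward) > self.tolerance

    def assess(self, observed_reward):
        """Assess: crudely classify the anomaly."""
        return "shortfall" if observed_reward < self.expected_reward else "windfall"

    def guide(self, anomaly_type):
        """Guide: recommend a change to the host system's decision-making components."""
        if anomaly_type == "shortfall":
            return {"action": "relearn", "boost_exploration": True}
        return {"action": "revise_expectations"}


if __name__ == "__main__":
    mcl = MetacognitiveLoop(expected_reward=1.0)
    for reward in [0.95, 1.05, 0.3]:             # the last value simulates a perturbation
        if mcl.note(reward):
            print(mcl.guide(mcl.assess(reward)))
```

In a fuller system the guide step would feed back into the agent's own reasoning and learning machinery; here it simply returns a recommendation, to keep the skeleton of the loop visible.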
As part of the ongoing attempt to fully naturalize the concept of human being--and, more specifically, to re-center it around the notion of agency--this essay discusses an approach to defining the content of representations in terms ultimately derived from their central, evolved function of providing guidance for action. This 'guidance theory' of representation is discussed in the context of, and evaluated with respect to, two other biologically inspired theories of representation: Dan Lloyd's dialectical theory of representation and Ruth Millikan's biosemantics.
We agree with Heyes that an explanation of human uniqueness must appeal to cultural evolution, and not just genes. Her account, though, focuses narrowly on internal cognitive mechanisms. This causes her to mischaracterize human behavior and to overlook the role of material culture. A more powerful account would view cognitive gadgets as spanning organisms and their environments.
Maintaining adequate performance in dynamic and uncertain settings has been a perennial stumbling block for intelligent systems. Nevertheless, any system intended for real-world deployment must be able to accommodate unexpected change—that is, it must be perturbation tolerant. We have found that metacognitive monitoring and control—the ability of a system to monitor its own decision-making processes and ongoing performance, and to make targeted changes to its beliefs and action-determining components—can play an important role in helping intelligent systems cope with the perturbations that are the inevitable result of real-world deployment. In this article we present the results of several experiments demonstrating the efficacy of metacognition in improving the perturbation tolerance of reinforcement learners, and discuss a general theory of metacognitive monitoring and control, in a form we call the metacognitive loop.
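The sketch below is our own illustration, not the authors' experimental code: a metacognitive monitor is wrapped around a simple epsilon-greedy learner on a two-armed bandit whose rewarding arm is swapped mid-run, and the monitor notices the resulting collapse in recent reward and responds by boosting exploration. All parameter values are arbitrary.

```python
# Metacognitive monitoring around a simple reinforcement learner. Halfway
# through, the rewarding arm is swapped (the "perturbation"); the monitor
# notices the drop in recent reward and re-opens exploration so the learner
# can re-adapt. Illustrative only.

import random

def bandit_reward(arm, perturbed):
    """Arm 0 pays before the perturbation; arm 1 pays after it."""
    best = 1 if perturbed else 0
    return 1.0 if arm == best and random.random() < 0.9 else 0.0

def run(steps=2000, alpha=0.1, window=100):
    q = [0.0, 0.0]
    epsilon = 0.1
    recent, baseline = [], None
    for t in range(steps):
        perturbed = t >= steps // 2
        arm = random.randrange(2) if random.random() < epsilon else q.index(max(q))
        r = bandit_reward(arm, perturbed)
        q[arm] += alpha * (r - q[arm])
        recent = (recent + [r])[-window:]

        # --- metacognitive loop: note, assess, guide ---
        if len(recent) == window:
            avg = sum(recent) / window
            if baseline is None:
                baseline = avg                        # form an expectation
            elif avg < 0.5 * baseline:                # note: expectation violated
                print(f"anomaly noticed at step {t}; boosting exploration")
                epsilon, baseline = 0.5, None         # guide: explore aggressively
            else:
                epsilon = max(0.05, epsilon * 0.995)  # otherwise settle back down
    return sum(recent) / window                       # reward over the final window

if __name__ == "__main__":
    random.seed(1)
    print("avg reward over final 100 steps:", round(run(), 2))
```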
Multi-voxel pattern analysis (MVPA) is a popular analytical technique in neuroscience that involves identifying patterns in fMRI BOLD signal data that are predictive of task conditions. But the technique is also frequently used to make inferences about the regions of the brain that are most important to the tasks in question, and our analysis shows that this is a mistake. MVPA does not provide a reliable guide to what information is being used by the brain during cognitive tasks, nor to where that information is. This is due in part to inherent run-to-run variability in the decision space generated by the classifier, but there are also several other issues, discussed below, that make inference from the characteristics of the learned models to relevant brain activity deeply problematic. These issues have significant implications both for many papers already published and for how the field uses this technique in the future.
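To illustrate the kind of run-to-run variability at issue, here is a small synthetic Python example of our own (it is not the paper's analysis pipeline, and the data are simulated rather than fMRI): a linear classifier is trained on ten independent "runs" of the same decoding problem, and while accuracy is stable, the learned weight maps that researchers often interpret can differ noticeably from run to run.

```python
# Sketch of the run-to-run variability problem: a linear classifier is trained
# on ten simulated "runs" of the same two-condition decoding problem. Accuracy
# is consistently high, but the learned weight maps (often read as showing
# "where the information is") can vary between runs. Synthetic data only.

import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels, n_runs = 80, 50, 10
signal = np.zeros(n_voxels)
signal[:5] = 0.8                     # only the first 5 "voxels" carry task information

weights, accuracies = [], []
for run in range(n_runs):
    y = rng.integers(0, 2, n_trials)                       # two task conditions
    X = rng.normal(size=(n_trials, n_voxels)) + np.outer(y, signal)
    clf = LinearSVC(C=1.0).fit(X, y)
    weights.append(clf.coef_.ravel())                      # the "map" one might interpret
    accuracies.append(clf.score(X, y))

W = np.array(weights)
upper = np.triu_indices(n_runs, k=1)
print("mean training accuracy:", round(float(np.mean(accuracies)), 2))
print("mean between-run correlation of weight maps:",
      round(float(np.corrcoef(W)[upper].mean()), 2))
```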
The posterior cortex, hippocampus, and prefrontal cortex in the Leabra architecture are specialized in terms of various neural parameters, and thus have predilections for learning and processing, but they are domain-general in terms of cognitive functions such as face recognition. Also, these areas are not encapsulated and violate Fodorian criteria for modularity. Anderson's terminology obscures these important points, but we applaud his overall message.
Because I don’t know what a cultural imaginary is, nor how to put (or find) something in one, I propose instead to provide a brief, general account of what, when we think and write about, and thereby determine, the characteristics of mindedness, the members of my tribe imagine themselves to be doing.
Recent years have seen a resurgence of interest in the use of metacognition in intelligent systems. This essay is part of a small section meant to give interested researchers an overview and sampling of the kinds of work currently being pursued in this broad area. The current essay offers a review of recent research in two main topic areas: the monitoring and control of reasoning and the monitoring and control of learning.
In this essay we respond to some criticisms of the guidance theory of representation offered by Tom Roberts. We argue that although Roberts’ criticisms miss their mark, he raises the important issue of the relationship between affordances and the action-oriented representations proposed by the guidance theory. Affordances play a prominent role in the anti-representationalist accounts offered by theorists of embodied cognition and ecological psychology, and the guidance theory is motivated in part by a desire to respond to the critiques of representationalism offered in such accounts, without giving up entirely on the idea that representations are an important part of the cognitive economy of many animals. Thus, exploring whether and how such accounts can in fact be related and reconciled may shed some light on this ongoing controversy. Although the current essay hardly settles the larger debate, it does suggest that there may be more room for agreement than is often supposed.
“The physics of representation” aims to (1) define the word “representation” as used in the neurosciences, (2) argue that such representations as described in neuroscience are related to and usefully illuminated by the representations generated by modern neural networks, and (3) establish that these entities are “representations in good standing”. We suggest that Poldrack succeeds in (1), exposes some tensions between the broad use of the term in neuroscience and the narrower class of entities that he identifies in the end, and between the meaning of “representation” in neuroscience and in psychology, in (2), and fails in (3). This results in some hard choices: give up on the broad scope of the term in neuroscience, or continue to embrace the broad, psychologically inflected sense of the term and deny that the entities generated by neural nets are representations in the relevant sense.
Yarkoni correctly recognizes that one reason for psychology's generalizability crisis is the failure to account for variance within experiments. We argue that this problem, and the generalizability crisis broadly, is a necessary consequence of the stimulus-response paradigm widely used in psychology research. We point to another methodology, perturbation experiments, as a remedy that is not vulnerable to the same problems.