Where is imagination in imaginative resistance? We seek to answer this question by connecting two ongoing lines of inquiry in different subfields of philosophy. In philosophy of mind, philosophers have been trying to understand imaginative attitudes’ place in cognitive architecture. In aesthetics, philosophers have been trying to understand the phenomenon of imaginative resistance. By connecting these two lines of inquiry, we hope to find mutual illumination of an attitude (or cluster of attitudes) and a phenomenon that have vexed philosophers. Our strategy is to reorient the imaginative resistance literature from the perspective of cognitive architecture. Whereas existing taxonomies of positions in the imaginative resistance literature have focused on disagreements over the source and scope of the phenomenon, our taxonomy focuses on the psychological components necessary for explaining imaginative resistance.
This paper explores the difference between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science. We claim that the major distinction is that, while both Connectionist and Classical architectures postulate representational mental states, the latter but not the former are committed to a symbol-level of representation, or to a ‘language of thought’: i.e., to representational states that have combinatorial syntactic and semantic structure. Several arguments for combinatorial structure in mental representations are then reviewed. These include arguments based on the ‘systematicity’ of mental representation: i.e., on the fact that cognitive capacities always exhibit certain symmetries, so that the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents. We claim that such arguments make a powerful case that mind/brain architecture is not Connectionist at the cognitive level. We then consider the possibility that Connectionism may provide an account of the neural (or ‘abstract neurological’) structures in which Classical cognitive architecture is implemented. We survey a number of the standard arguments that have been offered in favor of Connectionism, and conclude that they are coherent only on this interpretation.
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely: the limited size and the homogeneous typology of the encoded and processed knowledge. We argue that such aspects may constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of the comparison of the CAs' knowledge representation and processing mechanisms with those executed by humans in their everyday activities. In the final part of the paper further directions of research will be explored, trying to address current limitations and future challenges.
This paper proposes a brain-inspired cognitive architecture that incorporates approximations to the concepts of consciousness, imagination, and emotion. To emulate the empirically established cognitive efficacy of conscious as opposed to non-conscious information processing in the mammalian brain, the architecture adopts a model of information flow from global workspace theory. Cognitive functions such as anticipation and planning are realised through internal simulation of interaction with the environment. Action selection, in both actual and internally simulated interaction with the environment, is mediated by affect. An implementation of the architecture is described which is based on weightless neurons and is used to control a simulated robot.
During the last decades, many cognitive architectures (CAs) have been realized adopting different assumptions about the organization and the representation of their knowledge level. Some of them (e.g. SOAR) adopt a classical symbolic approach, some (e.g. LEABRA) are based on a purely connectionist model, while others (e.g. CLARION) adopt a hybrid approach combining connectionist and symbolic representational levels. Additionally, some attempts (e.g. biSOAR) to extend the representational capacities of CAs by integrating diagrammatic representations and reasoning are also available. In this paper we propose a reflection on the role that Conceptual Spaces, a framework developed by Peter Gärdenfors more than fifteen years ago, can play in the current development of the Knowledge Level in Cognitive Systems and Architectures. In particular, we claim that Conceptual Spaces offer a lingua franca that allows us to unify and generalize many aspects of the symbolic, sub-symbolic and diagrammatic approaches (by overcoming some of their typical problems) and to integrate them on a common ground. In doing so we extend and detail some of the arguments explored by Gärdenfors in defending the need for a conceptual, intermediate representation level between the symbolic and the sub-symbolic one. In particular we focus on the advantages offered by Conceptual Spaces (w.r.t. symbolic and sub-symbolic approaches) in dealing with the problem of compositionality of representations based on typicality traits. Additionally, we argue that Conceptual Spaces could offer a unifying framework for interpreting many kinds of diagrammatic and analogical representations. As a consequence, their adoption could also favor the integration of diagrammatic representation and reasoning in CAs.
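For readers unfamiliar with Gärdenfors' framework, its core idea admits a compact sketch: concepts live in a geometric quality space, prototypes induce convex regions via nearest-prototype (Voronoi) categorization, and typicality falls off with distance from the prototype. The dimensions, prototype coordinates, and similarity function below are illustrative assumptions, not drawn from the paper:

```python
# Minimal sketch of a conceptual space: concepts as regions of a geometric
# quality space, induced by nearest-prototype categorization.
import math

# Prototypes as points in a toy 3-D quality space (hypothetical values).
prototypes = {
    "red":   (1.0, 0.0, 0.0),
    "green": (0.0, 1.0, 0.0),
    "blue":  (0.0, 0.0, 1.0),
}

def categorize(point):
    """Assign a point to the concept whose prototype is nearest."""
    return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))

def typicality(point, concept):
    """Graded membership: closer to the prototype means more typical."""
    return 1.0 / (1.0 + math.dist(point, prototypes[concept]))

print(categorize((0.9, 0.1, 0.0)))  # prints "red"
```

Because nearest-prototype assignment yields a Voronoi tessellation, each concept's region is convex, which is the geometric property Gärdenfors uses to handle typicality traits and their composition.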
Cognitive architectures - task-general theories of the structure and function of the complete cognitive system - are sometimes argued to be more akin to frameworks or belief systems than scientific theories. The argument stems from the apparent non-falsifiability of existing cognitive architectures. Newell was aware of this criticism and argued that architectures should be viewed not as theories subject to Popperian falsification, but rather as Lakatosian research programs based on cumulative growth. Newell's argument is undermined because he failed to demonstrate that the development of Soar, his own candidate architecture, adhered to Lakatosian principles. This paper presents detailed case studies of the development of two cognitive architectures, Soar and ACT-R, from a Lakatosian perspective. It is demonstrated that both are broadly Lakatosian, but that in both cases there have been theoretical progressions that, according to Lakatosian criteria, are pseudo-scientific. Thus, Newell's defense of Soar as a scientific rather than pseudo-scientific theory is not supported in practice. The ACT series of architectures has fewer pseudo-scientific progressions than Soar, but it too is vulnerable to accusations of pseudo-science. From this analysis, it is argued that successive versions of theories of the human cognitive architecture must explicitly address five questions to maintain scientific credibility.
Diagrams are a form of spatial representation that supports reasoning and problem solving. Even when diagrams are external, not to mention when there are no external representations, problem solving often calls for internal representations, that is, representations in cognition, of diagrammatic elements and internal perceptions on them. General cognitive architectures—Soar and ACT-R, to name the most prominent—do not have representations and operations to support diagrammatic reasoning. In this article, we examine some requirements for such internal representations and processes in cognitive architectures. We discuss the degree to which DRS, our earlier proposal for such an internal representation for diagrams, meets these requirements. In DRS, the diagrams are not raw images, but a composition of objects that can be individuated and thus symbolized, while, unlike traditional symbols, the referent of the symbol is an object that retains its perceptual essence, namely, its spatiality. This duality provides a way to resolve what anti-imagists thought was a contradiction in mental imagery: the compositionality of mental images that seemed to be unique to symbol systems, and their support of a perceptual experience of images and some types of perception on them. We briefly review the use of DRS to augment Soar and ACT-R with a diagrammatic representation component. We identify issues for further research.
In cognitive science, the concept of dissociation has been central to the functional individuation and decomposition of cognitive systems. Setting aside debates about the legitimacy of inferring the existence of dissociable systems from ‘behavioural’ dissociation data, the main idea behind the dissociation approach is that two cognitive systems are dissociable, and thus viewed as distinct, if each can be damaged, or impaired, without affecting the other system’s functions. In this article, I propose a notion of functional independence that does not require dissociability, and describe an approach to the functional decomposition and modelling of cognitive systems that complements the dissociation approach. I show that highly integrated cognitive and neurocognitive systems can be decomposed into non-dissociable but functionally independent components, and argue that this approach can provide a general account of cognitive specialization in terms of a stable structure–function relationship. Sections: 1 Introduction; 2 Functional Independence without Dissociability; 3 FI Systems and Cognitive Architecture; 4 FI Systems and Cognitive Specialization.
The central contention of The Implicit Mind is that understanding the two faces of spontaneity, its virtues and vices, requires understanding the "implicit mind." In turn, Michael Brownstein maintains that understanding the implicit mind requires the consideration of three sets of questions. First, what are implicit mental states? What kind of cognitive structure do they have? Second, how should we relate to our implicit attitudes? Are we responsible for them? Third, how can we improve the ethics of our implicit minds?
Two long-standing arguments in cognitive science invoke the assumption that holistic inference is computationally infeasible. The first is Fodor’s skeptical argument toward computational modeling of ordinary inductive reasoning. The second advocates modular computational mechanisms of the kind posited by Cosmides, Tooby and Sperber. Based on advances in machine learning related to Bayes nets, as well as investigations into the structure of scientific and ordinary information, I maintain that neither argument establishes its architectural conclusion. Similar considerations also undermine Fodor’s decades-long diagnosis of artificial intelligence research as confounded by an inability to circumscribe the amount of information relevant to inferential processes. This diagnosis is particularly inapposite with respect to Bayes nets, since one of their strengths as machine learning systems has been their capacity to reason probabilistically about large data sets whose size overwhelms the capacities of individual human reasoners. A general moral follows from these criticisms: Insights into artificial and human cognitive systems are likely to be cultivated by focusing greater attention on the structure and density of connections among items of information that are available to them.
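The computational point about Bayes nets, namely that factorization via conditional independence keeps probabilistic reasoning tractable, can be sketched in a few lines. The chain network and all probability values below are hypothetical illustrations, not taken from the paper:

```python
# Toy chain Bayes net A -> B -> C. The factorization
# P(A, B, C) = P(A) * P(B|A) * P(C|B) lets us marginalize with small
# local tables instead of enumerating the full joint distribution.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
P_C_given_B = {True:  {True: 0.5, False: 0.5},
               False: {True: 0.1, False: 0.9}}

# Variable elimination: sum out A first, then B.
P_B = {b: sum(P_A[a] * P_B_given_A[a][b] for a in (True, False))
       for b in (True, False)}
P_C = {c: sum(P_B[b] * P_C_given_B[b][c] for b in (True, False))
       for c in (True, False)}

print(P_C[True])  # P(C=True) ≈ 0.264
```

Marginalizing along the chain only ever touches one small conditional table at a time, whereas naive enumeration of the joint grows exponentially in the number of variables; this locality is what lets Bayes nets handle data sets that overwhelm unaided reasoners.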
As artificial intelligence thrives and propagates through modern life, a key question to ask is how to include humans in future AI. Despite human involvement at every stage of the production process from conception and design through to implementation, modern AI is still often criticized for its “black box” characteristics. Sometimes, we do not know what really goes on inside or how and why certain conclusions are reached. Future AI will face many dilemmas and ethical issues unforeseen by their creators beyond those commonly discussed and to which solutions cannot be hard-coded and are often still up for debate. Given the sensitivity of such social and ethical dilemmas and the implications of these for human society at large, when and if our AI make the “wrong” choice we need to understand how they got there in order to make corrections and prevent recurrences. This is particularly true in situations where human livelihoods are at stake or when major individual or household decisions are taken. Doing so requires opening up the “black box” of AI, especially as they act, interact, and adapt in a human world and interact with other AI in this world. In this article, we argue for the application of cognitive architectures for ethical AI, in particular for their potential contributions to AI transparency, explainability, and accountability. We need to understand how our AI get to the solutions they do, and we should seek to do this on a deeper level in terms of the machine-equivalents of motivations, attitudes, values, and so on. The path to future AI is long and winding, but it could arrive faster than we think. In order to harness the positive potential outcomes of AI for humans and society, we need to understand AI more fully in the first place, and we expect this will simultaneously contribute towards greater understanding of their human counterparts as well.
The dynamic approach to understanding human consciousness, its cognitive activities and cognitive architecture is one of the most promising approaches in modern epistemology and cognitive science. The conception of embodied mind is discussed in the light of nonlinear dynamics and of the idea of co-evolution of complex systems developed by the Moscow scientific school. The cognitive architecture of the embodied mind is rather complex: data from the senses and products of rational thinking, the verbal and the pictorial, logic and intuition, the analytical and synthetic abilities of perception and of thinking, the local and the global, the analogue and the digital, the archaic and the post-modern are intertwined in it. In the process of cognition, co-evolution of the embodied mind as an autopoietic system and its surroundings takes place. The perceptual and mental processes are bound up with the structure of the human body. Nonlinear and circular connecting links between the subject of cognition and the world constructed by him can be metaphorically called a nonlinear cobweb of cognition. Cognition is an autopoietic activity because it is directed to the search for elements that are missing; it serves to complete integral structures. According to the theory of blow-up regimes in complex systems elaborated by Sergey P. Kurdyumov and his followers, the idea of co-evolution is connected with the concept of tempoworlds. To co-evolve means to start to develop in one and the same tempoworld and to use the possibility – in the case of a proper integration into a whole structure – to accelerate the tempo of evolution. The cognitive activities of the human being can be considered as a movement (active walk) in landscapes of co-evolution: he cognizes and changes the environment and is himself changed by those very activities. A similar conclusion can be drawn from Francisco Varela’s conception of enactive cognition.
This article addresses issues in developing cognitive architectures--generic computational models of cognition. Cognitive architectures are believed to be essential in advancing understanding of the mind, and therefore, developing cognitive architectures is an extremely important enterprise in cognitive science. The article proposes a set of essential desiderata for developing cognitive architectures. It then moves on to discuss in detail some of these desiderata and their associated concepts and ideas relevant to developing better cognitive architectures. It argues for the importance of taking into full consideration these desiderata in developing future architectures that are more cognitively and ecologically realistic. A brief and preliminary evaluation of existing cognitive architectures is attempted on the basis of these ideas.
"Cognitive Architecture" asks how evolving modalities--from bio-politics to "noo-politics"--can be mapped upon the city under contemporary conditions of urbanization and globalization. Noo-politics, most broadly understood as the power exerted over the life of the mind, reconfigures perception, memory and attention, and also implicates potential ways and means by which neurobiological architecture is undergoing reconfiguration. This volume, motivated by theories such as 'cognitive capitalism' and concepts such as 'neural plasticity,' shows how architecture and urban processes and products commingle to form complex systems that produce novel forms of networks that empower the imagination and constitute the cultural landscape. This volume rethinks the relations between form and forms of communication, calling for a new logic of representation; it examines the manner in which information, with its non-hierarchical and distributed format, is contributing both to the sculpting of brain and production of mind. "Cognitive Architecture" brings together renowned specialists in the areas of political and aesthetic philosophy, neuroscience, and socio-cultural and architectural theory, together with visual and spatial theorists and practitioners.
In this paper we compare two theories about the cognitive architecture underlying morality. One theory, proposed by Sripada and Stich (forthcoming), posits an interlocking set of innate mechanisms that internalize moral norms from the surrounding community and generate intrinsic motivation to comply with these norms and to punish violators. The other theory, which we call the M/C model, was suggested by the widely discussed and influential work of Elliott Turiel, Larry Nucci and others on the “moral/conventional task”. This theory posits two distinct mental domains, the moral and the conventional, each of which gives rise to a characteristic suite of judgments about rules in that domain and about transgressions of those rules. We give an overview of both theories and of the data each was designed to explain. We go on to consider a growing body of evidence that suggests the M/C model is mistaken. That same evidence, however, is consistent with the Sripada and Stich theory. Thus, we conclude that the M/C model does not pose a serious challenge for the Sripada and Stich theory.
The aim of this paper is to understand the functional role of mental representations and intentionality in skilled actions from a systems-related perspective. To that end, we will evaluate the function of representation and then discuss the cognitive architecture of skilled actions in more depth. We describe the building blocks and levels of the action system that enable us to control movements such as striking a tennis ball at the right time, or grasping tools in manual action. Based on this theoretical understanding, the measurement of mental representations and related research results concerning mental representation in skilled action are presented in an overview. This leads to the question of how mental representations develop and change during learning. Finally, to consolidate the functional understanding of mental representation in skilled action and interaction, we provide examples of how to use the measurement of mental representation in humans to inform technical systems.
The criterion of computational universality for an architecture should be replaced by the notion of compliancy, where a model built within an architecture is compliant to the extent that the model allows the architecture to determine the processing. The test should be that the architecture does easily – that is, enables a compliant model to do – what people do easily.
Sober and Wilson have proposed a cluster of arguments for the conclusion that “natural selection is unlikely to have given us purely egoistic motives” and thus that psychological altruism is true. I maintain that none of these arguments is convincing. However, the most powerful of their arguments raises deep issues about what egoists and altruists are claiming and about the assumptions they make concerning the cognitive architecture underlying human motivation.
It has been argued that dual process theories are not consistent with Oaksford and Chater’s probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a “single-level probabilistic treatment[s]” (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a WM system and an LTM system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans’ (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:29–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007) due to the defeasibility of everyday reasoning are rehearsed. Oaksford and Chater’s (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single function probabilistic approach is just as compatible with the evidence supporting a dual systems theory.
As we know, a cognitive architecture is a domain-generic computational cognitive model that may be used for a broad analysis of cognition and behavior. Cognitive architectures embody theories of cognition in computer algorithms and programs. Social simulation with multi-agent systems can benefit from incorporating cognitive architectures, as they provide a realistic basis for modeling individual agents (as argued in Sun 2001). In this survey, an example cognitive architecture will be given, and its application to social simulation will be sketched.
Recent work in cognitive neuroscience on the child’s Theory of Mind has pursued the idea that the ability to metarepresent mental states depends on a domain-specific cognitive subsystem implemented in specific neural circuitry: a Theory of Mind Module. We argue that the interaction of several domain-general mechanisms and lower-level domain-specific mechanisms accounts for the flexibility and sophistication of behavior, which has been taken to be evidence for a domain-specific ToM module. This finding is of more general interest since it suggests a parsimonious cognitive architecture can account for apparent domain specificity. We argue for such an architecture in two stages: first, on conceptual grounds, contrasting the case of language with ToM, and second, by showing that recent evidence in the form of fMRI and lesion studies supports the more parsimonious hypothesis. Sections: Theory of Mind, Metarepresentation, and Modularity; Developmental Components of ToM; The Analogy with Modularity of Language; Dissociations without Modules; The Evidence from Neuroscience; Conclusion.
In this paper, I reinterpret Kant’s Transcendental Analytic as a description of a cognitive architecture. I describe a computer implementation of this architecture, and show how it has been applied to two unsupervised learning tasks. The resulting program is very data efficient, able to learn from a tiny handful of examples. I show how the program achieves data-efficiency: the constraints described in the Analytic of Principles are reinterpreted as strong prior knowledge, constraining the set of possible solutions.
It has recently been argued that the success of the connectionist program in cognitive science would threaten folk psychology. I articulate and defend a "minimalist" construal of folk psychology that comports well with empirical evidence on the folk understanding of belief and is compatible with even the most radical developments in cognitive science.
The novel approach presented in this paper accounts for the occurrence of the epistemic gap and defends physicalism against anti-physicalist arguments without relying on so-called phenomenal concepts. Instead of concentrating on conceptual features, the focus is shifted to the special characteristics of experiences themselves. To this extent, the account provided is an alternative to the Phenomenal Concept Strategy. It is argued that certain sensory representations, as accessed by higher cognition, lack constituent structure. Unstructured representations could freely exchange their causal roles within a given system, which entails their functional unanalysability. These features, together with the encapsulated nature of the low-level complex processes giving rise to unstructured sensory representations, readily explain those peculiarities of phenomenal consciousness which are usually taken to pose a serious problem for contemporary physicalism. I conclude that if those concepts which are related to the phenomenal character of conscious experience are special in any way, their characteristics are derivative of and can be accounted for in terms of the cognitive and representational features introduced in the present paper.
Recent theorists suggest that our capacity to respond affectively to fictions depends on our ability to engage in simulation: either simulating a character in the fiction, or simulating someone reading or watching the fiction as though it were fact. We argue that such accounts are quite successful at accounting for many of the basic explananda of our affective engagements in fiction. Nonetheless, we argue further that simulationist accounts ultimately fail, for simulation involves an ineliminably ego-centred element that is atypical of our experience of fiction. We then draw on recent work in philosophical psychology to articulate a more psychologically plausible account of our emotional engagement with fiction.
Research in computational cognitive modeling investigates the nature of cognition through developing process-based understanding by specifying computational models of mechanisms (including representations) and processes. In this enterprise, a cognitive architecture is a domain-generic computational cognitive model that may be used for a broad, multiple-level, multiple-domain analysis of behavior. It embodies generic descriptions of cognition in computer algorithms and programs. Developing cognitive architectures is a difficult but important task. In this article, discussions of issues and challenges in developing cognitive architectures will be undertaken, and an example cognitive architecture (CLARION) will be described.
Cognitive architectures, like programming languages, make commitments only at the implementation level and have limited explanatory power. Their universality implies that it is hard, if not impossible, to justify them in detail from finite quantities of data. It is more fruitful to focus on particular tasks such as language understanding and propose testable theories at the computational and algorithmic levels.
In recent attempts to characterize the cognitive mechanisms underlying altruistic motivation, one central question is the extent to which the capacity for altruism depends on the capacity for understanding other minds, or ‘mindreading’. Some theorists maintain that the capacity for altruism is independent of any capacity for mindreading; others maintain that the capacity for altruism depends on fairly sophisticated mindreading skills. I argue that none of the prevailing accounts is adequate. Rather, I argue that altruistic motivation depends on a basic affective system, a ‘Concern Mechanism’, which requires only a minimal capacity for mindreading.
Quantum probability (QP) theory provides an alternative account of empirical phenomena in decision making that classical probability (CP) theory cannot explain. Cognitive architectures combine probabilistic mechanisms with symbolic knowledge-based representations (e.g., heuristics) to address effects that motivate QP. They provide simple and natural explanations of these phenomena based on general cognitive processes such as memory retrieval, similarity-based partial matching, and associative learning.
Some controversies in cognitive science, such as arguments about whether classical or distributed connectionist architectures best model the human cognitive system, reenact long-standing debates in the philosophy of science. For millennia philosophers have pondered whether mentality can submit to scientific explanation generally and to physical explanation particularly. Recently, positive answers have gained popularity. The question remains, though, as to the analytical level at which mentality is best explained. Is there a level of analysis that is peculiarly appropriate for the explanation of either consciousness or mental contents? Are human consciousness, cognition, and conduct best understood in terms of talk about neurons and networks or schemas and scripts or intentions and inferences? If our best accounts make no appeal to our hopes or beliefs or desires, how do we square those views with our conception of ourselves as rational beings? Moreover, can models of physical processes explain our mental lives? Does mentality require a special level of rational or cognitive explanation or is it best understood in terms of overall brain functioning or neuronal or molecular or even quantum activities--or any of a dozen levels of physical explanation in between? Also, regardless of how they compare with explanations cast at physical levels, what is the status of psychological explanations that appeal fundamentally to mental contents? As a means for beginning to address such questions, proposals about cognitive architecture concern which kind of explanation best characterizes primitive psychological activities. Although, technically, approaches to modeling those activities are unlimited, two strategies have enjoyed most of the attention. The prominence of the classical account and the distributed connectionist (or parallel distributed processing (PDP)) account notwithstanding, nothing bars the development of additional proposals.
Classicism employs rules that apply to symbolic representations to explain cognitive processing.
The view that moral cognition is subserved by a two-tiered architecture is defended: Moral reasoning is the result both of specialized, informationally encapsulated modules which automatically and effortlessly generate intuitions; and of general-purpose, cognitively penetrable mechanisms which enable moral judgment in the light of the agent's general fund of knowledge. This view is contrasted with rival architectures of social/moral cognition, such as Cosmides and Tooby's view that the mind is wholly modular, and it is argued that a two-tiered architecture is more plausible.
The representational nature of human cognition and thought in general has been a source of controversies. This is particularly so in the context of studies of unconscious cognition, in which representations tend to be ontologically and structurally segregated with regard to their conscious status. However, it appears evolutionarily and developmentally unwarranted to posit such segregations, as, otherwise, artifact structures and ontologies must be concocted to explain them from the viewpoint of the human cognitive architecture. Here, from a by-and-large Classical cognitivist viewpoint, I show why this segregation is wrong, and elaborate on the need to postulate an ontological and structural continuity between unconscious and conscious representations. Specifically, I hypothesize that this continuity is to be found in the symbolic-based interplay between the syntax and the semantics of thought, and I propose a model of human information processing characterized by the integration of syntactic and semantic representations.
Putting forward an original analysis of perceiving as a cognitive attitude, as it contrasts with judging, believing and knowing, the author approaches several issues in the philosophy of perception, such as differences between presentation and representation, the natures of concepts and categorization, the justification of perceptual beliefs and their role in the justification of knowledge. His approach is influenced by phenomenology and by psychology and neuroscience of vision.
Cognitive architectures are theories of cognition that try to capture the essential representations and mechanisms that underlie cognition. Research in cognitive architectures has gradually moved from a focus on the functional capabilities of architectures to the ability to model the details of human behavior, and, more recently, brain activity. Although there are many different architectures, they share many identical or similar mechanisms, permitting possible future convergence. In judging the quality of a particular cognitive model, it is pertinent not just to judge its fit to the experimental data but also its simplicity and ability to make predictions.