Despite their success in describing and predicting cognitive behavior, the plausibility of so-called ‘rational explanations’ is often contested on the grounds of computational intractability. Several cognitive scientists have argued that such intractability is an orthogonal pseudoproblem, however, since rational explanations account for the ‘why’ of cognition but are agnostic about the ‘how’. Their central premise is that humans do not actually perform the rational calculations posited by their models, but only act as if they do. Whether or not the problem of intractability is solved by recourse to ‘as if’ explanations critically depends, inter alia, on the semantics of the ‘as if’ connective. We examine the five most sensible explications in the literature, and conclude that none of them circumvents the problem. As a result, rational ‘as if’ explanations must obey the minimal computational constraint of tractability.
Bayesian models are often criticized for postulating computations that are computationally intractable (e.g., NP-hard) and therefore implausibly performed by our resource-bounded minds/brains. Our letter is motivated by the observation that Bayesian modelers have been claiming that they can counter this charge of “intractability” by proposing that Bayesian computations can be tractably approximated. We would like to make the cognitive science community aware of the problematic nature of such claims. We cite mathematical proofs from the computer science literature showing that intractable Bayesian computations, such as those postulated in existing Bayesian models, cannot be tractably approximated. This does not mean that human brains do not (or cannot) implement the type of algorithms that Bayesian modelers are advancing, but it does mean that proposing that they do by itself does nothing to parry the charge of intractability, because the postulated algorithms are as intractable (i.e., require exponential time) as the computations they try to approximate. Besides our negative message for the community, our letter also makes a positive contribution by referring to a methodology that Bayesian modelers can use to try to parry the charge of intractability in a mathematically sound way.
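To make the scaling at issue concrete, here is a minimal illustration of our own (it is not taken from the letter and is not the cited proofs): even computing a single marginal by brute-force enumeration over n binary variables requires summing 2^n joint states, and the proofs referenced above show that, in general, this worst-case exponential cost cannot be escaped by approximation either.

```python
# Minimal illustration (not from the letter): brute-force marginalization
# over n binary variables sums over all 2**n joint states, so exact
# inference scales exponentially with n in the worst case.
from itertools import product

def brute_force_marginal(joint_prob, n, query_var):
    """Return P(X_query_var = 1) by summing the joint over all 2**n assignments."""
    total = 0.0
    for assignment in product((0, 1), repeat=n):  # 2**n iterations
        if assignment[query_var] == 1:
            total += joint_prob(assignment)
    return total

# Toy joint distribution: n independent fair coins (a placeholder assumption).
n = 20
uniform_joint = lambda xs: 0.5 ** n
print(brute_force_marginal(uniform_joint, n, query_var=0))  # already ~10^6 terms
```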
Intractability is a growing concern across the cognitive sciences: while many models of cognition can describe and predict human behavior in the lab, it remains unclear how these models can scale to situations of real-world complexity. Cognition and Intractability is the first book to provide an accessible introduction to computational complexity analysis and its application to questions of intractability in cognitive science. Covering both classical and parameterized complexity analysis, it introduces the mathematical concepts and proof techniques that can be used to test one's intuition of tractability. It also describes how these tools can be applied to cognitive modeling to deal with intractability, and its ramifications, in a systematic way. Aimed at students and researchers in philosophy, cognitive neuroscience, psychology, artificial intelligence, and linguistics who want to build a firm understanding of intractability and its implications in their modeling work, it is an ideal resource for teaching or self-study.
Single-cell recordings in monkeys provide strong evidence for an important role of the motor system in action understanding. This evidence is backed up by data from studies of the (human) mirror neuron system using neuroimaging or TMS techniques, and by behavioral experiments. Although the data acquired from single-cell recordings are generally considered to be robust, several debates have shown that the interpretation of these data is far from straightforward. We will show that research based on single-cell recordings allows for unlimited content attribution to mirror neurons. We will argue that a theoretical analysis of the mirroring process, combined with behavioral and brain studies, can provide the necessary limitations. A complexity analysis of the type of processing attributed to the mirror neuron system can help formulate restrictions on what mirroring is and what cognitive functions could, in principle, be explained by a mirror mechanism. We argue that processing at higher levels of abstraction needs the assistance of non-mirroring processes to such an extent that subsuming the processes needed to infer goals from actions under the label ‘mirroring’ is not warranted.
Advancement in cognitive science depends, in part, on doing some occasional ‘theoretical housekeeping’. We highlight some conceptual confusions lurking in an important attempt at explaining the human capacity for rational or coherent thought: Thagard & Verbeurgt’s computational-level model of humans’ capacity for making reasonable and truth-conducive abductive inferences (1998; Thagard, 2000). Thagard & Verbeurgt’s model assumes that humans make such inferences by computing a coherence function (f_coh), which takes as input representation networks and their pair-wise constraints and gives as output a partition into accepted (A) and rejected (R) elements that maximizes the weight of satisfied constraints. We argue that their proposal gives rise to at least three difficult problems.
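As a point of reference, the coherence function described above can be written out roughly as follows (our notation, reconstructed from Thagard & Verbeurgt's informal description; E denotes the set of elements, C+ and C− the positive and negative constraints over E, and w their weights):

\[
f_{\mathrm{coh}}(E, C^{+}, C^{-}, w) \;=\; \operatorname*{arg\,max}_{(A,R):\; A \cup R = E,\; A \cap R = \emptyset}
\Biggl[ \sum_{\substack{(e_i, e_j) \in C^{+}:\\ e_i, e_j \in A \ \text{or}\ e_i, e_j \in R}} w(e_i, e_j)
\;+\; \sum_{\substack{(e_i, e_j) \in C^{-}:\\ e_i \in A,\, e_j \in R \ \text{or}\ e_j \in A,\, e_i \in R}} w(e_i, e_j) \Biggr]
\]

That is, a positive constraint is satisfied when both elements fall on the same side of the partition, a negative constraint when they fall on different sides; maximizing the summed weight of satisfied constraints over all partitions is a hard combinatorial optimization problem.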
Many compelling examples have recently been provided in which people can achieve impressive epistemic success, e.g. draw highly accurate inferences, by using simple heuristics and very little information. This is possible because the heuristics exploit features of the environment. The examples suggest an easy and appealing naturalization of rationality: on the one hand, people clearly can apply simple heuristics, and on the other hand, they intuitively ought to do so when this brings them high accuracy at little cost. The ‘ought-can’ principle is satisfied, and rationality is meaningfully normative. We show, however, that this naturalization program is endangered by a computational wrinkle in the adaptation process taken to be responsible for this heuristics-based rationality: for the adaptation process to guarantee even minimal rationality, it requires astronomical computational resources, making the problem intractable. We consider various plausible auxiliary assumptions in an attempt to remove this obstacle, and show that they do not succeed; intractability is a robust property of adaptation. We discuss the implications of our findings for the project of naturalizing rationality.
Four articles in this issue of topiCS (volume 4, issue 1) argue against a computational approach in cognitive science in favor of a dynamical approach. I concur that the computational approach faces some considerable explanatory challenges. Yet the dynamicists’ proposal that cognition is self-organized seems to only go so far in addressing these challenges. Take, for instance, the hypothesis that cognitive behavior emerges when brain and body (re-)configure to satisfy task and environmental constraints. It is known that for certain systems of constraints, no procedure can exist (whether modular, local, centralized, or self-organized) that reliably finds the right configuration in a realistic amount of time. Hence, the dynamical approach still faces the challenge of explaining how self-organized constraint satisfaction can be achieved by human brains and bodies in real time. In this commentary, I propose a methodology that dynamicists can use to try to address this challenge.
Theory of mind refers to the human capacity for reasoning about others’ mental states based on observations of their actions and unfolding events. This type of reasoning is notorious in the cognitive science literature for its presumed computational intractability. A possible reason is that it may involve higher-order thinking. To investigate this, we formalize theory of mind reasoning as the updating of beliefs about beliefs using dynamic epistemic logic, as this formalism allows us to parameterize the ‘order of thinking.’ We prove that theory of mind reasoning, so formalized, is indeed intractable. Using parameterized complexity we prove, however, that the ‘order parameter’ is not a source of intractability. We furthermore consider a set of alternative parameters and investigate which of them are sources of intractability. We discuss the implications of these results for the understanding of theory of mind.
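For illustration only, the ‘order of thinking’ parameter counts the depth of nesting in belief attributions; the toy sketch below (ours, not the dynamic epistemic logic formalization used in the paper) shows what the parameter indexes.

```python
# Toy illustration of the 'order of thinking' parameter: order k corresponds
# to k nested belief attributions. This is not the paper's dynamic epistemic
# logic machinery, only a sketch of what the order parameter counts.
def nested_belief(agents, proposition):
    """Build a k-th order belief attribution by nesting, where k = len(agents)."""
    statement = proposition
    for agent in reversed(agents):
        statement = f"{agent} believes that {statement}"
    return statement

print(nested_belief(["Ann"], "it is raining"))
# first-order:  Ann believes that it is raining
print(nested_belief(["Bob", "Ann"], "it is raining"))
# second-order: Bob believes that Ann believes that it is raining
```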
People cannot understand intentions behind observed actions by direct simulation, because goal inference is highly context dependent. Context dependency is a major source of computational intractability in traditional information-processing models. An embodied embedded view of cognition may be able to overcome this problem, but then the problem needs recognition and explication within the context of the new, layered cognitive architecture.
Barbey & Sloman (B&S) advocate a dual-process (two-system) approach by comparing it with an alternative perspective (ecological rationality), claiming that the latter is unwarranted. Rejecting this alternative approach cannot serve as sufficient evidence for the viability of the former.