Classical (Bayesian) probability (CP) theory has led to an influential research tradition for modeling cognitive processes. Cognitive scientists have been trained to work with CP principles for so long that it is hard even to imagine alternative ways to formalize probabilities. However, in physics, quantum probability (QP) theory has been the dominant probabilistic approach for nearly 100 years. Could QP theory provide us with any advantages in cognitive modeling as well? Note first that both CP and QP theory share the fundamental assumption that it is possible to model cognition on the basis of formal, probabilistic principles. But why consider a QP approach? The answers are that (1) there are many well-established empirical findings (e.g., from the influential Tversky and Kahneman research tradition) that are hard to reconcile with CP principles; and (2) these same findings have natural and straightforward explanations with quantum principles. In QP theory, probabilistic assessment is often strongly context- and order-dependent, individual states can be superposition states (that are impossible to associate with specific values), and composite systems can be entangled (they cannot be decomposed into their subsystems). All these characteristics appear perplexing from a classical perspective. However, our thesis is that they provide a more accurate and powerful account of certain cognitive processes. We first introduce QP theory and illustrate its application with psychological examples. We then review empirical findings that motivate the use of quantum theory in cognitive theory, but also discuss ways in which QP and CP theories converge. Finally, we consider the implications of a QP theory approach to cognition for human rationality.
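As a rough illustration of the order dependence described above (not taken from the article itself; the vectors and angles below are hypothetical), the following sketch represents two incompatible yes/no questions as projectors in a two-dimensional belief space and shows that the probability of answering "yes" to both depends on the order in which the questions are posed.

```python
import numpy as np

def projector(angle):
    """Projector onto the 1-D subspace spanned by (cos angle, sin angle)."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

# Hypothetical setup: an initial belief state and two incompatible questions
# whose "yes" subspaces do not share a basis (their projectors do not commute).
psi = np.array([1.0, 0.0])       # initial belief state (unit vector)
A = projector(np.pi / 6)         # question A: "yes" subspace at 30 degrees
B = projector(np.pi / 3)         # question B: "yes" subspace at 60 degrees

# Sequential "yes, yes" probabilities (Lüders' rule): ||B A psi||^2 vs ||A B psi||^2.
p_A_then_B = np.linalg.norm(B @ A @ psi) ** 2   # ask A first, then B
p_B_then_A = np.linalg.norm(A @ B @ psi) ** 2   # ask B first, then A

print(f"P(A yes, then B yes) = {p_A_then_B:.3f}")   # ~0.562
print(f"P(B yes, then A yes) = {p_B_then_A:.3f}")   # ~0.188: the order matters
```

In a classical (commutative) model the two sequential probabilities would coincide, so a reliable order effect of this kind is the signature of incompatibility.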
Quantum cognition research applies abstract, mathematical principles of quantum theory to inquiries in cognitive science. It differs fundamentally from alternative speculations about quantum brain processes. This topic presents new developments within this research program. In the introduction to this topic, we try to answer three questions: Why apply quantum concepts to human cognition? How is quantum cognitive modeling different from traditional cognitive modeling? What cognitive processes have been modeled using a quantum account? In addition, a brief introduction to quantum probability theory and a concrete example are provided to illustrate how a quantum cognitive model can be developed to explain paradoxical empirical findings in the psychological literature.
The term “vagueness” describes a property of natural concepts, which normally have fuzzy boundaries, admit borderline cases, and are susceptible to the sorites paradox. We will discuss the psychology of vagueness, especially experiments investigating the judgment of borderline cases and contradictions. In the theoretical part, we will propose a probabilistic model that describes the quantitative characteristics of the experimental findings and extends Alxatib and Pelletier's theoretical analysis. The model is based on a Hopfield network for predicting truth values. Powerful as this classical perspective is, we show that it falls short of providing an adequate coverage of the relevant empirical results. In the final part, we will argue that a substantial modification of the analysis put forward by Alxatib and Pelletier and its probabilistic pendant is needed. The proposed modification replaces the standard notion of probabilities by quantum probabilities. The crucial phenomenon of borderline contradictions can then be explained as a quantum interference phenomenon.
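As a schematic illustration of the interference mechanism invoked here (all angles and the belief state below are hypothetical, chosen only for exposition), chaining a judgement through an incompatible "tall"/"not tall" resolution produces an interference term, so the law of total probability that a classical analysis relies on no longer holds.

```python
import numpy as np

def proj(angle):
    """Rank-1 projector onto span{(cos angle, sin angle)} in a 2-D belief space."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

I2 = np.eye(2)
P_tall = proj(np.radians(20))                 # "tall" subspace (hypothetical angle)
P_not_tall = I2 - P_tall                      # its complement in the same basis
P_judgement = proj(np.radians(55))            # an incompatible judgement basis (hypothetical)

psi = np.array([1.0, 0.0])                    # hypothetical belief state for a borderline case

# Direct probability of the judgement...
p_direct = np.linalg.norm(P_judgement @ psi) ** 2
# ...versus the classical "sum over intermediate resolutions of 'tall'":
p_via_paths = (np.linalg.norm(P_judgement @ P_tall @ psi) ** 2
               + np.linalg.norm(P_judgement @ P_not_tall @ psi) ** 2)

print(f"direct: {p_direct:.3f}, via paths: {p_via_paths:.3f}")
print(f"interference term: {p_direct - p_via_paths:+.3f}")  # nonzero => law of total probability fails
```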
The distinction between rules and similarity is central to our understanding of much of cognitive psychology. Two aspects of existing research have motivated the present work. First, in different cognitive psychology areas we typically see different conceptions of rules and similarity; for example, rules in language appear to be of a different kind compared to rules in categorization. Second, rules processes are typically modeled as separate from similarity ones; for example, in a learning experiment, rules and similarity influences would be described on the basis of separate models. In the present article, I assume that the rules versus similarity distinction can be understood in the same way in learning, reasoning, categorization, and language, and that a unified model for rules and similarity is appropriate. A rules process is considered to be a similarity one in which only a single property or a small subset of an object's properties is involved. Hence, rules and overall similarity operations are extremes in a single continuum of similarity operations. It is argued that this viewpoint allows adequate coverage of theory and empirical findings in learning, reasoning, categorization, and language, and also a reassessment of the objectives in research on rules versus similarity. Keywords: categorization; cognitive explanation; language; learning; reasoning; rules; similarity.
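One way to picture the proposed continuum (a minimal sketch under assumed parameters, not the article's formal model) is a weighted exponential similarity measure: spreading attention weight across all feature dimensions gives an overall-similarity operation, while concentrating it on a single dimension gives a rule-like operation.

```python
import numpy as np

def weighted_similarity(x, y, w, c=1.0):
    """Exponential similarity over a weighted city-block distance (GCM-style)."""
    return np.exp(-c * np.sum(w * np.abs(np.asarray(x) - np.asarray(y))))

a = [1.0, 0.2, 0.9]      # hypothetical object with three feature dimensions
b = [1.0, 0.8, 0.1]      # matches `a` only on the first dimension

uniform = np.array([1/3, 1/3, 1/3])   # overall-similarity extreme: attend to everything
rule    = np.array([1.0, 0.0, 0.0])   # "rule" extreme: attend to one dimension only

print("overall similarity:", round(weighted_similarity(a, b, uniform), 3))  # ~0.627
print("rule-like similarity:", round(weighted_similarity(a, b, rule), 3))   # 1.0: identical under the rule
```

Under the single-dimension weighting, classification depends only on whether that one property matches, which is what a rule delivers; intermediate weightings fill in the continuum between the two extremes.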
We seek to understand rational decision making and, if it exists, whether finite agents can achieve its principles. This aim has been a singular objective throughout much of human science and philosophy, with early discussions dating back to antiquity. More recently, there has been a thriving debate based on differing perspectives on rationality, including adaptive heuristics, Bayesian theory, quantum theory, resource rationality, and the probabilistic language of thought. Are these perspectives on rationality mutually exclusive? Are they all needed? Do they undermine the aim of having rational standards in decision situations like politics, medicine, legal proceedings, and others, where there is an expectation and need for decision making as close to “optimal” as possible? This special issue brings together representative contributions from the currently predominant views on rationality, with a view to evaluating progress on these and related questions.
The attempt to employ quantum principles for modeling cognition has enabled the introduction of several new concepts in psychology, such as the uncertainty principle, incompatibility, entanglement, and superposition. For many commentators, this is an exciting opportunity to question existing formal frameworks (notably classical probability theory) and explore what is to be gained by employing these novel conceptual tools. This is not to say that major empirical challenges do not remain. For example, can we definitively prove the necessity for quantum, as opposed to classical, models? Can the distinction between compatibility and incompatibility inform our understanding of differences between human and nonhuman cognition? Are quantum models less constrained than classical ones? Does incompatibility arise as a limitation, to avoid the requirements of the principle of unicity, or is it an inherent (or essential?) characteristic of intelligent thought? For everyday judgments, do quantum principles allow more accurate prediction than classical ones? Some questions can be confidently addressed within existing quantum models. A definitive resolution of others will have to await further work. What is clear is that the consideration of quantum cognitive models has enabled a new focus on a range of debates about fundamental aspects of cognition.
When constrained by limited resources, how do we choose axioms of rationality? The target article relies on Bayesian reasoning that encounters serious tractability problems. We propose another axiomatic foundation: quantum probability theory, which provides for less complex and more comprehensive descriptions. More generally, defining rationality in terms of axiomatic systems misses a key issue: rationality must be defined by humans facing vague information.
Critical (necessary or sufficient) features in categorisation have a long history, but the empirical evidence makes their existence questionable. Nevertheless, there are some cases that suggest critical feature effects. The purpose of the present work is to offer some insight into why classification decisions might misleadingly appear as if they involve critical features. Utilising Tversky's (1977) contrast model of similarity, we suggest that when an object has a sparser representation, changing any of its features is more likely to lead to a change in identity than it would in objects that have richer representations. Experiment 1 provides a basic test of this suggestion with artificial stimuli, whereby objects with a rich or a sparse representation were transformed by changing one of their features. As expected, we observed more identity judgements in the former case. Experiment 2 further confirms our hypothesis, with realistic stimuli, by assuming that superordinate categories have sparser representations than subordinate ones. These results offer some insight into the way feature changes may or may not lead to identity changes in classification decisions.
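A minimal sketch of the argument, with arbitrary feature sets and default weights (theta = 1, alpha = beta = 0.5) that are not taken from the experiments: under Tversky's contrast model, the same single-feature change removes a much larger proportion of a sparse object's self-similarity than of a rich object's, which can make the changed feature look "critical" for identity.

```python
def contrast_similarity(a, b, theta=1.0, alpha=0.5, beta=0.5):
    """Tversky's (1977) contrast model with simple feature counts as the salience measure f."""
    a, b = set(a), set(b)
    return (theta * len(a & b)      # shared features
            - alpha * len(a - b)    # features distinctive to a
            - beta * len(b - a))    # features distinctive to b

# Hypothetical feature sets: a sparse object (3 features) vs a rich object (9 features).
sparse = {"f1", "f2", "f3"}
rich = {f"g{i}" for i in range(1, 10)}

# Change one feature in each object (swap one original feature for a new feature "x").
sparse_changed = (sparse - {"f1"}) | {"x"}
rich_changed = (rich - {"g1"}) | {"x"}

print("sparse vs changed:", contrast_similarity(sparse, sparse_changed))  # 2 - 0.5 - 0.5 = 1.0 (self-similarity 3.0)
print("rich vs changed:  ", contrast_similarity(rich, rich_changed))      # 8 - 0.5 - 0.5 = 7.0 (self-similarity 9.0)
# The sparse object loses about two thirds of its self-similarity from one change,
# the rich object less than a quarter, so identity is more likely to flip for sparse objects.
```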
Recent research on moral dynamics shows that an individual's ethical mind-set moderates the impact of an initial ethical or unethical act on the likelihood of behaving ethically on a subsequent occasion. More specifically, an outcome-based mind-set facilitates Moral Balancing, whereas a rule-based mind-set facilitates Moral Consistency. The objective was to look at the evolution of moral choice across a series of scenarios, that is, to explore whether these moral patterns are maintained over time. The results of three studies showed that Moral Balancing is not...
Understanding cognitive processes with a formal framework necessitates some limited, internal prescriptive normativism. This is because it is not possible to endorse the psychological relevance of some axioms in a formal framework, but reject that of others. The empirical challenge then becomes identifying the remit of different formal frameworks, an objective consistent with the descriptivism Elqayam & Evans (E&E) advocate.
This response to the open peer commentary discusses what the appropriate explanatory scope of a rules versus similarity proposal should be, and evaluates the present Rules versus Similarity proposal accordingly. Additionally, coherence, goals, and commitment are presented as inferential notions, fully consistent with the Rules versus Similarity distinction, that allow us to predict when Rules would be preferred to Similarity.
Shepard's theoretical analysis of generalization is assumed to enable an objective measure of the relation between objects, an assumption taken on board by Tenenbaum & Griffiths. I argue that context effects apply to generalization in the same way as they apply to similarity. Thus, the need to extend Shepard's formalism in a way that incorporates context effects should be acknowledged. [Shepard; Tenenbaum & Griffiths].
We provide additional support for Cowan's claim that short-term memory (STM) involves a range of 3–5 tokens, on the basis of language correlational analyses. If language is at least partly learned, linguistic dependency structure should reflect properties of the cognitive components mediating learning; one such component is STM. On this view, the range over which statistical regularity extends in ordinary text would be suggestive of STM span. Our analyses of eight languages are consistent with STM span being about four chunks.
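As a toy illustration of the kind of correlational analysis described (the corpus, the mutual-information measure, and the distances below are placeholders, not the study's materials or its eight languages), one can estimate how the statistical dependency between tokens falls off as their separation grows; in real corpora, the separation at which this dependency flattens out is what would be read as suggestive of STM span.

```python
from collections import Counter
import math

def mutual_information(tokens, distance):
    """Plug-in mutual information (bits) between tokens `distance` positions apart."""
    pairs = list(zip(tokens, tokens[distance:]))
    joint = Counter(pairs)
    left = Counter(w for w, _ in pairs)
    right = Counter(w for _, w in pairs)
    m = len(pairs)
    mi = 0.0
    for (w1, w2), count in joint.items():
        p_xy = count / m
        mi += p_xy * math.log2(p_xy / ((left[w1] / m) * (right[w2] / m)))
    return mi

# Toy corpus; a real analysis would use large text samples in each language.
text = "the cat sat on the mat and the dog sat on the rug".split()
for d in range(1, 6):
    print(f"separation {d}: MI = {mutual_information(text, d):.3f} bits")
```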