Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
This book explores a new approach to understanding the human mind - rational analysis - that regards thinking as a facility adapted to the structure of the world. This approach is most closely associated with the work of John R. Anderson, who published the original book on rational analysis in 1990. Since then, a great deal of work has been carried out in a number of laboratories around the world, and the aim of this book is to bring this work together for the benefit of the general psychological audience. The book contains chapters by some of the world's leading researchers in memory, categorisation, reasoning, and search, who show how the power of rational analysis can be applied to the central question of how humans think. It will be of interest to students and researchers in cognitive psychology, cognitive science, and animal behaviour.
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently participants' apparently irrational behaviour is the result of comparing it with an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task and syllogistic reasoning.
We examine in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to match structurally arguments which are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of those examples with which they are typically justified. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
The notion of “the burden of proof” plays an important role in real-world argumentation contexts, in particular in law. It has also been given a central role in normative accounts of argumentation, and has been used to explain a range of classic argumentation fallacies. We argue that in law the goal is to make practical decisions whereas in critical discussion the goal is frequently simply to increase or decrease degree of belief in a proposition. In the latter case, it is not necessarily important whether that degree of belief exceeds a particular threshold (e.g., ‘reasonable doubt’). We explore the consequences of this distinction for the role that the “burden of proof” has played in argumentation and in theories of fallacy.
In this article, we argue for the general importance of normative theories of argument strength. We also provide some evidence based on our recent work on the fallacies as to why Bayesian probability might, in fact, be able to supply such an account. In the remainder of the article we discuss the general characteristics that make a specifically Bayesian approach desirable, and critically evaluate putative flaws of Bayesian probability that have been raised in the argumentation literature.
This paper addresses the apparent mismatch between the normative and descriptive literatures in the cognitive science of conditional reasoning. Descriptive psychological theories still regard material implication as the normative theory of the conditional. However, over the last 20 years in the philosophy of language and logic the idea that material implication can account for everyday indicative conditionals has been subject to severe criticism. The majority view is now apparently in favour of a subjective conditional probability interpretation. A comparative model fitting exercise is presented that shows that a conditional probability model can explain as much of the data on abstract indicative conditional reasoning tasks as psychological theories that supplement material implication with various rationally unjustified processing assumptions. Consequently, when people are asked to solve laboratory reasoning tasks, they can be seen as simply generalising their everyday probabilistic reasoning strategies to this novel context.
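The contrast between the two readings can be made concrete. The sketch below uses an illustrative joint distribution (hypothetical numbers, not data from the paper) to show how the material-implication reading assigns a high probability to an indicative conditional merely because its antecedent is rare, while the conditional-probability reading P(if p then q) = P(q | p) does not:

```python
# Illustrative joint distribution over the antecedent p and consequent q.
joint = {
    ("p", "q"): 0.10,       # p and q
    ("p", "not-q"): 0.05,   # p and not-q (the falsifying case)
    ("not-p", "q"): 0.25,
    ("not-p", "not-q"): 0.60,
}

# Material implication: "if p then q" holds except when p and not-q.
p_material = 1 - joint[("p", "not-q")]

# Conditional-probability reading: P(if p then q) = P(q | p).
p_antecedent = joint[("p", "q")] + joint[("p", "not-q")]
p_conditional = joint[("p", "q")] / p_antecedent

print(p_material)     # 0.95 — high largely because p itself is rare
print(p_conditional)  # ≈ 0.667 — sensitive only to the p-cases
```

Because most of the probability mass sits on not-p cases, material implication is nearly certain even though q follows p only two times in three; this is one form of the classic "paradoxes of material implication" motivating the probabilistic reading.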
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irrationality.
Four experiments investigated the effects of probability manipulations on the indicative four card selection task (Wason, 1966, 1968). All looked at the effects of high and low probability antecedents (p) and consequents (q) on participants' data selections when determining the truth or falsity of a conditional rule, if p then q . Experiments 1 and 2 also manipulated believability. In Experiment 1, 128 participants performed the task using rules with varied contents pretested for probability of occurrence. Probabilistic effects were observed which were partly consistent with some probabilistic accounts but not with non-probabilistic approaches to selection task performance. No effects of believability were observed, a finding replicated in Experiment 2 which used 80 participants with standardised and familiar contents. Some effects in this experiment appeared inconsistent with existing probabilistic approaches. To avoid possible effects of content, Experiments 3 (48 participants) and 4 (20 participants) used abstract material. Both experiments revealed probabilistic effects. In the Discussion we examine the compatibility of these results with the various models of selection task performance.
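The kind of probabilistic account being tested here can be illustrated with a toy expected-information-gain calculation in the spirit of Oaksford and Chater's optimal data selection model. Everything below — the two hypotheses, the uniform prior, and the "rarity" parameter values — is an illustrative assumption, not the model as published:

```python
from math import log2

def entropy(p):
    """Binary entropy (in bits) of a probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Rule under test: "if p then q". Two hypotheses about the world:
# HD (dependence): q always follows p; HI (independence): p and q unrelated.
a, b = 0.1, 0.2      # P(p) and P(q) — illustrative "rarity" values
prior_hd = 0.5       # prior belief in the dependence hypothesis

# Joint distributions P(p-face, q-face | hypothesis).
HD = {("p", "q"): a, ("p", "nq"): 0.0, ("np", "q"): b - a, ("np", "nq"): 1 - b}
HI = {("p", "q"): a * b, ("p", "nq"): a * (1 - b),
      ("np", "q"): (1 - a) * b, ("np", "nq"): (1 - a) * (1 - b)}

def cond(joint, hidden, visible, axis):
    """P(hidden face | visible face) under a joint; axis says which face shows."""
    pair = (visible, hidden) if axis == "p" else (hidden, visible)
    marg = sum(v for k, v in joint.items() if k[0 if axis == "p" else 1] == visible)
    return joint[pair] / marg

def info_gain(visible, axis, hidden_values):
    """Expected reduction in uncertainty about HD vs HI from turning a card."""
    expected_posterior_entropy = 0.0
    for h in hidden_values:
        p_hd = cond(HD, h, visible, axis)          # P(hidden | visible, HD)
        p_hi = cond(HI, h, visible, axis)          # P(hidden | visible, HI)
        p_h = prior_hd * p_hd + (1 - prior_hd) * p_hi
        posterior = prior_hd * p_hd / p_h if p_h > 0 else 0.0
        expected_posterior_entropy += p_h * entropy(posterior)
    return entropy(prior_hd) - expected_posterior_entropy

gains = {
    "p": info_gain("p", "p", ["q", "nq"]),
    "not-p": info_gain("np", "p", ["q", "nq"]),
    "q": info_gain("q", "q", ["p", "np"]),
    "not-q": info_gain("nq", "q", ["p", "np"]),
}
# With rare antecedents and consequents the ordering is p > q > not-q > not-p,
# mirroring the typical ordering of card selections on the indicative task.
```

The point of the sketch is that which cards are "worth" turning depends on the probabilities of p and q, so manipulating those probabilities, as in the four experiments above, should shift selections in ways a logical analysis cannot predict.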
It has been argued that dual process theories are not consistent with Oaksford and Chater's probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a "single-level probabilistic treatment[s]" (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a working memory (WM) system and a long-term memory (LTM) system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans' (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:29–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007), due to the defeasibility of everyday reasoning, are rehearsed. Oaksford and Chater's (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single function probabilistic approach is as compatible with the evidence as a dual systems theory.
Much research on judgment and decision making has focussed on the adequacy of classical rationality as a description of human reasoning. But more recently it has been argued that classical rationality should also be rejected even as a normative standard for human reasoning. For example, Gigerenzer and Goldstein and Gigerenzer and Todd argue that reasoning involves "fast and frugal" algorithms which are not justified by rational norms, but which succeed in the environment. They provide three lines of argument for this view, based on: the importance of the environment; the existence of cognitive limitations; and the fact that an algorithm with no apparent rational basis, Take-the-Best, succeeds in a judgment task. We reconsider these three arguments, arguing that standard patterns of explanation in psychology and the social and biological sciences use rational norms to explain why simple cognitive algorithms can succeed. We also present new computer simulations that compare Take-the-Best with other cognitive models. Although Take-the-Best still performs well, it does not perform noticeably better than the other models. We conclude that these results provide no strong reason to prefer Take-the-Best over alternative cognitive models.
Mercier and Sperber illuminate many aspects of reasoning and rationality, providing refreshing and thoughtful analysis and elegant and well-researched illustrations. They make a good case that reasoning should be viewed as a type of intuition, rather than a separate cognitive process or system. Yet questions remain. In what sense, if any, is reasoning a "module"? What is the link between rationality within an individual and rationality defined through the interaction between individuals? Formal theories of rationality, from logic, probability theory and game theory, while not the focus of Mercier and Sperber's book, may help clarify this latter question.
Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is a residual role for logic in understanding reasoning; and others put forward alternative formalisms for uncertain reasoning, or raise specific technical, methodological, or empirical challenges. In responding to these points, we aim to clarify the scope and limits of probability and logic in cognitive science; explore the meaning of the explanation of cognition; and re-evaluate the empirical case for Bayesian rationality.
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
In this paper the arguments for optimal data selection and the contrast class account of negations in the selection task and the conditional inference task are summarised, and contrasted with the matching bias approach. It is argued that the probabilistic contrast class account provides a unified, rational explanation for effects across these tasks. Results that only the contrast class account explains are also discussed. The only major anomaly is the explicit negations effect in the selection task (Evans, Clibbens, & Rood, 1996), which, it is argued, may not be the result of normal interpretative processes. It is concluded that the effects of negation on human reasoning provide good evidence for the view that human reasoning processes may be rational according to a probabilistic standard.
British psychologists have been at the forefront of research into human reasoning for 40 years. This article describes some past research milestones within this tradition before outlining the major theoretical positions developed in the UK. Most British reasoning researchers have contributed to one or more of these positions. We identify a common theme that is emerging in all these approaches, that is, the problem of explaining how prior general knowledge affects reasoning. In our concluding comments we outline the challenges for future research posed by this problem.
Computational-level models proposed in recent Bayesian cognitive science predict both the “biased” and correct responses on many tasks. So, rather than possessing two reasoning systems, people can generate both possible responses within a single system. Consequently, although an account of why people make one response rather than another is required, dual processes of reasoning may not be.
This comment suggests that Pothos & Busemeyer (P&B) do not provide an intuitive rational foundation for quantum probability (QP) theory to parallel standard logic and classical probability (CP) theory. In particular, the intuitive foundation for standard logic, which underpins CP, is the elimination of contradictions – that is, believing p and not-p is bad. Quantum logic, which underpins QP, explicitly denies non-contradiction, which seems deeply counterintuitive for the macroscopic world about which people must reason. I propose a possible resolution in situation theory.
Classical symbolic computational models of cognition are at variance with the empirical findings in the cognitive psychology of memory and inference. Standard symbolic computers are well suited to remembering arbitrary lists of symbols and performing logical inferences. In contrast, human performance on such tasks is extremely limited. Standard models do not easily capture content addressable memory or context sensitive defeasible inference, which are natural and effortless for people. We argue that connectionism provides a more natural framework in which to model this behaviour. In addition to capturing the gross human performance profile, connectionist systems seem well suited to accounting for the systematic patterns of errors observed in the human data. We take these arguments to counter Fodor and Pylyshyn's (1988) recent claim that connectionism is, in principle, irrelevant to psychology.
Oaksford and Chater critiqued the logic programming (LP) approach to nonmonotonicity and proposed that a Bayesian probabilistic approach to conditional reasoning provided a more empirically adequate theory. The current paper is a reply to Stenning and van Lambalgen's rejoinder to this earlier paper, entitled ‘Logic programming, probability, and two-system accounts of reasoning: a rejoinder to Oaksford and Chater’, in Thinking and Reasoning. It is argued that causation is basic in human cognition and that explaining how abnormality lists are created in LP requires causal models. Each specific rejoinder to the original critique is then addressed. While many areas of agreement are identified, with respect to the key differences it is concluded that the current evidence favours the Bayesian approach, at least for the moment.
Mere facts about how the world is cannot determine how we ought to think or behave. Elqayam & Evans (E&E) argue that this undercuts the use of rational analysis in explaining how people reason, by ourselves and with others. But this presumed application of the fallacy is itself fallacious. Rational analysis seeks to explain how people do reason, for example in laboratory experiments, not how they ought to reason. Thus, no ought is derived from an is; and rational analysis is unchallenged by E&E's arguments.
Cross-cultural differences in argumentation may be explained by the use of different norms of reasoning. However, some norms derive from, presumably universal, mathematical laws. This inconsistency can be resolved by considering that some norms of argumentation, like Bayes' theorem, are mathematical functions. Systematic variation in the inputs may produce culture-dependent inductive biases although the function remains invariant. This hypothesis was tested by fitting a Bayesian model to data on informal argumentation from Turkish and English cultures, which linguistically mark evidence quality differently. The experiment varied evidential marking and informant reliability in argumentative dialogues and revealed cross-cultural differences for both independent variables. The Bayesian model fitted the data from both cultures well but there were differences in the parameters consistent with culture-specific inductive biases. These findings are related to current controversies over the universality of the norms of reasoning and the role of normative theories in the psychology of reasoning.
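The idea that the norm is an invariant function while only its inputs vary culturally can be put concretely. In the sketch below the update rule is fixed; only the likelihood parameters differ, and these parameter values are hypothetical stand-ins for culture-specific weightings of evidence quality, not the fitted values from the study:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' theorem — the function itself is invariant."""
    numer = prior * p_e_given_h
    return numer / (numer + (1 - prior) * p_e_given_not_h)

# Hypothetical parameter settings standing in for culture-specific inductive
# biases, e.g. how strongly linguistically marked evidence quality is weighted.
posterior_direct = bayes_update(0.5, 0.8, 0.3)    # evidence marked as firsthand
posterior_hearsay = bayes_update(0.5, 0.6, 0.4)   # evidence marked as hearsay

print(round(posterior_direct, 3))   # 0.727
print(round(posterior_hearsay, 3))  # 0.6
```

Two groups applying the same update rule can thus arrive at systematically different degrees of belief, which is the pattern the parameter differences in the model fits above are taken to reflect.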
Cummins (this issue) puts the case for an innate module for deontic reasoning. We argue that this case is not persuasive. First, we claim that Cummins' evolutionary arguments are neutral regarding whether deontic reasoning is learned or innate. Second, we argue that task differences between deontic and indicative reasoning explain many of the phenomena that Cummins takes as evidence for a deontic module. Third, we argue against the suggestion that deontic reasoning is superior to indicative reasoning, either in adults or children. Finally, we re-evaluate Cummins' interpretation of differences in children's performance on deontic and indicative versions of Wason's selection task.
Knauff and Gazzo Castañeda (2022) object to using the term “new paradigm” to describe recent developments in the psychology of reasoning. This paper concedes that the Kuhnian term “paradigm” may be queried. What cannot be queried is that the work subsumed under this heading is part of a new, progressive movement that spans the brain and cognitive sciences: Bayesian cognitive science. Sampling algorithms and Bayes nets used to explain biases in judgment and decision making (JDM) can implement the Bayesian new paradigm approach, belying any advantages of mental models theory (MMT) at the algorithmic level. Moreover, this paper argues that new versions of MMT lack a computational level theory and questions the grounds for MMT's much-vaunted generality. The paper then examines common ground on the importance of small-scale models/simulations of the world and the importance of argumentation in the social domain rather than individual reasoning. Finally, the paper concludes that although there may be prospects for moving reasoning research forward in a more collective, collaborative manner, many disagreements remain to be resolved.