Are people rational? This question was central to Greek thought and has been at the heart of psychology and philosophy for millennia. This book provides a radical and controversial reappraisal of conventional wisdom in the psychology of reasoning, proposing that the Western conception of the mind as a logical system is flawed at the very outset. It argues that cognition should be understood in terms of probability theory, the calculus of uncertain reasoning, rather than in terms of logic, the calculus of certain reasoning.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
According to Aristotle, humans are the rational animal. The borderline between rationality and irrationality is fundamental to many aspects of human life including the law, mental health, and language interpretation. But what is it to be rational? One answer, deeply embedded in the Western intellectual tradition since ancient Greece, is that rationality concerns reasoning according to the rules of logic – the formal theory that specifies the inferential connections that hold with certainty between propositions. Piaget viewed logical reasoning as defining the end-point of cognitive development; and contemporary psychology of reasoning has focussed on comparing human reasoning against logical standards.
This book shows how these developments have led researchers to view people's conditional reasoning behaviour more as successful probabilistic reasoning rather ...
We examine in detail three classic reasoning fallacies, that is, supposedly "incorrect" forms of argument. These are the so-called argumentum ad ignorantiam, the circular argument or petitio principii, and the slippery slope argument. In each case, the argument type is shown to share its structure with arguments that are widely accepted. This suggests that it is not the form of the arguments as such that is problematic but rather something about the content of the examples with which they are typically illustrated. This leads to a Bayesian reanalysis of these classic argument forms and a reformulation of the conditions under which they do or do not constitute legitimate forms of argumentation.
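As a minimal sketch of what such a Bayesian reanalysis looks like for the argument from ignorance, the snippet below treats "no toxic effects were observed, therefore the drug is safe" as an application of Bayes' rule; all numerical values (the prior and the two test sensitivities) are illustrative assumptions rather than figures from the studies discussed.

```python
# Minimal sketch of a Bayesian reading of the argument from ignorance:
# "No toxic effects were found in trials, therefore the drug is safe."
# The numbers below are purely illustrative assumptions, not values from the text.

prior_safe = 0.5              # prior probability that the drug is safe
p_no_effect_if_safe = 0.95    # P(no toxic effects observed | safe)
p_no_effect_if_unsafe = 0.40  # P(no toxic effects observed | unsafe): a weak test

# Bayes' rule: P(safe | no effects observed)
evidence = (p_no_effect_if_safe * prior_safe
            + p_no_effect_if_unsafe * (1 - prior_safe))
posterior_safe = p_no_effect_if_safe * prior_safe / evidence

print(f"P(safe | negative evidence) = {posterior_safe:.2f}")
# With a sensitive test (high P(no effect | safe), low P(no effect | unsafe)),
# the "argument from ignorance" legitimately raises belief in safety;
# with an insensitive test it barely moves the posterior.
```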
A recent development in the cognitive science of reasoning has been the emergence of a probabilistic approach to the behaviour observed on ostensibly logical tasks. According to this approach, the errors and biases documented on these tasks occur because people import their everyday uncertain reasoning strategies into the laboratory. Consequently, participants' behaviour only appears irrational because it is compared against an inappropriate logical standard. In this article, we contrast the probabilistic approach with other approaches to explaining rationality, and then show how it has been applied to three main areas of logical reasoning: conditional inference, Wason's selection task, and syllogistic reasoning.
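As a minimal sketch of the probabilistic reading of conditional inference mentioned above, the snippet below models endorsement of the four classic inferences from "if p then q" as conditional probabilities; the parameter values are illustrative assumptions, not estimates from any experiment.

```python
# Minimal sketch of a probabilistic reading of conditional inference for
# "if p then q". Endorsement of each inference is modelled as the relevant
# conditional probability. Parameter values are illustrative assumptions.

P_p = 0.5          # marginal probability of the antecedent
P_q = 0.6          # marginal probability of the consequent
P_q_given_p = 0.9  # strength of the conditional: P(q | p)

P_not_p = 1 - P_p
P_not_q = 1 - P_q
P_q_given_not_p = (P_q - P_p * P_q_given_p) / P_not_p   # from total probability

inferences = {
    "MP (p, therefore q)":          P_q_given_p,
    "MT (not-q, therefore not-p)":  (1 - P_q_given_not_p) * P_not_p / P_not_q,
    "AC (q, therefore p)":          P_q_given_p * P_p / P_q,   # Bayes' rule
    "DA (not-p, therefore not-q)":  1 - P_q_given_not_p,
}

for name, prob in inferences.items():
    print(f"{name}: {prob:.2f}")
# On this reading MP is endorsed most strongly, while the "fallacies" AC and DA
# receive graded, non-zero endorsement rather than being simply wrong.
```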
The notion of “the burden of proof” plays an important role in real-world argumentation contexts, in particular in law. It has also been given a central role in normative accounts of argumentation, and has been used to explain a range of classic argumentation fallacies. We argue that in law the goal is to make practical decisions, whereas in critical discussion the goal is frequently simply to increase or decrease degree of belief in a proposition. In the latter case, it is not necessarily important whether that degree of belief exceeds a particular threshold (e.g., ‘reasonable doubt’). We explore the consequences of this distinction for the role that the “burden of proof” has played in argumentation and in theories of fallacy.
In this article, we argue for the general importance of normative theories of argument strength. Drawing on our recent work on the fallacies, we also provide some evidence that Bayesian probability might, in fact, be able to supply such an account. In the remainder of the article we discuss the general characteristics that make a specifically Bayesian approach desirable, and critically evaluate putative flaws of Bayesian probability that have been raised in the argumentation literature.
If Bayesian Fundamentalism existed, Jones & Love's (J&L's) arguments would provide a necessary corrective. But it does not. Bayesian cognitive science is deeply concerned with characterizing algorithms and representations, and, ultimately, implementations in neural circuits; it pays close attention to environmental structure and the constraints of behavioral data, when available; and it rigorously compares multiple models, both within and across papers. J&L's recommendation of Bayesian Enlightenment corresponds to past, present, and, we hope, future practice in Bayesian cognitive science.
Four experiments investigated the effects of probability manipulations on the indicative four card selection task (Wason, 1966, 1968). All looked at the effects of high and low probability antecedents (p) and consequents (q) on participants' data selections when determining the truth or falsity of a conditional rule, if p then q. Experiments 1 and 2 also manipulated believability. In Experiment 1, 128 participants performed the task using rules with varied contents pretested for probability of occurrence. Probabilistic effects were observed which were partly consistent with some probabilistic accounts but not with non-probabilistic approaches to selection task performance. No effects of believability were observed, a finding replicated in Experiment 2 which used 80 participants with standardised and familiar contents. Some effects in this experiment appeared inconsistent with existing probabilistic approaches. To avoid possible effects of content, Experiments 3 (48 participants) and 4 (20 participants) used abstract material. Both experiments revealed probabilistic effects. In the Discussion we examine the compatibility of these results with the various models of selection task performance.
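As a minimal sketch of the kind of probabilistic account at issue, the snippet below computes the expected information gain of turning each card, in the spirit of optimal data selection analyses of the selection task; the deterministic dependence hypothesis and the parameter values are simplifying assumptions for illustration only.

```python
import math

# Minimal sketch of an expected-information-gain analysis of the four cards,
# in the spirit of optimal data selection models of the selection task.
# The deterministic dependence model and the parameter values are
# simplifying assumptions for illustration only.

def entropy(p):
    """Shannon entropy (bits) of a binary distribution (p, 1 - p)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

P_p, P_q = 0.1, 0.2   # "rarity": antecedent and consequent are uncommon

# Probability that the hidden side shows q (or p) under each hypothesis:
# M_D: "if p then q" holds (here, without exceptions); M_I: p and q independent.
hidden = {
    "p card     (hidden side q?)": (1.0,                     P_q),
    "not-p card (hidden side q?)": ((P_q - P_p) / (1 - P_p), P_q),
    "q card     (hidden side p?)": (P_p / P_q,               P_p),
    "not-q card (hidden side p?)": (0.0,                     P_p),
}

prior_MD = 0.5
for card, (like_MD, like_MI) in hidden.items():
    eig = entropy(prior_MD)   # prior uncertainty about which hypothesis holds
    for lD, lI in ((like_MD, like_MI), (1 - like_MD, 1 - like_MI)):  # both outcomes
        p_outcome = prior_MD * lD + (1 - prior_MD) * lI
        if p_outcome > 0:
            posterior_MD = prior_MD * lD / p_outcome
            eig -= p_outcome * entropy(posterior_MD)
    print(f"{card}: expected information gain = {eig:.3f} bits")
# With rare p and q, the p and q cards carry the most expected information,
# mirroring the selections participants most often make.
```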
Much research on judgment and decision making has focussed on the adequacy of classical rationality as a description of human reasoning. But more recently it has been argued that classical rationality should also be rejected even as a normative standard for human reasoning. For example, Gigerenzer and Goldstein and Gigerenzer and Todd argue that reasoning involves “fast and frugal” algorithms which are not justified by rational norms, but which succeed in the environment. They provide three lines of argument for this view, based on: the importance of the environment; the existence of cognitive limitations; and the fact that an algorithm with no apparent rational basis, Take-the-Best, succeeds in a judgment task. We reconsider these arguments, maintaining that standard patterns of explanation in psychology and the social and biological sciences use rational norms to explain why simple cognitive algorithms can succeed. We also present new computer simulations that compare Take-the-Best with other cognitive models. Although Take-the-Best still performs well, it does not perform noticeably better than the other models. We conclude that these results provide no strong reason to prefer Take-the-Best over alternative cognitive models.
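For readers unfamiliar with the heuristic, a minimal sketch of Take-the-Best follows; the cue names, cue values, and validity ordering are hypothetical and only illustrate the decision rule, not the simulations reported in the paper.

```python
# Minimal sketch of the Take-the-Best heuristic: to decide which of two
# objects scores higher on a criterion, check cues in order of validity and
# decide on the first cue that discriminates. The cities, cue values, and
# validity ordering below are hypothetical, for illustration only.

# Cues ordered from most to least valid; 1 = cue present, 0 = absent, None = unknown.
CUES = ["has_major_airport", "is_capital", "has_university"]

cities = {
    "Alpha": {"has_major_airport": 1, "is_capital": 0, "has_university": 1},
    "Beta":  {"has_major_airport": 0, "is_capital": 1, "has_university": 1},
}

def take_the_best(a, b, cue_order, objects):
    """Return the object predicted to score higher, or None if no cue discriminates."""
    for cue in cue_order:
        va, vb = objects[a].get(cue), objects[b].get(cue)
        if va != vb and None not in (va, vb):
            return a if va > vb else b   # decide on the first discriminating cue
    return None                          # undecided: guess

print(take_the_best("Alpha", "Beta", CUES, cities))  # -> "Alpha"
# Unlike a linear model, Take-the-Best ignores all cues after the first
# discriminating one ("one-reason decision making").
```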
Rational analysis (Anderson 1990, 1991a) is an empirical program of attempting to explain why the cognitive system is adaptive, with respect to its goals and the structure of its environment. We argue that rational analysis has two important implications for philosophical debate concerning rationality. First, rational analysis provides a model for the relationship between formal principles of rationality (such as probability or decision theory) and everyday rationality, in the sense of successful thought and action in daily life. Second, applying the program of rational analysis to research on human reasoning leads to a radical reinterpretation of empirical results which are typically viewed as demonstrating human irrationality.
It has been argued that dual process theories are not consistent with Oaksford and Chater's probabilistic approach to human reasoning (Oaksford and Chater in Psychol Rev 101:608–631, 1994, 2007; Oaksford et al. 2000), which has been characterised as a “single-level probabilistic treatment[s]” (Evans 2007). In this paper, it is argued that this characterisation conflates levels of computational explanation. The probabilistic approach is a computational level theory which is consistent with theories of general cognitive architecture that invoke a working memory (WM) system and a long-term memory (LTM) system. That is, it is a single function dual process theory which is consistent with dual process theories like Evans' (2007) that use probability logic (Adams 1998) as an account of analytic processes. This approach contrasts with dual process theories which propose an analytic system that respects standard binary truth functional logic (Heit and Rotello in J Exp Psychol Learn 36:805–812, 2010; Klauer et al. in J Exp Psychol Learn 36:298–323, 2010; Rips in Psychol Sci 12:29–134, 2001, 2002; Stanovich in Behav Brain Sci 23:645–726, 2000, 2011). The problems noted for this latter approach by both Evans (Psychol Bull 128:978–996, 2002, 2007) and Oaksford and Chater (Mind Lang 6:1–38, 1991, 1998, 2007), due to the defeasibility of everyday reasoning, are rehearsed. Oaksford and Chater's (2010) dual systems implementation of their probabilistic approach is then outlined and its implications discussed. In particular, the nature of cognitive decoupling operations is discussed and a Panglossian probabilistic position is developed that can explain both modal and non-modal responses and correlations with IQ in reasoning tasks. It is concluded that a single function probabilistic approach is as compatible with this evidence as a dual systems theory.
Judea Pearl has argued that counterfactuals and causality are central to intelligence, whether natural or artificial, and has helped create a rich mathematical and computational framework for formally analyzing causality. Here, we draw out connections between these notions and various current issues in cognitive science, including the nature of mental “programs” and mental representation. We argue that programs (consisting of algorithms and data structures) have a causal (counterfactual-supporting) structure; these counterfactuals can reveal the nature of mental representations. Programs can also provide a causal model of the external world. Such models are, we suggest, ubiquitous in perception, cognition, and language processing.
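As a minimal sketch of the claim that programs have counterfactual-supporting structure, the toy model below answers a "what would the output have been had this variable been set differently" question by intervening on an internal variable; the example domain and variable names are invented purely for illustration.

```python
# Minimal sketch of the idea that a program supports counterfactuals:
# we can ask what the output *would have been* had one internal variable
# been set differently (an "intervention"). The toy model is illustrative
# only and not taken from the text.

def price_model(base_price, tax_rate, discount=0.0, *, intervene=None):
    """A tiny 'causal model' of a final price; `intervene` clamps a variable."""
    variables = {"base": base_price, "tax": tax_rate, "discount": discount}
    if intervene:
        variables.update(intervene)   # set a variable by fiat, ignoring its usual causes
    taxed = variables["base"] * (1 + variables["tax"])
    return taxed * (1 - variables["discount"])

actual = price_model(100, 0.2, discount=0.1)
counterfactual = price_model(100, 0.2, discount=0.1, intervene={"tax": 0.0})
print(actual, counterfactual)
# 108.0 vs 90.0: "had there been no tax, the price would have been 90".
```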
Human cognition requires coping with a complex and uncertain world. This suggests that dealing with uncertainty may be the central challenge for human reasoning. In Bayesian Rationality we argue that probability theory, the calculus of uncertainty, is the right framework in which to understand everyday reasoning. We also argue that probability theory explains behavior, even on experimental tasks that have been designed to probe people's logical reasoning abilities. Most commentators agree on the centrality of uncertainty; some suggest that there is a residual role for logic in understanding reasoning; and others put forward alternative formalisms for uncertain reasoning, or raise specific technical, methodological, or empirical challenges. In responding to these points, we aim to clarify the scope and limits of probability and logic in cognitive science; explore the meaning of the explanation of cognition; and re-evaluate the empirical case for Bayesian rationality.
In this paper the arguments for optimal data selection and the contrast class account of negations in the selection task and the conditional inference task are summarised and contrasted with the matching bias approach. It is argued that the probabilistic contrast class account provides a unified, rational explanation for effects across these tasks. Moreover, results that are explained only by the contrast class account are also discussed. The only major anomaly is the explicit negations effect in the selection task (Evans, Clibbens, & Rood, 1996), which it is argued may not be the result of normal interpretative processes. It is concluded that the effects of negation on human reasoning provide good evidence for the view that human reasoning processes may be rational according to a probabilistic standard.
Psychologists are beginning to uncover the rational basis for many of the biases revealed over the last 50 years in deductive and causal reasoning, judgment, and decision making. In this article, it is argued that a manipulation, experiential learning, shown to be effective in judgment and decision making, may elucidate the rational underpinning of the implicit negation effect in conditional inference. In three experiments, this effect was created and removed by using probabilistically structured contrast sets acquired during a brief learning phase. No other theory of the implicit negation effect predicts these results, which can be modeled using Bayes nets, as in causal approaches to category structure. It is also shown how these results relate to a recent development in the psychology of reasoning called “inferentialism.” It is concluded that many of the same cognitive mechanisms that underpin causal reasoning, judgment, and decision making may be common to logical reasoning, which may require no special purpose machinery or module.
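As a minimal sketch of how a contrast set can be treated probabilistically, the snippet below reads a negated antecedent ("not red") as a distribution over the remaining category values and marginalises over it; the categories, base rates, and likelihoods are illustrative assumptions, not the experimental materials.

```python
# Minimal sketch of the contrast-set idea: a negated category ("not red") is
# read as a probability distribution over the alternative values it could
# take. The colours, base rates, and the link to the consequent are
# illustrative assumptions, not data from the experiments.

colour_prior = {"red": 0.6, "blue": 0.3, "green": 0.1}        # learned base rates
p_ripe_given_colour = {"red": 0.9, "blue": 0.2, "green": 0.05}

def prob_given_negated(negated_value, prior, likelihood):
    """P(outcome | antecedent is NOT `negated_value`), marginalising over the contrast set."""
    contrast = {v: p for v, p in prior.items() if v != negated_value}
    total = sum(contrast.values())
    return sum((p / total) * likelihood[v] for v, p in contrast.items())

print(f"P(ripe | not red) = {prob_given_negated('red', colour_prior, p_ripe_given_colour):.2f}")
# Because "not red" is dominated by the high-probability alternative ("blue"),
# the inference inherits blue's low likelihood; skewing the learned base rates
# changes the strength of inferences from negated premises.
```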
British psychologists have been at the forefront of research into human reasoning for 40 years. This article describes some past research milestones within this tradition before outlining the major theoretical positions developed in the UK. Most British reasoning researchers have contributed to one or more of these positions. We identify a common theme emerging in all these approaches: the problem of explaining how prior general knowledge affects reasoning. In our concluding comments we outline the challenges for future research posed by this problem.
This comment suggests that Pothos & Busemeyer (P&B) do not provide an intuitive rational foundation for quantum probability (QP) theory to parallel standard logic and classical probability (CP) theory. In particular, the intuitive foundation for standard logic, which underpins CP, is the elimination of contradictions – that is, believing p and not-p is bad. Quantum logic, which underpins QP, explicitly denies non-contradiction, which seems deeply counterintuitive for the macroscopic world about which people must reason. I propose a possible resolution in situation theory.
Mere facts about how the world is cannot determine how we ought to think or behave. Elqayam & Evans (E&E) argue that this undercuts the use of rational analysis in explaining how people reason, by ourselves and with others. But this presumed application of the fallacy is itself fallacious. Rational analysis seeks to explain how people do reason, for example in laboratory experiments, not how they ought to reason. Thus, no ought is derived from an is; and rational analysis is unchallenged by E&E's arguments.
Classical symbolic computational models of cognition are at variance with the empirical findings in the cognitive psychology of memory and inference. Standard symbolic computers are well suited to remembering arbitrary lists of symbols and performing logical inferences. In contrast, human performance on such tasks is extremely limited. Standard models do not easily capture content addressable memory or context sensitive defeasible inference, which are natural and effortless for people. We argue that Connectionism provides a more natural framework in which to model this behaviour. In addition to capturing the gross human performance profile, Connectionist systems seem well suited to accounting for the systematic patterns of errors observed in the human data. We take these arguments to counter Fodor and Pylyshyn's (1988) recent claim that Connectionism is, in principle, irrelevant to psychology.
Oaksford and Chater critiqued the logic programming approach to nonmonotonicity and proposed that a Bayesian probabilistic approach to conditional reasoning provided a more empirically adequate theory. The current paper is a reply to Stenning and van Lambalgen's rejoinder to this earlier paper, entitled ‘Logic programming, probability, and two-system accounts of reasoning: a rejoinder to Oaksford and Chater’, in Thinking and Reasoning. It is argued that causation is basic in human cognition and that explaining how abnormality lists are created in logic programming requires causal models. Each specific rejoinder to the original critique is then addressed. While many areas of agreement are identified, with respect to the key differences it is concluded that the current evidence favours the Bayesian approach, at least for the moment.