This article presents a fundamental advance in the theory of mental models as an explanation of reasoning about facts, possibilities, and probabilities. It postulates that the meanings of compound assertions, such as conditionals (if) and disjunctions (or), unlike those in logic, refer to conjunctions of epistemic possibilities that hold in default of information to the contrary. Various factors such as general knowledge can modulate these interpretations. New information can always override sentential inferences; that is, reasoning in daily life is defeasible (or nonmonotonic). The theory is a dual-process one: It distinguishes between intuitive inferences (based on system 1) and deliberative inferences (based on system 2). The article describes a computer implementation of the theory, including its two systems of reasoning, and it shows how the program simulates crucial predictions that evidence corroborates. It concludes with a discussion of how the theory contrasts with those based on logic or on probabilities.
We describe a dual-process theory of how individuals estimate the probabilities of unique events, such as Hillary Clinton becoming U.S. President. It postulates that uncertainty is a guide to improbability. In its computer implementation, an intuitive system 1 simulates evidence in mental models and forms analog non-numerical representations of the magnitude of degrees of belief. This system has minimal computational power and combines evidence using a small repertoire of primitive operations. It resolves the uncertainty of divergent evidence for single events, for conjunctions of events, and for inclusive disjunctions of events, by taking a primitive average of non-numerical probabilities. It computes conditional probabilities in a tractable way, treating the given event as evidence that may be relevant to the probability of the dependent event. A deliberative system 2 maps the resulting representations into numerical probabilities. With access to working memory, it carries out arithmetical operations in combining numerical estimates. Experiments corroborated the theory's predictions. Participants concurred in estimates of real possibilities. They violated the complete joint probability distribution in the predicted ways when they made estimates about conjunctions: P(A), P(B), P(A and B); disjunctions: P(A), P(B), P(A or B); and conditional probabilities: P(A), P(B), P(B|A). They were faster to estimate the probabilities of compound propositions when they had already estimated the probabilities of each of their components. We discuss the implications of these results for theories of probabilistic reasoning.
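The abstract's core computational claim, that system 1 combines the probabilities of components "by taking a primitive average", can be illustrated with a minimal sketch. The function name and the numeric values below are hypothetical, chosen only to show why averaging yields the predicted violations of the joint probability distribution: an averaged conjunction estimate can exceed the probability of one of its conjuncts, which the probability calculus forbids.

```python
def primitive_average(p_a: float, p_b: float) -> float:
    """Illustrative system-1 estimate of a compound's probability:
    a simple average of the two component estimates, as the theory
    describes for conjunctions and inclusive disjunctions."""
    return (p_a + p_b) / 2.0

# Hypothetical component estimates for two events A and B.
p_a, p_b = 0.9, 0.3

# Averaged estimate for the conjunction "A and B".
p_conj = primitive_average(p_a, p_b)

# The probability calculus requires P(A and B) <= min(P(A), P(B)),
# but the averaged estimate (0.6) exceeds P(B) (0.3): a systematic
# violation of the joint probability distribution.
violates_calculus = p_conj > min(p_a, p_b)
```

On this sketch, the same averaging rule applied to a disjunction understates its probability (the calculus requires P(A or B) >= max of the components), so the two kinds of violation fall out of a single primitive operation.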
Some philosophers argue that the principles of human reasoning are impeccable, and that mistakes are no more than momentary lapses in “information processing”. This article makes a case to the contrary. It shows that human reasoners commit systematic fallacies. The theory of mental models predicts these errors. It postulates that individuals construct mental models of the possibilities to which the premises of an inference refer. But their models usually represent what is true in a possibility, not what is false. This procedure reduces the load on working memory, and for the most part it yields valid inferences. However, as a computer program implementing the theory revealed, it leads to fallacious conclusions for certain inferences—those for which it is crucial to represent what is false in a possibility. Experiments demonstrate the variety of these fallacies and contrast them with control problems, which reasoners tend to get right. The fallacies can be compelling illusions, and they occur in reasoning based on sentential connectives such as “if” and “or”, on quantifiers such as “all the artists” and “some of the artists”, on deontic relations such as “permitted” and “obligated”, and on causal relations such as “causes” and “allows”. After we have reviewed the principal results, we consider the potential for alternative accounts to explain these illusory inferences. And we show how the illusions illuminate the nature of human rationality.
Henrich et al. address how culture leads to cognitive variability and recommend that researchers be critical about the samples they investigate. However, there are other sources of variability, such as individual strategies in reasoning and the content and context on which processes operate. Because strategy and content drive variability, those factors are of primary interest, while culture is merely incidental.
Machery has usefully organized the vast heterogeneity in conceptual representation. However, we believe that his argument is too narrow, in that it tacitly assumes that concepts comprise only prototypes, exemplars, and theories, and that its eliminative aspect is too strong. We examine two exceptions to Machery's representational taxonomy before considering whether doing without concepts is a good idea.