Distinctions have been proposed between systems of reasoning for centuries. This article distills properties shared by many of these distinctions and characterizes the resulting systems in light of recent findings and theoretical developments. One system is associative because its computations reflect similarity structure and relations of temporal contiguity. The other is "rule based" because it operates on symbolic structures that have logical content and variables and because its computations have the properties that are normally assigned to rules. The systems serve complementary functions and can simultaneously generate different solutions to a reasoning problem. The rule-based system can suppress the associative system but not completely inhibit it. The article reviews evidence in favor of the distinction and its characterization.
This book discusses how people think, talk, learn, and explain things in causal terms, that is, in terms of action and manipulation. Sloman also reviews the role of causality, causal models, and intervention in the basic human cognitive functions: decision making, reasoning, judgement, categorization, inductive inference, language, and learning.
The phenomenon of base-rate neglect has elicited much debate. One arena of debate concerns how people make judgments under conditions of uncertainty. Another more controversial arena concerns human rationality. In this target article, we attempt to unpack the perspectives in the literature on both kinds of issues and evaluate their ability to explain existing data and their conceptual coherence. From this evaluation we conclude that the best account of the data should be framed in terms of a dual-process model of judgment, which attributes base-rate neglect to associative judgment strategies that fail to adequately represent the set structure of the problem. Base-rate neglect is reduced when problems are presented in a format that affords accurate representation in terms of nested sets of individuals.
Conceptual features differ in how mentally transformable they are. A robin that does not eat is harder to imagine than a robin that does not chirp. We argue that features are immutable to the extent that they are central in a network of dependency relations. The immutability of a feature reflects how much the internal structure of a concept depends on that feature; i.e., how much the feature contributes to the concept's coherence. Complementarily, mutability reflects the aspects in which a concept is flexible. We show that features can be reliably ordered according to their mutability using tasks that require people to conceive of objects missing a feature, and that mutability (conceptual centrality) can be distinguished from category centrality and from diagnosticity and salience. We test a model of mutability based on asymmetric, unlabeled, pairwise dependency relations. With no free parameters, the model provides reasonable fits to data. Qualitative tests of the model show that mutability judgments are unaffected by the type of dependency relation and that dependency structure influences judgments of variability.
A normative framework for modeling causal and counterfactual reasoning has been proposed by Spirtes, Glymour, and Scheines. The framework takes as fundamental that reasoning from observation and intervention differ. Intervention includes actual manipulation as well as counterfactual manipulation of a model via thought. To represent intervention, Pearl employed the do operator that simplifies the structure of a causal model by disconnecting an intervened-on variable from its normal causes. Construing the do operator as a psychological function affords predictions about how people reason when asked counterfactual questions about causal relations that we refer to as undoing, a family of effects that derive from the claim that intervened-on variables become independent of their normal causes. Six studies support the prediction for causal arguments but not consistently for parallel conditional ones. Two of the studies show that effects are treated as diagnostic when their values are observed but nondiagnostic when they are intervened on. These results cannot be explained by theories that do not distinguish interventions from other sorts of events.
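The contrast between observation and intervention that this abstract describes can be sketched in a few lines of code. The following is a minimal illustration of Pearl's graph surgery on a two-variable model A → B; the probabilities are invented for the example and are not taken from the studies above.

```python
# Minimal sketch of the do operator on a two-variable causal model A -> B.
# All probabilities are illustrative; they are not taken from the studies above.

P_A = 0.5                              # prior probability of the cause A
P_B_GIVEN_A = {True: 0.9, False: 0.1}  # P(B = true | A)

def p_a_given_observed_b():
    """Observation: an observed effect B is diagnostic of its cause A (Bayes' rule)."""
    joint_a = P_A * P_B_GIVEN_A[True]
    joint_not_a = (1 - P_A) * P_B_GIVEN_A[False]
    return joint_a / (joint_a + joint_not_a)

def p_a_given_do_b():
    """Intervention: do(B) disconnects B from its cause, so A keeps its prior."""
    return P_A

print(p_a_given_observed_b())  # 0.9: observing the effect raises belief in the cause
print(p_a_given_do_b())        # 0.5: an intervened-on effect is nondiagnostic
```

This is exactly the undoing pattern the abstract reports: the same value of B supports different judgments about A depending on whether B was observed or set by intervention.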
Novice designers produced a sequence of sketches while inventing a logo for a novel brand of soft drink. The sketches were scored for the presence of specific objects, their local features and global composition. Self‐assessment scores for each sketch and art critics' scores for the end products were collected. It was investigated whether the design evolves in an essentially random fashion or according to an overall heuristic. The results indicated a macrostructure in the evolution of the design, characterized by two stages. For the majority of participants, the first stage is marked by the introduction and modification of novel objects and their local and global aspects; the second stage is characterized by changes in their global composition. The minority that showed the better designs has a different strategy, in which most global changes were made in the beginning. Although participants did not consciously apply these strategies, their self‐assessment scores reflect the stages of the process.
How do people understand questions about cause and prevent? Some theories propose that people affirm that A causes B if A's occurrence makes a difference to B's occurrence in one way or another. Other theories propose that A causes B if some quantity or symbol gets passed in some way from A to B. The aim of our studies is to compare these theories' ability to explain judgements of causation and prevention. We describe six experiments that compare judgements for causal paths that involve a mechanism, i.e. a continuous process of transmission or exchange from cause to effect, against paths that involve no mechanism yet a change in the cause nevertheless brings about a change in the effect. Our results show that people prefer to attribute cause when a mechanism links cause to effect. In contrast, prevention is sensitive both to the presence of an interruption to a causal mechanism and to a change in the outcome in the absence of a mechanism. In this sense, ‘prevent’ means something different from ‘cause not’. We discuss the implications of our results for existing theories of causation.
We propose a causal model theory to explain asymmetries in judgments of the intentionality of a foreseen side-effect that is either negative or positive (Knobe, 2003). The theory is implemented as a Bayesian network relating types of mental states, actions, and consequences that integrates previous hypotheses. It appeals to two inferential routes to judgment about the intentionality of someone else's action: bottom-up from action to desire and top-down from character and disposition. Support for the theory comes from three experiments that test the prediction that bottom-up inference should occur only when the actor's primary objective is known. The model fits intentionality judgments reasonably well with no free parameters.
The verbs cause, enable, and prevent express beliefs about the way the world works. We offer a theory of their meaning in terms of the structure of those beliefs expressed using qualitative properties of causal models, a graphical framework for representing causal structure. We propose that these verbs refer to a causal model relevant to a discourse and that “A causes B” expresses the belief that the causal model includes a link from A to B. “A enables/allows B” entails that the model includes a link from A to B, that A represents a category of events necessary for B, and that an alternative cause of B exists. “A prevents B” entails that the model includes a link from A to B and that A reduces the likelihood of B. This theory is able to account for the results of four experiments as well as a variety of existing data on human reasoning.
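The entailments listed in this abstract are structural checks on a causal graph, so they can be encoded directly. The toy model below (spark, oxygen, rain, fire) and the predicate definitions are my own illustrative encoding of the stated entailments, not the authors' formal semantics.

```python
# Toy encoding of the causal-verb entailments described above.
# The example graph and the predicate definitions are illustrative only.

links = {                        # causal model: (A, B) -> sign of A's link to B
    ('spark', 'fire'): '+',
    ('oxygen', 'fire'): '+',
    ('rain', 'fire'): '-',
}
necessary_for = {('oxygen', 'fire')}  # A is a category of events necessary for B

def causes(a, b):
    """'A causes B': the model includes a (positive) link from A to B."""
    return links.get((a, b)) == '+'

def prevents(a, b):
    """'A prevents B': a link from A to B that reduces the likelihood of B."""
    return links.get((a, b)) == '-'

def enables(a, b):
    """'A enables B': link from A to B, A necessary for B, alternative cause exists."""
    alternative = any(x != a and sign == '+'
                      for (x, y), sign in links.items() if y == b)
    return causes(a, b) and (a, b) in necessary_for and alternative

print(causes('spark', 'fire'))    # True
print(enables('oxygen', 'fire'))  # True
print(prevents('rain', 'fire'))   # True
```

Note how the encoding captures the asymmetry the theory predicts: the spark causes the fire but does not enable it, because necessity and an alternative cause are required for enabling.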
The likelihood of a statement is often derived by generating an explanation for it and evaluating the plausibility of the explanation. The explanation discounting principle states that people tend to focus on a single explanation; alternative explanations compete with the effect of reducing one another’s credibility. Two experiments tested the hypothesis that this principle applies to inductive inferences concerning the properties of everyday categories. In both experiments, subjects estimated the probability of a series of statements and the conditional probabilities of those conclusions given other related facts. For example, given that most lawyers make good sales people, what is the probability that most psychologists make good sales people? The results showed that when the fact and the conclusion had the same explanation the fact increased people’s willingness to believe the conclusion, but when they had different explanations the fact decreased the conclusion’s credibility. This decrease is attributed to explanation discounting; the explanation for the fact had the effect of reducing the plausibility of the explanation for the conclusion.
The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more probable when it was linked to evidence by a causal chain than when both variables shared a common cause; (b) predictive chains in which evidence is a cause of the hypothesis gave rise to higher judgments than diagnostic chains in which evidence is an effect of the hypothesis; and (c) direct chains gave rise to higher judgments than indirect chains. A Bayesian learning model was applied to our data but failed to explain them. An explanation-based hypothesis stating that statistical information will affect judgments only to the extent that it changes beliefs about causal structure is consistent with the results.
Studies of categorical induction typically examine how belief in a premise (e.g., Falcons have an ulnar artery) projects on to a conclusion (e.g., Robins have an ulnar artery). We study induction in cases in which the premise is uncertain (e.g., There is an 80% chance that falcons have an ulnar artery). Jeffrey's rule is a normative model for updating beliefs in the face of uncertain evidence. In three studies we tested the descriptive validity of Jeffrey's rule and a related probability theorem, the rule of total probability. Although these rules provided good approximations to mean judgments in some cases, the results from regression and correlation analyses suggest that participants focus on the parts of these rules that are associated with the highest overall probability. We relate our findings to rational models of judgment.
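Jeffrey's rule, the normative benchmark tested in this abstract, is a one-line weighted average. The sketch below states it with invented numbers in the spirit of the falcon/robin example; the specific probabilities are assumptions for illustration, not data from the studies.

```python
def jeffrey_update(p_c_given_p, p_c_given_not_p, p_prime_premise):
    """Jeffrey's rule: new belief in conclusion C after belief in premise P
    shifts to p_prime_premise, holding the conditionals fixed.
    P'(C) = P(C|P) * P'(P) + P(C|not P) * (1 - P'(P))."""
    return p_c_given_p * p_prime_premise + p_c_given_not_p * (1 - p_prime_premise)

# Illustrative numbers: if falcons have the property, robins very likely do too
# (0.9); if falcons lack it, robins probably lack it as well (0.2). The premise
# itself is uncertain: an 80% chance that falcons have the property.
print(jeffrey_update(0.9, 0.2, 0.8))  # 0.9*0.8 + 0.2*0.2 = 0.76
```

The descriptive finding reported above corresponds to participants weighting the larger term of this sum (here 0.9 × 0.8) more heavily than the rule licenses.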
The psychology of reasoning is increasingly considering agents' values and preferences, achieving greater integration with judgment and decision making, social cognition, and moral reasoning. Some of this research investigates utility conditionals, “if p then q” statements where the realization of p or q or both is valued by some agents. Various approaches to utility conditionals share the assumption that reasoners make inferences from utility conditionals based on the comparison between the utility of p and the expected utility of q. This article introduces a new parameter in this analysis, the underlying causal structure of the conditional. Four experiments showed that causal structure moderated utility-informed conditional reasoning. These inferences were strongly invited when the underlying structure of the conditional was causal, and significantly less so when the underlying structure of the conditional was diagnostic. This asymmetry was only observed for conditionals in which the utility of q was clear, and disappeared when the utility of q was unclear. Thus, an adequate account of utility-informed conditional reasoning requires three components: utility, probability, and causal structure.
Statements that share an explanation tend to lend inductive support to one another. For example, being told that Many furniture movers have a hard time financing a house increases the judged probability that Secretaries have a hard time financing a house. In contrast, statements with different explanations reduce one another's judged probability. Being told that Many furniture movers have bad backs decreases the judged probability that Secretaries have bad backs. I pose two questions concerning such discounting effects. First, does the reduction depend on explanations being mutually incompatible or does it occur when explanations are deemed irrelevant to one another? I found that a small discounting effect occurred with statements that were blatantly unrelated. However, the discounting effect also depended on a factor external to the argument being judged: the composition of the argument set. Second, are explanation effects attributable to changes in the belief afforded statements or to response-specific changes resulting from misunderstanding of the probability rating task or response bias? The results implicate changes in belief. Prior belief influenced conditional probability more than argument strength judgements, as it would if participants understood the tasks in the same way as the experimenter. Also, conditional probability true and false judgements were complementary, suggesting no response bias.
Judea Pearl won the 2010 Rumelhart Prize in computational cognitive science due to his seminal contributions to the development of Bayes nets and causal Bayes nets, frameworks that are central to multiple domains of the computational study of mind. At the heart of the causal Bayes nets formalism is the notion of a counterfactual, a representation of something false or nonexistent. Pearl refers to Bayes nets as oracles for intervention, and interventions can tell us what the effect of action will be or what the effect of counterfactual possibilities would be. Counterfactuals turn out to be necessary to understand thought, perception, and language. This selection of papers tells us why, sometimes in ways that support the Bayes net framework and sometimes in ways that challenge it.
We highlight one way in which Jones & Love (J&L) misconstrue the Bayesian program: Bayesian models do not represent a rejection of mechanism. This mischaracterization obscures the valid criticisms in their article. We conclude that computational-level Bayesian modeling should not be rejected or discouraged a priori, but should be held to the same empirical standards as other models.
The commentaries indicate a general agreement that one source of reduction of base-rate neglect involves making structural relations among relevant sets transparent. There is much less agreement, however, that this entails dual systems of reasoning. In this response, we make the case for our perspective on dual systems. We compare and contrast our view to the natural frequency hypothesis as formulated in the commentaries.
A feature is central to a concept to the extent that other features depend on it. Four studies tested the hypothesis that people will project a feature from a base concept to a target concept to the extent that they believe the feature is central to the two concepts. This centrality hypothesis implies that feature projection is guided by a principle that aims to maximize the structural commonality between base and target concepts. Participants were told that a category has two or three novel features. One feature was the most central in that more properties depended on it. The extent to which the target shared the feature's dependencies was manipulated by varying the similarity of category pairs. Participants' ratings of the likelihood that each feature would hold in the target category support the centrality hypothesis with both natural kind and artifact categories and with both well-specified and vague dependency structures.
In most cases, rule-governed relations and similarity relations can indeed be distinguished by the number of relevant features they require. This criterion is not sufficient, however, to explain other properties of the relations that have a more dichotomous character. I focus on the differential drive for consistency by inferential processes that draw on the two types of relations.
This chapter explains the screening‐off rule in the psychological laboratory. The Markov assumption states that any variable in a set is independent in probability of all its ancestors in the set conditional on its own parents. The screening‐off rule is also critical to allow Bayes nets to make an inference of the state of an unknown variable in a causal structure from the states of other variables in that structure. The chapter examines which causal representations people use to make predictions and whether people conform to the screening‐off rule with respect to a causal model they have in mind. It analyzes what leads them to modify their causal model and whether or not people are causally oriented when making predictions. Making probability judgments is hard and requires careful deliberation. People are capable of such deliberation, although they avoid it until the facts require more careful thought.
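Screening off can be demonstrated by brute-force enumeration over a small causal chain. The sketch below builds a three-variable chain A → B → C with invented parameters (they are illustrative, not from the chapter) and verifies that conditioning on B renders C independent of A.

```python
import itertools

# Illustrative causal chain A -> B -> C; the parameters are invented.
P_A = 0.5
P_B_GIVEN_A = {True: 0.8, False: 0.2}  # P(B = true | A)
P_C_GIVEN_B = {True: 0.7, False: 0.1}  # P(C = true | B)

def joint(a, b, c):
    """Joint probability of one assignment, factored along the chain."""
    pa = P_A if a else 1 - P_A
    pb = P_B_GIVEN_A[a] if b else 1 - P_B_GIVEN_A[a]
    pc = P_C_GIVEN_B[b] if c else 1 - P_C_GIVEN_B[b]
    return pa * pb * pc

def cond(target, given):
    """P(target = true | given), by summing over the joint distribution."""
    num = den = 0.0
    for a, b, c in itertools.product([True, False], repeat=3):
        world = {'A': a, 'B': b, 'C': c}
        if all(world[k] == v for k, v in given.items()):
            den += joint(a, b, c)
            if world[target]:
                num += joint(a, b, c)
    return num / den

# Screening off: once B is known, A carries no further information about C.
print(cond('C', {'B': True}))             # 0.7
print(cond('C', {'B': True, 'A': True}))  # 0.7, unchanged by learning A
```

The experiments the chapter reviews ask, in effect, whether people's predictions about C respect this equality once B is known.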
Quantum probability theory (QP) is the best formal representation available of the most common form of judgment involving attribute comparison (inside judgment). People are capable, however, of judgments that involve proportions over sets of instances (outside judgment). Here, the theory does not do so well. I discuss the theory both in terms of descriptive adequacy and normative appropriateness.