When agents violate norms, they are typically judged to be more of a cause of resulting outcomes. In this paper, we suggest that norm violations also affect the causality attributed to other agents, a phenomenon we refer to as "causal superseding." We propose and test a counterfactual reasoning model of this phenomenon in four experiments. Experiments 1 and 2 provide an initial demonstration of the causal superseding effect and distinguish it from previously studied effects. Experiment 3 shows that this causal superseding effect is dependent on a particular event structure, following a prediction of our counterfactual model. Experiment 4 demonstrates that causal superseding can occur with violations of non-moral norms. We propose a model of the superseding effect based on the idea of counterfactual sufficiency.
In temporal binding, the temporal interval between one event and another, occurring some time later, is subjectively compressed. We discuss two ways in which temporal binding has been conceptualized. In studies showing temporal binding between a voluntary action and its causal consequences, such binding is typically interpreted as providing a measure of an implicit or pre-reflective “sense of agency”. However, temporal binding has also been observed in contexts not involving voluntary action, but only the passive observation of a cause-effect sequence. In those contexts, it has been interpreted as a top-down effect on perception reflecting a belief in causality. These two views need not be in conflict with one another, if one thinks of them as concerning two separate mechanisms through which temporal binding can occur. In this paper, we explore an alternative possibility: that there is a unitary way of explaining temporal binding both within and outside the context of voluntary action, as a top-down effect on perception reflecting a belief in causality. Any such explanation needs to account for ways in which agency, and factors connected with agency, have been shown to affect the strength of temporal binding. We show that principles of causal inference and causal selection, already familiar from the literature on causal learning, have the potential to explain why the strength of people’s causal beliefs can be affected by the extent to which they are themselves actively involved in bringing about events, thus in turn affecting binding.
A Bayesian network (BN) is a graphical model of uncertainty that is especially well suited to legal arguments. It enables us to visualize and model dependencies between different hypotheses and pieces of evidence and to calculate the revised probability beliefs about all uncertain factors when any piece of new evidence is presented. Although BNs have been widely discussed and recently used in the context of legal arguments, there is no systematic, repeatable method for modeling legal arguments as BNs. Hence, where BNs have been used in the legal context, they are presented as completed pieces of work, with no insights into the reasoning and working that must have gone into their construction. This means the process of building BNs for legal arguments is ad hoc, with little possibility for learning and process improvement. This article directly addresses this problem by describing a method for building useful legal arguments in a consistent and repeatable way. The method complements and extends recent work by Hepler, Dawid, and Leucari (2007) on object-oriented BNs for complex legal arguments and is based on the recognition that such arguments can be built up from a small number of basic causal structures (referred to as idioms). We present a number of examples that demonstrate the practicality and usefulness of the method.
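To make the idiom idea concrete, here is a minimal sketch in plain Python of the simplest such fragment: a single hypothesis node (e.g. "defendant is guilty") with one dependent evidence node, updated by Bayes' rule once the evidence is observed. The probabilities and variable names are illustrative placeholders of our own, not values or code from the article; the article's idioms are larger reusable fragments composed into full argument BNs in the same spirit.

```python
# Minimal "hypothesis -> evidence" fragment, updated by Bayes' rule.
# All numbers are illustrative placeholders, not taken from the article.

prior_guilty = 0.01            # P(guilty) before the evidence is presented
p_evidence_if_guilty = 0.95    # P(evidence observed | guilty)
p_evidence_if_innocent = 0.10  # P(evidence observed | not guilty)

def posterior(prior, lik_h, lik_not_h):
    """Posterior probability of a binary hypothesis given one observed item of evidence."""
    joint_h = prior * lik_h
    joint_not_h = (1 - prior) * lik_not_h
    return joint_h / (joint_h + joint_not_h)

p_guilty_given_evidence = posterior(prior_guilty, p_evidence_if_guilty, p_evidence_if_innocent)
print(f"P(guilty | evidence) = {p_guilty_given_evidence:.3f}")  # ~0.088
```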
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
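As a concrete illustration of the two components, the sketch below (plain Python; the voting scenario, numbers, and the multiplicative combination rule are illustrative choices of ours, not the authors' model specification) treats pivotality as whether flipping an agent's vote would change a group outcome, and criticality as a prior weight on how important that agent is taken to be.

```python
# Hypothetical sketch: responsibility as a function of pivotality and criticality
# in a majority vote. Names, numbers, and the combination rule are illustrative,
# not the authors' model specification.

def outcome(votes):
    """Team succeeds if a strict majority votes 'yes' (True)."""
    return sum(votes) > len(votes) / 2

def pivotality(votes, agent):
    """1.0 if flipping the agent's vote would change the outcome, else 0.0."""
    flipped = list(votes)
    flipped[agent] = not flipped[agent]
    return 1.0 if outcome(flipped) != outcome(votes) else 0.0

def responsibility(votes, agent, criticality):
    """Combine pivotality with a prior criticality weight (one simple option)."""
    return pivotality(votes, agent) * criticality[agent]

votes = [True, True, False]    # agents 0 and 1 vote yes; the team succeeds
criticality = [0.9, 0.5, 0.5]  # agent 0 was seen as crucial before any votes were cast

for agent in range(3):
    print(agent, responsibility(votes, agent, criticality))
# Agents 0 and 1 are both pivotal (flipping either vote breaks the majority),
# but agent 0 receives more responsibility because of higher perceived criticality.
```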
The experience of causation is a pervasive product of the human mind. Moreover, the experience of causing an event alters subjective time: actions are perceived as temporally shifted towards their effects [Haggard, P., Clark, S., & Kalogeras, J. (2002). Voluntary action and conscious awareness. Nature Neuroscience, 5, 382-385]. This temporal shift depends partly on advance prediction of the effects of action, and partly on inferential "postdictive" explanations of sensory effects of action. We investigated whether a single factor of statistical contingency could explain both these aspects of causal experience. We studied the time at which people perceived a simple manual action to occur, when statistical contingency indicated a causal relation between action and effect, and when no such relation was indicated. Both predictive and inferential "postdictive" shifts in the time of action depended on strong contingency between action and effect. The experience of agency involves a process of causal learning based on statistical contingency.
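The "strong contingency between action and effect" can be quantified with the ΔP statistic familiar from the causal learning literature: P(effect | action) minus P(effect | no action). The sketch below computes it from illustrative trial counts, not the experiment's data.

```python
# Contingency (Delta-P) between an action and its effect, computed from
# illustrative trial counts (not data from the study).

trials = {
    ("action", "effect"): 45,
    ("action", "no_effect"): 5,
    ("no_action", "effect"): 10,
    ("no_action", "no_effect"): 40,
}

def delta_p(trials):
    p_effect_given_action = trials[("action", "effect")] / (
        trials[("action", "effect")] + trials[("action", "no_effect")]
    )
    p_effect_given_no_action = trials[("no_action", "effect")] / (
        trials[("no_action", "effect")] + trials[("no_action", "no_effect")]
    )
    return p_effect_given_action - p_effect_given_no_action

print(f"Delta-P = {delta_p(trials):.2f}")  # 0.90 - 0.20 = 0.70: strong contingency
```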
A normative framework for modeling causal and counterfactual reasoning has been proposed by Spirtes, Glymour, and Scheines. The framework takes as fundamental that reasoning from observation and intervention differ. Intervention includes actual manipulation as well as counterfactual manipulation of a model via thought. To represent intervention, Pearl employed the do operator that simplifies the structure of a causal model by disconnecting an intervened-on variable from its normal causes. Construing the do operator as a psychological function affords predictions about how people reason when asked counterfactual questions about causal relations; we refer to these predictions as undoing, a family of effects that derive from the claim that intervened-on variables become independent of their normal causes. Six studies support the prediction for causal arguments but not consistently for parallel conditional ones. Two of the studies show that effects are treated as diagnostic when their values are observed but nondiagnostic when they are intervened on. These results cannot be explained by theories that do not distinguish interventions from other sorts of events.
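The observation/intervention contrast behind the undoing prediction can be illustrated with a two-variable chain A → B (plain Python, illustrative probabilities of our own). Conditioning on an observed value of B makes B diagnostic of A; intervening with do(B) severs the link from A, so A keeps its prior.

```python
# Observation vs. intervention on a simple causal chain A -> B.
# Probabilities are illustrative, not taken from the studies.

p_A = 0.5              # prior probability that cause A is present
p_B_given_A = 0.9      # P(B | A)
p_B_given_not_A = 0.1  # P(B | not A)

# Observing B = true: condition on B, so B is diagnostic of A.
joint_A = p_A * p_B_given_A
joint_not_A = (1 - p_A) * p_B_given_not_A
p_A_given_obs_B = joint_A / (joint_A + joint_not_A)

# Intervening do(B = true): graph surgery disconnects B from its normal cause A,
# so B carries no information about A and A keeps its prior.
p_A_given_do_B = p_A

print(f"P(A | observe B) = {p_A_given_obs_B:.2f}")  # 0.90: diagnostic
print(f"P(A | do(B))     = {p_A_given_do_B:.2f}")   # 0.50: nondiagnostic
```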
It is well established that the temporal proximity of two events is a fundamental cue to causality. Recent research with adults has shown that this relation is bidirectional: events that are believed to be causally related are perceived as occurring closer together in time, the so-called temporal binding effect. Here, we examined the developmental origins of temporal binding. Participants predicted when an event that was either caused by a button press, or preceded by a non-causal signal, would occur. We demonstrate for the first time that children as young as 4 years are susceptible to temporal binding. Binding occurred both when the button press was executed via intentional action, and when a machine caused it. These results suggest binding is a fundamental, early developing property of perception and grounded in causal knowledge.
How do people judge the degree of causal responsibility that an agent has for the outcomes of her actions? We show that a relatively unexplored factor -- the robustness of the causal chain linking the agent’s action and the outcome -- influences judgments of causal responsibility of the agent. In three experiments, we vary robustness by manipulating the number of background circumstances under which the action causes the effect, and find that causal responsibility judgments increase with robustness. In the first experiment, the robustness manipulation also raises the probability of the effect given the action. Experiments 2 and 3 control for probability-raising, and show that robustness still affects judgments of causal responsibility. In particular, Experiment 3 introduces an Ellsberg-type scenario to manipulate robustness, while keeping the conditional probability and the skill deployed in the action fixed. Experiment 4 replicates the results of Experiment 3, while contrasting judgments of causal strength with judgments of causal responsibility. The results show that in all cases, the perceived degree of responsibility increases with the robustness of the action-outcome causal chain.
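One way to see how robustness can be dissociated from probability-raising (in the spirit of the manipulation described, though the scenario and numbers below are our own illustration) is to hold P(outcome | action) fixed while varying the proportion of background circumstances in which the action would still produce the outcome.

```python
# Robustness vs. probability-raising. Robustness here = the fraction of background
# circumstances under which the action would produce the outcome; the overall
# P(outcome | action) additionally weights circumstances by how likely they are.
# Structure and numbers are illustrative, not taken from the experiments.

def robustness(works_in):
    """Unweighted proportion of circumstances in which the action yields the outcome."""
    return sum(works_in.values()) / len(works_in)

def p_outcome_given_action(works_in, p_circumstance):
    """Probability of the outcome given the action, weighting by circumstance probability."""
    return sum(p_circumstance[c] for c, works in works_in.items() if works)

# Robust chain: the action succeeds in 3 of 4 background circumstances.
robust = {"calm": True, "breeze": True, "gusts": True, "storm": False}
p_robust = {"calm": 0.2, "breeze": 0.2, "gusts": 0.2, "storm": 0.4}

# Fragile chain: the action succeeds in only 1 circumstance, but a very likely one.
fragile = {"calm": True, "breeze": False, "gusts": False, "storm": False}
p_fragile = {"calm": 0.6, "breeze": 0.2, "gusts": 0.1, "storm": 0.1}

print(robustness(robust), p_outcome_given_action(robust, p_robust))     # 0.75, 0.6
print(robustness(fragile), p_outcome_given_action(fragile, p_fragile))  # 0.25, 0.6
```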
Bayesian models of legal arguments generally aim to produce a single integrated model, combining each of the legal arguments under consideration. This combined approach implicitly assumes that variables and their relationships can be represented without any contradiction or misalignment, and in a way that makes sense with respect to the competing argument narratives. This paper describes a novel approach to compare and ‘average’ Bayesian models of legal arguments that have been built independently and with no attempt to make them consistent in terms of variables, causal assumptions or parameterization. The approach involves assessing whether competing models of legal arguments explain or predict the facts uncovered before or during the trial process. Those models that are more heavily disconfirmed by the facts are given lower weight, as model plausibility measures, in the Bayesian model comparison and averaging framework adopted. In this way a plurality of arguments is allowed yet a single judgement based on all arguments is possible and rational.
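A schematic of the comparison-and-averaging step (plain Python; model representations, likelihoods, and guilt probabilities are placeholders rather than the paper's): each independently built argument model is weighted by how well it predicts the facts uncovered so far, and a single judgement is obtained as the weighted average of the models' conclusions.

```python
# Bayesian comparison and averaging of independently built argument models.
# Priors, likelihoods, and guilt probabilities are illustrative placeholders.

models = {
    # name: (prior plausibility, P(uncovered facts | model), P(guilt | model, facts))
    "prosecution_narrative": (0.5, 0.02, 0.95),
    "defence_narrative":     (0.5, 0.20, 0.10),
}

# Weight each model by prior * likelihood of the facts, then normalize.
unnormalized = {m: prior * lik for m, (prior, lik, _) in models.items()}
total = sum(unnormalized.values())
weights = {m: w / total for m, w in unnormalized.items()}

# Model-averaged probability of guilt: a single judgement based on all arguments.
p_guilt = sum(weights[m] * models[m][2] for m in models)

print(weights)                      # the defence model dominates: the facts fit it better
print(f"P(guilt) = {p_guilt:.2f}")  # ~0.18
```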
Legal idioms: a framework for evidential reasoning (2013). Argument & Computation, Vol. 4 (Formal Models of Reasoning in Cognitive Psychology), pp. 46-63. doi: 10.1080/19462166.2012.682656.
The application of the formal framework of causal Bayesian Networks to children’s causal learning provides the motivation to examine the link between judgments about the causal structure of a system, and the ability to make inferences about interventions on components of the system. Three experiments examined whether children are able to make correct inferences about interventions on different causal structures. The first two experiments examined whether children’s causal structure and intervention judgments were consistent with one another. In Experiment 1, children aged between 4 and 8 years made causal structure judgments on a three-component causal system followed by counterfactual intervention judgments. In Experiment 2, children’s causal structure judgments were followed by intervention judgments phrased as future hypotheticals. In Experiment 3, we explicitly told children what the correct causal structure was and asked them to make intervention judgments. The results of the three experiments suggest that the representations that support causal structure judgments do not easily support simple judgments about interventions in children. We discuss our findings in light of strong interventionist claims that the two types of judgments should be closely linked.
Temporal binding refers to a phenomenon whereby the time interval between a cause and its effect is perceived as shorter than the same interval separating two unrelated events. We examined the developmental profile of this phenomenon by comparing the performance of groups of children (aged 6-7, 7-8, and 9-10 years) and adults on a novel interval estimation task. In Experiment 1, participants made judgments about the time interval between i) their button press and a rocket launch, and ii) a non-causal predictive signal and rocket launch. In Experiment 2, an additional causal condition was included in which participants made judgments about the interval between an experimenter’s button press and the launch of a rocket. Temporal binding was demonstrated consistently and did not change in magnitude with age: estimates of delay were shorter in causal contexts for both adults and children. Additionally, the magnitude of the binding effect was greater when participants themselves were the cause of an outcome compared to when they were mere spectators. This suggests that although causality underlies the binding effect, intentional action may modulate its magnitude. Again, this was true of both adults and children. Taken together, these results are the first to suggest that the binding effect is present and developmentally constant from childhood into adulthood.
This chapter argues that people reason about legal evidence using small-scale qualitative networks. These cognitive networks are typically qualitative and incomplete, and based on people's causal beliefs about the specifics of the case as well as the workings of the physical and social world in general. A key feature of these networks is their ability to represent qualitative relations between hypotheses and evidence, allowing reasoners to capture the concepts of dependency and relevance critical in legal contexts. In support of this claim, the chapter introduces some novel empirical and formal work on alibi evidence, showing that people's reasoning conforms to the dictates of a qualitative Bayesian model. However, people's inferences do not always conform to Bayesian prescripts. Empirical studies are also discussed in which people over-extend the discredit of one item of evidence to other unrelated items. This bias is explained in terms of the propensity to group positive and negative evidence separately and the use of coherence-based inference mechanisms. It is argued that these cognitive processes are a natural response to deal with the complexity of legal evidence.
Consider the task of predicting which soccer team will win the next World Cup. The bookmakers may judge Brazil to be the team most likely to win, but also judge it most likely that a European rather than a Latin American team will win. This is an example of a non-aligned hierarchy structure: the most probable event at the subordinate level (Brazil wins) appears to be inconsistent with the most probable event at the superordinate level (a European team wins). In this paper we exploit such structures to investigate how people make predictions based on uncertain hierarchical knowledge. We distinguish between aligned and non-aligned environments, and conjecture that people assume alignment. Participants were exposed to a non-aligned training set in which the most probable superordinate category predicted one outcome, whereas the most probable subordinate category predicted a different outcome. In the test phase participants allowed their initial probability judgments about category membership to shift their final ratings of the probability of the outcome, even though all judgments were made on the basis of the same statistical data. In effect people were primed to focus on the most likely path in an inference tree, and neglect alternative paths. These results highlight the importance of the level at which statistical data are represented, and suggest that when faced with hierarchical inference problems people adopt a simplifying heuristic that assumes alignment.
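A small numerical example (the probabilities are illustrative, not from the paper) shows how non-alignment arises: a single team can be the most probable winner even though its region is not the most probable region, because the superordinate probability sums over many weaker teams.

```python
# Non-aligned hierarchy: Brazil is the most probable individual winner,
# yet a European win is more probable than a Latin American win.
# Probabilities are illustrative and sum to 1.

p_team = {
    ("Latin America", "Brazil"): 0.30,
    ("Latin America", "Argentina"): 0.10,
    ("Europe", "Germany"): 0.20,
    ("Europe", "France"): 0.20,
    ("Europe", "Spain"): 0.20,
}

best_team = max(p_team, key=p_team.get)

p_region = {}
for (region, _team), p in p_team.items():
    p_region[region] = p_region.get(region, 0.0) + p

best_region = max(p_region, key=p_region.get)

print("most probable team:  ", best_team)               # ('Latin America', 'Brazil'), p = 0.30
print("most probable region:", best_region, p_region)   # Europe, p = 0.60
# Predicting via the most likely path (Europe -> its best team) points away from Brazil.
```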
Can ownership status influence probability judgements under conditions of uncertainty? In three experiments, we presented our participants with a recording of a real horse race. We endowed half of our sample with a wager on a single horse to win the race, and the other half with money to spend to acquire the same wager. Across three large studies, we found the endowment effect: owners demanded significantly more for the wager than buyers were willing to pay to acquire it. However, we also found that probability estimates of each horse winning the race did not differ between owners and non-owners of the betting slip. Our results demonstrate that distorted perception of probability is unlikely to be a mechanism explaining the endowment effect.
Although it has long been known that time is a cue to causation, recent work with adults has demonstrated that causality can also influence the experience of time. In causal reordering (Bechlivanidis & Lagnado, 2013, 2016) adults tend to report the causally consistent order of events, rather than the correct temporal order. However, the effect has yet to be demonstrated in children. Across four pre-registered experiments, 4- to 10-year-old children (N=813) and adults (N=178) watched a 3-object Michotte-style ‘pseudocollision’. While in the canonical version of the clip object A collided with B, which then collided with object C (order: ABC), the pseudocollision involved the same spatial array of objects but featured object C moving before object B (order: ACB), with no collision between B and C. Participants were asked to judge the temporal order of events and whether object B collided with C. Across all age groups, participants were significantly more likely to judge that B collided with C in the 3-object pseudocollision than in a 2-object control clip (where clear causal direction was lacking), despite the spatiotemporal relations between B and C being identical in the two clips (Experiments 1-3). Collision judgements and temporal order judgements were not entirely consistent, with some participants, particularly in the younger age range, basing their temporal order judgements on spatial rather than temporal information (Experiment 4). We conclude that in both children and adults, rather than causal impressions being determined only by the basic spatial-temporal properties of object movement, schemata are used in a top-down manner when interpreting perceptual displays.
During the last few decades, models have become the centre of attention in both cognitive science and philosophy of science. In cognitive science, the claim that humans reason with mental models, rather than mentally manipulate linguistic symbols, is the majority view. Similarly, philosophers of science almost unanimously acknowledge that models have to be taken as a central unit of analysis. Moreover, some philosophers of science and cognitive scientists have suggested that the cognitive hypothesis of mental models is a promising way of accounting for the use of models in science. However, once the importance of models in cognition as well as in science has been acknowledged, much more needs to be said about how models enable agents to make predictions, and to understand the world. In this paper, our goal (as a cognitive scientist, working on causal reasoning, and a philosopher of science, working on models and representations) is twofold. We would like to further develop the notion of mental models, and to explore the parallels between mental models as a concept in cognitive science, and models in science. While acknowledging that the parallel move towards models in cognitive science and in philosophy of science is in the right direction, we think that: (i) the notion of mental models needs to be clarified in order to serve as a useful tool, and (ii) the relation between the hypothesis of mental models and the use of models in science is still to be clarified. First, we will briefly recall a few points about the mental model hypothesis, on the one hand, and the model-centred view of science, on the other hand. Then, we will present our parallel criticisms, and put forward our own proposals.
Can the phenomena of associative learning be replaced wholesale by a propositional reasoning system? Mitchell et al. make a strong case against an automatic, unconscious, and encapsulated associative system. However, their propositional account fails to distinguish inferences based on actions from those based on observation. Causal Bayes networks remedy this shortcoming, and also provide an overarching framework for both learning and reasoning. On this account, causal representations are primary, but associative learning processes are not excluded a priori.
Barbey & Sloman attribute all instances of normative base-rate usage to a rule-based system, and all instances of neglect to an associative system. As it stands, this argument is too simplistic, and indeed fails to explain either good or bad performance on the classic Medical Diagnosis problem.
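For reference, the normative answer to the classic Medical Diagnosis problem follows from one application of Bayes' rule; the sketch below uses the standard textbook figures (prevalence 1 in 1000, 5% false-positive rate, with perfect sensitivity assumed for simplicity).

```python
# Classic Medical Diagnosis problem: P(disease | positive test).
# Standard textbook figures; sensitivity of 1.0 is the usual simplifying assumption.

prevalence = 0.001     # 1 in 1000 people has the disease
sensitivity = 1.0      # P(positive | disease), assumed perfect
false_positive = 0.05  # P(positive | no disease)

p_positive = prevalence * sensitivity + (1 - prevalence) * false_positive
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.3f}")  # ~0.020
# Neglecting the 1-in-1000 base rate leads to the common (wrong) answer of ~0.95.
```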
In criminal trials, evidence often involves a degree of uncertainty and decision-making includes moving from the initial presumption of innocence to inference about guilt based on that evidence. The jurors’ ability to combine evidence and make accurate intuitive probabilistic judgments underpins this process. Previous research has shown that errors in probabilistic reasoning can be explained by a misalignment of the evidence presented with the intuitive causal models that people construct. This has been explored in abstract and context-free situations. However, less is known about how people interpret evidence in context-rich situations such as legal cases. The present study examined participants’ intuitive probabilistic reasoning in legal contexts and assessed how people’s causal models underlie the process of belief updating in the light of new evidence. The study assessed whether participants update beliefs in line with Bayesian norms and whether errors in belief updating can be explained by the causal structures underpinning the evidence integration process. The study was based on a recent case in England where a couple was accused of intentionally harming their baby but was eventually exonerated because the child’s symptoms were found to be caused by a rare blood disorder. Participants were presented with a range of evidence, one piece at a time, including physical evidence and reports from experts. Participants made probability judgments about the abuse and disorder as causes of the child’s symptoms. Subjective probability judgments were compared against Bayesian norms. The causal models constructed by participants were also elicited. Results showed that overall participants revised their beliefs in the appropriate direction based on the evidence. However, this revision was done without exact Bayesian computation and errors were observed in estimating the weight of evidence. Errors in probabilistic judgments were partly accounted for by differences in the causal models representing the evidence. Our findings suggest that understanding the causal models that guide people’s judgments may help shed light on errors made in evidence integration and potentially identify ways to address accuracy in judgment.
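The Bayesian norm against which judgments were compared can be sketched as sequential updating of the odds of the two candidate causes (abuse versus the rare blood disorder). The likelihood ratios below are purely illustrative stand-ins for the case evidence, not values elicited or used in the study.

```python
# Sequential Bayesian updating over two competing causes of the child's symptoms.
# Likelihood ratios are illustrative placeholders, not the study's values.

prior_odds_abuse = 1.0  # start with even odds of abuse vs. blood disorder

evidence = {
    "bruising pattern":      4.0,   # LR > 1 favours abuse
    "expert medical report": 0.5,   # LR < 1 favours the disorder
    "positive blood test":   0.05,  # strongly favours the disorder
}

odds = prior_odds_abuse
for item, likelihood_ratio in evidence.items():
    odds *= likelihood_ratio
    p_abuse = odds / (1 + odds)
    print(f"after {item:<22} P(abuse) = {p_abuse:.2f}")
# Normative trajectory: 0.80 -> 0.67 -> 0.09. Per the abstract, participants moved
# in the right direction but misjudged the weight of individual items of evidence.
```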
Although we welcome Gigerenzer, Todd, and the ABC Research Group's shift of emphasis from “coherence” to “correspondence” criteria, their rejection of optimality in human decision making is premature: In many situations, experts can achieve near-optimal performance. Moreover, this competence does not require implausible computing power. The models Gigerenzer et al. evaluate fail to account for many of the most robust properties of human decision making, including examples of optimality.