Although widely studied in other domains, relatively little is known about the metacognitive processes that monitor and control behaviour during reasoning and decision-making. In this paper, we examined the conditions under which two fluency cues are used to monitor initial reasoning: answer fluency, or the speed with which the initial, intuitive answer is produced, and perceptual fluency, or the ease with which problems can be read. The first two experiments demonstrated that answer fluency reliably predicted Feeling of Rightness (FOR) judgments to conditional inferences and base rate problems, which subsequently predicted the amount of deliberate processing as measured by thinking time and answer changes; answer fluency also predicted retrospective confidence judgments. Moreover, the effect of answer fluency on reasoning was independent of the effect of perceptual fluency, establishing that these are empirically independent constructs. In five experiments with a variety of reasoning problems similar to those of Alter et al., we found no effect of perceptual fluency on FOR, retrospective confidence, or accuracy; however, we did observe that participants spent more time thinking about hard-to-read stimuli, although this additional time did not result in answer changes. In our final two experiments, we found that perceptual disfluency increased accuracy on the CRT, but only amongst participants of high cognitive ability. As Alter et al.'s samples were gathered from prestigious universities, the data to this point collectively suggest that perceptual fluency prompts additional processing in general, but that this processing may result in higher accuracy only for the most cognitively able.
We report an experiment investigating the "special-process" theory of insight problem solving, which claims that insight arises from non-conscious, non-reportable processes that enable problem re-structuring. We predicted that reducing opportunities for speech-based processing during insight problem solving should permit special processes to function more effectively and gain conscious awareness, thereby facilitating insight. We distracted speech-based processing by using either articulatory suppression or irrelevant speech, with findings for these conditions supporting the predicted insight facilitation effect relative to silent working or thinking aloud. The latter condition was included to investigate the currently contested effect of "verbal overshadowing" on insight, whereby thinking aloud is claimed to hinder the operation of special, non-reportable processes. Whilst verbal overshadowing was not evident in final solution rates, there was nevertheless support for verbal overshadowing up to and beyond…
An experiment is reported examining dual-process models of belief bias in syllogistic reasoning using a problem complexity manipulation and an inspection-time method to monitor processing latencies for premises and conclusions. Endorsement rates indicated increased belief bias on complex problems, a finding that runs counter to the "belief-first" selective scrutiny model, but which is consistent with other theories, including "reasoning-first" and "parallel-process" models. Inspection-time data revealed a number of effects that, again, arbitrated against the selective scrutiny model. The most striking inspection-time result was an interaction between logic and belief on premise-processing times, whereby belief–logic conflict problems promoted increased latencies relative to non-conflict problems. This finding challenges belief-first and reasoning-first models, but is directly predicted by parallel-process models, which assume that the outputs of simultaneous heuristic and analytic processing streams lead to an awareness of belief–logic conflicts that then require time-consuming resolution.
(2013). Matching bias in syllogistic reasoning: Evidence for a dual-process account from response times and confidence ratings. Thinking & Reasoning, 19(1), 54–77. doi: 10.1080/13546783.2012.735622.
In recent years there has been an upsurge of research aimed at removing the mystery from insight and creative problem solving. The present special issue reflects this expanding field. Overall, the papers gathered here converge on a nuanced view of insight and creative thinking as arising from multiple processes that can yield surprising solutions through a mixture of "special" Type 1 processes and "routine" Type 2 processes.
A study is reported which focused on the problem-solving strategies employed by expert electronics engineers pursuing a real-world task: integrated-circuit design. Verbal protocol data were analysed so as to reveal aspects of the organisation and sequencing of ongoing design activity. These analyses indicated that the designers were implementing a highly systematic solution-development strategy which deviated only a small degree from a normatively optimal top-down and breadth-first method. Although some of the observed deviation could be described as opportunistic in nature, much of it reflected the rapid depth-first exploration of tentative solution ideas. We argue that switches from a predominantly breadth-first mode of problem solving to depth-first or opportunistic modes may be an important aspect of the expert's strategic knowledge about how to conduct the design process effectively when faced with difficulties, uncertainties, and design impasses.
Laboratory-based studies of problem solving suggest that transfer of solution principles from an analogue to a target arises only minimally without the presence of directive hints. Recently, however, real-world studies indicate that experts frequently and spontaneously use analogies in domain-based problem solving. There is also some evidence that in certain circumstances domain novices can draw analogies designed to illustrate arguments. It is less clear, however, whether domain novices can invoke analogies in the sophisticated manner of experts to enable them to progress problem solving. In the current study, groups of novices and experts tackled large-scale management problems. Spontaneous analogising was observed in both conditions, with no marked differences between expertise levels in the frequency, structure, or function of analogising. On average four analogies were generated by groups per hour, with significantly more relational mappings between analogue and target being produced than superficial object-and-attribute mappings. Analogising served two different purposes: problem solving (dominated by relational mappings), and illustration (which for novices was dominated by object-and-attribute mappings). Overall, our novices showed a sophistication in domain-based analogical reasoning that is usually only observed with experts, in addition to a sensitivity to the pragmatics of analogy use.
We applaud many aspects of Elqayam & Evans' (E&E's) call for a descriptivist research programme in studying reasoning. Nevertheless, we contend that normative benchmarks are vital for understanding individual differences in performance. We argue that the presence of normative responses to particular problems by certain individuals should inspire researchers to look for converging evidence for analytic processing that may have a normative basis.
In this reply, we provide an analysis of Alter et al.'s response to our earlier paper. In that paper, we reported difficulty in replicating Alter, Oppenheimer, Epley, and Eyre's main finding, namely that a sense of disfluency, produced by making stimuli difficult to perceive, increased accuracy on a variety of reasoning tasks. Alter, Oppenheimer, and Epley argue that we misunderstood the meaning of accuracy on these tasks, a claim that we reject. We argue and provide evidence that the tasks were not too difficult for our populations and point out that in many cases performance on our tasks was well above chance or on a par with Alter et al.'s participants. Finally, we reiterate our claim that the distinction between answer fluency and perceptual fluency is genuine, and argue that Thompson et al. provided evidence that these are distinct factors that have different downstream effects on cognitive processes.
Wason's standard 2-4-6 task requires discovery of a single rule and leads to around 20% solutions, whereas the dual goal (DG) version requires discovery of two rules and elevates solutions to over 60%. We report an experiment that aimed to discriminate between competing accounts of DG facilitation by manipulating the degree of complementarity between the to-be-discovered rules. Results indicated that perfect rule complementarity is not essential for task success, thereby undermining a key tenet of the goal complementarity account of DG facilitation. The triple heterogeneity account received a good degree of support since more varied triple exploration was associated with facilitatory DG conditions, in line with this account's prediction that task success is associated with the creative search of the problem space. The contrast class account (an extension of Oaksford & Chater's (1994) iterative counterfactual model) was also corroborated in that the generation of descending triples was demonstrated to be the dominant predictor of DG success. We focus our discussion on conceptual ideas relating to the way in which iterative counterfactual testing and contrast class identification may work together to provide a powerful basis for effective hypothesis testing.
The Theory Theory (TT) versus Simulation Theory (ST) debate is primarily concerned with how we understand others' mental states. Theory theorists claim we do this using rules that are akin to theoretical laws, whereas simulation theorists claim we use our own minds to imagine ourselves in another's position. Theorists from both camps suggest a consideration of individuals with autism spectrum disorders (ASD) can help resolve the TT/ST debate (e.g., Baron-Cohen 1995; Carruthers 1996a; Goldman 2006). We present a three-part argument that such research has so far been inconclusive and that the prospects for studies of ASD to resolve the debate in the near future remain uncertain. First, we discuss evidence indicating that some individuals with ASD can perform effectively on tests of mental state understanding, which questions what ASD can tell us regarding theorising or simulation. Second, we claim that there is compelling evidence that domain-general mechanisms are implicated in mental state reasoning, which undermines how ASD might inform the TT/ST debate given that both theories appeal to domain-specific mindreading mechanisms. Third, we suggest that neuroscientific evidence for an assumed role of the mirror neuron system in autism also fails to arbitrate between TT and ST. We suggest that while the study of ASD may eventually provide a resolution to the TT/ST debate, it is also vital for researchers to examine the issues through other avenues, for example, by examining people's everyday counterfactual reasoning with mental state scenarios.
Mercier & Sperber (M&S) claim that the phenomenon of belief bias provides fundamental support for their argumentative theory and its basis in intuitive judgement. We propose that chronometric evidence necessitates a more nuanced account of belief bias that is not readily captured by argumentative theory.
People consistently act in ways that harm the environment, even when believing their actions are environmentally friendly. A case in point is a biased judgment termed the negative footprint illusion, which arises when people believe that the addition of "eco-friendly" items to conventional items reduces the total carbon footprint of the whole item-set, whereas the carbon footprint is, in fact, increased because eco-friendly items still contribute to the overall carbon footprint. Previous research suggests this illusion is the manifestation of an "averaging-bias." We present two studies that explore whether people's susceptibility to the negative footprint illusion is associated with individual differences in: environment-specific reasoning dispositions, measured in terms of compensatory green beliefs and environmental concerns; or general analytic reasoning dispositions, measured in terms of actively open-minded thinking, avoidance of impulsivity, and reflective reasoning. A negative footprint illusion was demonstrated when participants rated the carbon footprint of conventional buildings combined with eco-friendly buildings and conventional cars combined with eco-friendly cars. However, the illusion was not identified in participants' ratings of the carbon footprint of apples. In Studies 1 and 2, environment-specific dispositions were found to be unrelated to the negative footprint illusion. Regarding reflective thinking dispositions, reduced susceptibility to the negative footprint illusion was only associated with actively open-minded thinking, measured on 7-item and 17-item scales. Our findings provide partial support for the existence of a negative footprint illusion and reveal a role of individual variation in reflective reasoning dispositions in accounting for a limited element of differential susceptibility to this illusion.
Ubiquitous computing is a new kind of computing where devices enhance everyday artefacts and open up previously inaccessible situations for data capture. "Technology paternalism" has been suggested by Spiekermann and Pallas (Poiesis & Praxis: Int J Technol Assess Ethics Sci 4(1):6–18, 2006) as a concept to gauge the social and ethical impact of these new technologies. In this article we explore this concept in the specific setting of UK road maintenance and construction. Drawing on examples from our qualitative fieldwork we suggest that cultural logics such as those reflected in paternalistic health and safety discourse are central in legitimising the introduction of ubiquitous computing technologies. As such, there is little doubt that paternalism plays an essential role in people's reasoning about ubiquitous computing in this setting. We argue, however, that since discourses such as health and safety are used by everyone (including both managers and workers) in the organisation to further their own aims, technologies transcend purely paternalistic conceptualisations and instead become a focal point for ongoing struggles for control between those deploying and using them. This means that the benefits and costs of such new technologies become harder to define from an ethical and social perspective.
Despite its strengths, Leech et al.'s model fails to address the important benefits that derive from self-explanation and task feedback in analogical reasoning development. These components encourage explicit, self-reflective processes that do not necessarily link to knowledge accretion. We wonder, therefore, what mechanisms can be included within a connectionist framework to model self-reflective involvement and its beneficial consequences.
Stanovich & West's dual-system account represents a major development in our understanding of reasoning and rationality. Their notion of System 1 functioning as a computational escape hatch during the processing of complex tasks may deserve a more central role in explanations of reasoning performance. We describe examples of apparent escape-hatch processing from the reasoning and judgement literature.