Epistemologists and philosophers of science have often attempted to express formally the impact of a piece of evidence on the credibility of a hypothesis. In this paper we focus on the Bayesian approach to evidential support. We propose a new formal treatment of the notion of degree of confirmation and argue that it overcomes some limitations of the currently available approaches on two grounds: (i) a theoretical analysis of the confirmation relation, seen as an extension of logical deduction, and (ii) an empirical comparison of competing measures in an experimental inquiry concerning inductive reasoning in a probabilistic setting.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, providing a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of _confirmation_ relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven _robust_ (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown to be distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
Probability ratio and likelihood ratio measures of inductive support and related notions have appeared as theoretical tools for probabilistic approaches in the philosophy of science, the psychology of reasoning, and artificial intelligence. In an effort at conceptual clarification, several authors have pursued axiomatic foundations for these two families of measures. Such results have been criticized, however, as relying on unduly demanding or poorly motivated mathematical assumptions. We provide two novel theorems showing that probability ratio and likelihood ratio measures can be axiomatized in a way that overcomes these difficulties.
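The two families of measures named in the abstract can be sketched in a few lines of code. This is an illustrative gloss, not the paper's axiomatization, and the probabilities used are hypothetical numbers chosen purely for the example:

```python
# Illustrative sketch of the two confirmation-measure families: the
# probability ratio P(H|E)/P(H) and the likelihood ratio P(E|H)/P(E|not-H).

def probability_ratio(p_h, p_e_given_h, p_e_given_not_h):
    """Probability ratio measure of confirmation: P(H|E) / P(H)."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # law of total probability
    p_h_given_e = p_e_given_h * p_h / p_e                  # Bayes' theorem
    return p_h_given_e / p_h

def likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """Likelihood ratio measure of confirmation: P(E|H) / P(E|not-H)."""
    return p_e_given_h / p_e_given_not_h

# A moderately diagnostic piece of evidence: both measures exceed 1,
# signalling positive support for H.
pr = probability_ratio(0.3, 0.8, 0.2)   # ≈ 2.11
lr = likelihood_ratio(0.8, 0.2)         # = 4.0
```

Both measures agree on the qualitative verdict (confirmation whenever the value exceeds 1) but can disagree on comparative rankings, which is what motivates axiomatizing them separately.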
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
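For concreteness, one common two-parameter presentation of the Sharma-Mittal family can be sketched as follows. Conventions and parameter names vary across the literature, so treat this as an illustrative sketch rather than the paper's exact formalism:

```python
import math

def sharma_mittal(probs, q, r):
    """One common parameterization of Sharma-Mittal entropy, with 'order' q
    and 'degree' r. Familiar entropies arise as special or limit cases:
    Shannon as q, r -> 1; Rényi as r -> 1; Tsallis as r = q."""
    eps = 1e-12
    if abs(q - 1.0) < 1e-9:
        h_shannon = -sum(p * math.log(p) for p in probs if p > eps)
        if abs(r - 1.0) < 1e-9:
            return h_shannon                     # Shannon limit
        return (math.exp((1.0 - r) * h_shannon) - 1.0) / (1.0 - r)
    s = sum(p ** q for p in probs if p > eps)    # sum of p_i^q
    if abs(r - 1.0) < 1e-9:
        return math.log(s) / (1.0 - q)           # Rényi limit
    return (s ** ((1.0 - r) / (1.0 - q)) - 1.0) / (1.0 - r)

uniform = [0.25] * 4
shannon = sharma_mittal(uniform, 1.0, 1.0)   # = ln 4
renyi_2 = sharma_mittal(uniform, 2.0, 1.0)   # = ln 4 (all Rényi orders agree on uniform)
tsallis_2 = sharma_mittal(uniform, 2.0, 2.0) # = 1 - sum(p^2) = 0.75
```

On a uniform distribution the various special cases coincide or take simple closed forms, which makes it a convenient sanity check when comparing entropy models within the unified framework.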
We discuss the probabilistic analysis of explanatory power and prove a representation theorem for the posterior ratio measures recently advocated by Schupbach and Sprenger. We then prove a representation theorem for an alternative class of measures that rely on the notion of relative probability distance. We end up endorsing the latter, as relative distance measures share the genuinely appealing properties of posterior ratio measures while overcoming a feature that we consider undesirable. They also yield a telling result concerning formal accounts of explanatory power versus inductive confirmation, thereby bridging our discussion to a so-called no-miracle argument.
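As a point of reference, Schupbach and Sprenger's measure of explanatory power, a prominent member of the posterior ratio family discussed above, can be sketched as follows (our gloss; the input probabilities below are hypothetical):

```python
# Sketch of Schupbach & Sprenger's explanatory power measure, an increasing
# function of the posterior ratio P(H|E) / P(H|not-E).

def explanatory_power_ss(p_h_given_e, p_h_given_not_e):
    """E(E, H) = (P(H|E) - P(H|not-E)) / (P(H|E) + P(H|not-E)),
    ranging from -1 (E maximally undermines H) to 1 (E maximally
    supports H), with 0 marking explanatory irrelevance."""
    return (p_h_given_e - p_h_given_not_e) / (p_h_given_e + p_h_given_not_e)

power = explanatory_power_ss(0.8, 0.2)   # ≈ 0.6: H explains E substantially
neutral = explanatory_power_ss(0.5, 0.5) # = 0.0: E is explanatorily irrelevant
```

Because the measure depends only on the ratio of the two posteriors, any strictly increasing function of that ratio is ordinally equivalent to it, which is what a representation theorem for the family makes precise.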
The so-called problem of irrelevant conjunction has been seen as a serious challenge for theories of confirmation. It involves the consequences of conjoining irrelevant statements to a hypothesis that is confirmed by some piece of evidence. Following Hawthorne and Fitelson, we reconstruct the problem with reference to Bayesian confirmation theory. We then extend it to the case of conjoining irrelevant statements to a hypothesis that is disconfirmed by some piece of evidence. As a consequence, we obtain and formally present a novel and more troublesome problem of irrelevant conjunction. We conclude by indicating a possible solution based on a measure-sensitive approach and by critically discussing a major alternative way to address the problem. Received December 2008; revised August 2009.
Crupi et al. (Think Reason 14:182–199, 2008) have recently advocated and partially worked out an account of the conjunction fallacy phenomenon based on the Bayesian notion of confirmation. In response, Schupbach (2009) presented a critical discussion drawing on some novel experimental results. After briefly restating and clarifying the meaning and scope of our original proposal, we outline Schupbach's results and discuss his interpretation thereof, arguing that, properly construed, they do not actually undermine our point of view. Finally, we support this claim by means of some novel data.
Because the conjunction _p and q_ implies _p_, the value of a bet on _p and q_ cannot exceed the value of a bet on _p_ at the same stakes. We tested recognition of this principle in a betting paradigm that (a) discouraged misreading _p_ as _p and not-q_, and (b) encouraged a genuinely conjunctive reading of _p and q_. Frequent violations were nonetheless observed. The findings appear to discredit the idea that most people spontaneously integrate the logic of conjunction into their assessments of chance.
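The constraint at issue can be stated in a few lines of code. This is our illustration with hypothetical probabilities, not the authors' experimental materials:

```python
# Since "p and q" implies p, the conjunction rule gives P(p and q) <= P(p);
# hence, at fixed stakes, a bet on the conjunction can never be worth more
# than a bet on p alone.

def bet_value(prob_win, stake, payout):
    """Expected value of a bet that wins `payout` with probability
    `prob_win` and loses `stake` otherwise."""
    return prob_win * payout - (1 - prob_win) * stake

# Hypothetical probabilities respecting the conjunction rule.
p_p, p_pq = 0.6, 0.45
assert p_pq <= p_p
assert bet_value(p_pq, stake=10, payout=10) <= bet_value(p_p, stake=10, payout=10)
```

Betting on the conjunction at a higher price than on the conjunct alone is thus a sure loss in expectation, which is what makes observed violations a genuine normative failure rather than a mere verbal misunderstanding.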
Inductive reasoning requires exploiting links between evidence and hypotheses. This can be done by focusing either on the posterior probability of the hypothesis when updated on the new evidence or on the impact of the new evidence on the credibility of the hypothesis. But are these two cognitive representations equally reliable? This study investigates this question by comparing probability and impact judgments on the same experimental materials. The results indicate that impact judgments are more consistent over time and more accurate than probability judgments. Impact judgments also predict the direction of errors in probability judgments. These findings suggest that human inductive reasoning relies more on estimating evidential impact than on posterior probability.
According to Kanazawa (Psychol Rev 111:512–523, 2004), general intelligence, which he considers a synonym of abstract thinking, evolved specifically to allow our ancestors to deal with evolutionarily novel problems while conferring no advantage in solving evolutionarily familiar ones. We present a study whose results contradict Kanazawa's hypothesis by demonstrating that performance on an evolutionarily novel problem (an abstract reasoning task) predicts performance on an evolutionarily familiar problem (a social reasoning task).
Causal knowledge is not static; it is constantly modified based on new evidence. The present set of seven experiments explores 1 important case of causal belief revision that has been neglected in research so far: causal interpolations. A simple prototypic case of an interpolation is a situation in which we initially have knowledge about a causal relation or a positive covariation between 2 variables but later become interested in the mechanism linking these 2 variables. Our key finding is that the interpolation of mechanism variables tends to be misrepresented, which leads to the paradox of knowing more: The more people know about a mechanism, the weaker they tend to find the probabilistic relation between the 2 variables (i.e., the weakening effect). Indeed, in all our experiments we found that, despite identical learning data about 2 variables, the probability linking the 2 variables was judged higher when follow-up research showed that the 2 variables were directly causally linked (i.e., C→E) than when participants were instructed that the causal relation is in fact mediated by a variable representing a component of the mechanism (M; i.e., C→M→E). Our explanation of the weakening effect is that people often confuse discoveries of preexisting but unknown mechanisms with situations in which new variables are being added to a previously simpler causal model, thus violating causal stability assumptions in natural kind domains. The experiments test several implications of this hypothesis.
In a series of three behavioral experiments, we found a systematic distortion of probability judgments concerning elementary visual stimuli. Participants were briefly shown a set of figures that had two features (e.g., a geometric shape and a color) with two possible values each (e.g., triangle or circle and black or white). A figure was then drawn, and participants were informed about the value of one of its features (e.g., that the figure was a "circle") and had to predict the value of the other feature (e.g., whether the figure was "black" or "white"). We repeated this procedure for various sets of figures and, by varying the statistical association between features in the sets, we manipulated the probability of a feature given the evidence of another (e.g., the posterior probability of hypothesis "black" given the evidence "circle") as well as the support provided by one feature to another (e.g., the impact, or confirmation, of evidence "circle" on the hypothesis "black"). Results indicated that participants' judgments were deeply affected by impact, although they should have depended only on the probability distributions over the features, and that the dissociation between evidential impact and posterior probability increased the number of errors. The implications of these findings for lower and higher level cognitive models are discussed.
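A toy stimulus set (our hypothetical frequencies, not the experimental materials) shows how posterior probability and evidential impact can dissociate in exactly the way the design exploits:

```python
# "Circle" confirms "black" (posterior exceeds prior) even though "white"
# remains the more probable color among circles.
from fractions import Fraction

# Hypothetical (shape, color) frequencies in a set of 20 figures.
counts = {("circle", "black"): 4, ("circle", "white"): 6,
          ("triangle", "black"): 1, ("triangle", "white"): 9}
total = sum(counts.values())

p_black = Fraction(counts[("circle", "black")] + counts[("triangle", "black")], total)
p_black_given_circle = Fraction(
    counts[("circle", "black")],
    counts[("circle", "black")] + counts[("circle", "white")])

# Posterior favors "white" among circles ...
assert p_black_given_circle < Fraction(1, 2)   # 4/10
# ... yet "circle" has positive impact on "black": posterior > prior.
assert p_black_given_circle > p_black          # 4/10 > 5/20
```

A participant tracking impact rather than posterior probability would answer "black" here, which is the kind of error pattern the manipulation is designed to detect.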
The European market has faced a series of recurrent food scares, e.g. mad cow disease, chicken flu, and dioxin contamination in chickens, salmon, and recently also in pigs (Italian newspaper Corriere della Sera, 07/12/2008). These food scares have had major short-term socio-economic consequences, eroding consumer confidence and decreasing the willingness to buy potentially risky food products. The research reported in this paper considered the role of commitment to a food product in the context of food scares, and in particular the effect of commitment on consumers' purchasing intentions, on their attitude towards the product, and on their trust in the food supply chain. After the initial commitment had been obtained, a threat scenario evoking a risk associated with a specific food was presented, and a wider, related request was then made. Finally, a questionnaire tested the effects of commitment on the participants' attitude towards the product. The results showed that prior commitment can increase consumers' behavioural intention to purchase and improve their attitude towards the food product, even in the presence of a potential hazard.