A study is reported testing two hypotheses about a close parallel relation between indicative conditionals, if A then B, and conditional bets, I bet you that if A then B. The first is that both the indicative conditional and the conditional bet are related to the conditional probability, P(B|A). The second is that de Finetti's three-valued truth table has psychological reality for both types of conditional: true, false, or void for indicative conditionals and win, lose, or void for conditional bets. The participants were presented with an array of chips in two different colours and two different shapes, and an indicative conditional or a conditional bet about a random chip. They had to make judgements in two conditions: either about the chances of making the indicative conditional true or false or about the chances of winning or losing the conditional bet. The observed distributions of responses in the two conditions were generally related to the conditional probability, supporting the first hypothesis. In addition, a majority of participants in further conditions chose the third option, “void”, when the antecedent of the conditional was false, supporting the second hypothesis.
The new paradigm in the psychology of reasoning adopts a Bayesian, or probabilistic, model for studying human reasoning. Contrary to the traditional binary approach based on truth-functional logic, with its binary values of truth and falsity, a third value that represents uncertainty can be introduced in the new paradigm. A variety of three-valued truth table systems are available in the formal literature, including one proposed by de Finetti. We examine the descriptive adequacy of these systems for natural language indicative conditionals and bets on conditionals. Within our framework the so-called “defective” truth table, in which participants choose a third value when the antecedent of the indicative conditional is false, becomes a coherent response. We show that only de Finetti's system has a good descriptive fit when uncertainty is the third value.
Psychological research on people’s understanding of natural language connectives has traditionally used truth table tasks, in which participants evaluate the truth or falsity of a compound sentence given the truth or falsity of its components in the framework of propositional logic. One perplexing result concerned the indicative conditional if A then C, which was often evaluated as true when A and C are true, false when A is true and C is false, but “irrelevant” (devoid of value) when A is false (whatever the value of C). This was called the “defective truth table” of the conditional. Here we show that, far from being anomalous, the “defective” table pattern reveals a coherent semantics for the basic connectives of natural language in a trivalent framework. This was done by establishing participants’ truth tables for negation, conjunction, disjunction, the conditional, and the biconditional, when they were presented with statements that could be certainly true, certainly false, or neither. We review systems of three-valued tables from logic, linguistics, the foundations of quantum mechanics, philosophical logic, and artificial intelligence, to see whether one of these systems adequately describes people’s interpretations of natural language connectives. We find that de Finetti’s (1936/1995) three-valued system is the best approximation to participants’ truth tables.
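De Finetti's three-valued tables can be stated very compactly. As a minimal sketch (in Python, with illustrative names of my own; T, F, and U stand for true, false, and void/uncertain), negation, conjunction, and disjunction behave as in strong Kleene logic, while the conditional is void whenever its antecedent is not true — the signature "defective" pattern:

```python
# De Finetti's (1936) trivalent tables, sketched for illustration.
T, F, U = "T", "F", "U"
VALUES = (T, F, U)

def neg(a):
    # Negation swaps T and F; uncertainty stays uncertain.
    return {T: F, F: T, U: U}[a]

def conj(a, b):
    # Falsity dominates; both-true gives true; otherwise uncertainty propagates.
    if F in (a, b):
        return F
    return T if (a, b) == (T, T) else U

def disj(a, b):
    # Dual of conjunction: truth dominates.
    if T in (a, b):
        return T
    return F if (a, b) == (F, F) else U

def conditional(a, c):
    # "if A then C" takes the value of C when A is true, and is void otherwise:
    # in particular it is void whenever the antecedent is false.
    return c if a == T else U

for a in VALUES:
    for c in VALUES:
        print(f"A={a}  C={c}  if A then C = {conditional(a, c)}")
```

The table printed by the loop shows the pattern described above: the conditional is true or false only in the rows where A is true, and void in all other rows.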
The Bayesian model has been used in psychology as the standard reference for the study of probability revision. In the first part of this paper we show that this traditional choice restricts the scope of the experimental investigation of revision to a stable universe, a situation technically known as focusing. We argue that it is essential for a better understanding of human probability revision to consider another situation, called updating (Katsuno & Mendelzon, 1992), in which the universe is evolving. In that case the structure of the universe has definitely been transformed, and the revision message conveys information on the resulting universe. The second part of the paper presents four experiments based on the Monty Hall puzzle that aim to show that updating is a natural frame for individuals to revise their beliefs.
The new paradigm in the psychology of reasoning redirects the investigation of deduction conceptually and methodologically because the premises and the conclusion of the inferences are assumed to be uncertain. A probabilistic counterpart of the concept of logical validity, and a method to assess whether individuals comply with it, must be defined. Conceptually, we used de Finetti's coherence as a normative framework to assess individuals' performance. Methodologically, we presented inference schemas whose premises had various levels of probability conveyed by non-numerical expressions and, as a control, sure levels. Depending on the inference schemas, from 60% to 80% of the participants produced coherent conclusions when the premises were uncertain. The data also show that, except for schemas involving conjunction, performance was consistently lower with certain than with uncertain premises, and that the rate of the conjunction fallacy was consistently low (not exceeding 20%)…
This paper aims to make explicit the methodological conditions that should be satisfied for the Bayesian model to be used as a normative model of human probability judgment. After noticing the lack of a clear definition of Bayesianism in the psychological literature and the lack of justification for using it, a classic definition of subjective Bayesianism is recalled, based on the following three criteria: an epistemic criterion, a static coherence criterion, and a dynamic coherence criterion. Then it is shown that the adoption of this framework has two kinds of implications. The first one regards the methodology of the experimental study of probability judgment. The Bayesian framework creates pragmatic constraints on the methodology that are linked to the interpretation of, and the belief in, the information presented, or referred to, by an experimenter in order for it to be the basis of a probability judgment by individual participants. It is shown that these constraints have not been satisfied in the past, and the question of whether they can be satisfied in principle is raised and answered negatively. The second kind of implication consists of two limitations in the scope of the Bayesian model. They regard (1) the background of revision (the Bayesian model considers only revising situations but not updating situations), and (2) the notorious case of the null priors. In both cases Lewis's rule is an appropriate alternative to Bayes's rule, but its use faces the same operational difficulties.
This paper reviews the psychological investigation of reasoning with conditionals, putting an emphasis on recent work. In the first part, a few methodological remarks are presented. In the second part, the main theories of deductive reasoning (mental rules, mental models, and the probabilistic approach) are considered in turn; their content is summarised, and the semantics they assume for if and the way they explain formal conditional reasoning are discussed, in particular in the light of experimental work on the probability of conditionals. The last part presents the recent shift of interest towards the study of conditional reasoning in context, that is, with large knowledge bases and uncertain premises.
The explanation of the suppression of Modus Ponens inferences within the framework of linguistic pragmatics and of plausible reasoning (i.e., deduction from uncertain premises) is defended. First, this approach is expounded, and then it is shown that the results of the first experiment of Byrne, Espino, and Santamaría (1999) support the uncertainty explanation but fail to support their counterexample explanation. Second, two experiments are presented. In the first one, aimed at refuting an objection regarding the conclusions observed, the additional conditional premise (if N, C) was replaced with a statement of uncertainty (it is not certain that N); the answers produced by the participants remained qualitatively and quantitatively similar in both conditions. In the second experiment, a fine-grained analysis of the responses to, and justifications for, an evaluation task was performed. The results of both experiments strongly supported the uncertainty explanation.
This paper describes a cubic water tank equipped with a movable partition receiving various amounts of liquid, used to represent joint probability distributions. This device is applied to the investigation of deductive inferences under uncertainty. The analogy is exploited to determine, by qualitative reasoning, the limits in probability of the conclusion of twenty basic deductive arguments (such as Modus Ponens, And-introduction, Contraposition, etc.) often used as benchmark problems by the various theoretical approaches to reasoning under uncertainty. The probability bounds imposed by the premises on the conclusion are derived on the basis of a few trivial principles, such as “a part of the tank cannot contain more liquid than its capacity allows” or “if a part is empty, the other part contains all the liquid”. This stems from the equivalence between the physical constraints imposed by the capacity of the tank and its subdivisions on the volumes of liquid, and the axioms and rules of probability. The device materializes de Finetti's coherence approach to probability. It also suggests a physical counterpart of Dutch book arguments to assess individuals' rationality in probability judgments, in the sense that individuals whose degrees of belief in a conclusion are out of the bounds of the coherence intervals would commit themselves to executing physically impossible tasks.
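For illustration, the coherence intervals that the tank makes tangible for two of these benchmark arguments can be written down directly. This is a sketch under the standard probability-logic bounds, not code from the paper; the function names are mine:

```python
# Coherence intervals for two benchmark arguments, derived from the
# tank analogy: volumes of liquid cannot exceed the capacity of a part.

def modus_ponens_interval(p_a, p_b_given_a):
    """Given P(A) and P(B|A), return the interval of coherent values for P(B).
    The A-part of the tank contributes P(A)*P(B|A) of liquid to B for certain;
    the not-A part can contribute anywhere from nothing to all of 1 - P(A)."""
    lower = p_a * p_b_given_a
    return lower, lower + (1 - p_a)

def and_introduction_interval(p_a, p_b):
    """Given P(A) and P(B), return the interval of coherent values for P(A and B).
    The overlap of two parts is at most the smaller part, and at least what
    remains after the complements of both parts fill the rest of the tank."""
    return max(0.0, p_a + p_b - 1), min(p_a, p_b)

lo, hi = modus_ponens_interval(0.8, 0.9)
print(f"P(B) must lie in [{lo:.2f}, {hi:.2f}]")
```

A judged P(B) outside this interval is incoherent in de Finetti's sense: it would correspond to demanding more liquid from a part of the tank than its capacity allows.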
Language pragmatics is applied to analyse problem statements and instructions used in a few influential experimental tasks in the psychology of reasoning. This analysis aims to determine the interpretation of the task which the participant is likely to construct. It is applied to studies of deduction (where the interpretation of quantifiers and connectives is crucial) and to studies of inclusion judgment and probabilistic judgment. It is shown that the interpretation of the problem statements, or even the representation of the task as a whole, often turns out to differ from the experimenter's assumptions. This has serious consequences for the validity of these experimental results and therefore for the claims about human irrationality based on them.
Most instantiations of the inference ‘y; so if x, y’ seem intuitively odd, a phenomenon known as one of the paradoxes of the material conditional. A common explanation of the oddity, endorsed by Mental Model theory, is based on the intuition that the conclusion of the inference throws away semantic information. We build on this explanation to identify two joint conditions under which the inference becomes acceptable: (a) the truth of x has bearings on the relevance of asserting y; and (b) the speaker can reasonably be expected not to be in a position to assume that x is false. We show that this dual pragmatic criterion makes accurate predictions, and contrast it with the criterion defined by the mental model theory of conditionals, which we show to be inadequate.
When a new piece of information contradicts a currently held belief, one has to modify the set of beliefs in order to restore its consistency. In the case where it is necessary to give up a belief, some beliefs are less likely to be abandoned than others. The concept of epistemic entrenchment is used by some AI approaches to explain this fact based on formal properties of the belief set (e.g., transitivity). Two experiments were designed to test the hypothesis that, contrary to such views, (i) belief is naturally represented by degrees rather than in an all-or-nothing manner, (ii) entrenchment is primarily a matter of content and not only a matter of form, and (iii) consequently, prior degree of belief is a powerful factor of change. The two experiments used Elio and Pelletier's (1997) paradigm, in which participants were presented with full simple deductive arguments whose conclusion was denied, following which they were asked to decide which premise to revise.
The Bayesian model is used in psychology as the reference for the study of dynamic probability judgment. The main limit induced by this model is that it confines the study of the revision of degrees of belief to situations in which the universe is static (revising situations). However, individuals may have to revise their degrees of belief when the message they learn specifies a change of direction in a universe that is considered as changing with time (updating situations). We analyze the main results of the experimental literature with regard to elementary qualitative properties of these two situations of revision. First, the order effect phenomenon is confronted with the commutativity property. Second, an apparently new phenomenon is presented: the redundancy effect, which is confronted with the idempotence property. Finally, results obtained in this kind of experimental situation are reinterpreted in the light of pragmatic analysis.
Two notions from philosophical logic and linguistics are brought together and applied to the psychological study of defeasible conditional reasoning. The distinction between disabling conditions and alternative causes is shown to be a special case of Pollock's (1987) distinction between 'rebutting' and 'undercutting' defeaters. 'Inferential' conditionals are shown to come in two varieties, one that is sensitive to rebutters, the other to undercutters. It is thus predicted and demonstrated in two experiments that the type of inferential conditional used as the major premise of conditional arguments can reverse the heretofore classic, distinctive effects of defeaters.
We elaborate on the approach to syllogistic reasoning based on “case identification” (Stenning & Oberlander, 1995; Stenning & Yule, 1997). It is shown that this can be viewed as the formalisation of a method of proof that dates back to Aristotle, namely proof by exposition (ecthesis), and that there are traces of this method in the strategies described by a number of psychologists, from Störring (1908) to the present day. We hypothesised that by rendering individual cases explicit in the premises, the chance that reasoners would engage in a proof by exposition would be enhanced, and performance thus improved. To do so, we used syllogisms with singular premises (e.g., this X is Y). This resulted in a uniform increase in performance as compared to performance on the associated standard syllogisms. These results cannot be explained by the main theories of syllogistic reasoning in their current state.
When is a conclusion worth deriving? We claim that a conclusion is worth deriving to the extent that it is relevant in the sense of relevance theory (Sperber & Wilson, 1995). To support this hypothesis, we experiment with “indeterminate relational problems”, in which we ask participants what, if anything, follows from premises such as A is taller than B, A is taller than C. With such problems, the indeterminate response that nothing follows is common, and we explain why. We distinguish several types of determinate conclusions and show that their rate is a function of their relevance. We argue that by appropriately changing the formulation of the premises, the relevance of determinate conclusions can be increased, and the rate of indeterminate responses thereby reduced. We contrast these relevance-based predictions with predictions based on linguistic congruence.
I take up the four issues considered by Johnson-Laird, Byrne, and Girotto in their reply to Politzer. Based on the conceptual clarification which they adduce, it seems that the disagreement can be settled about the first one and attenuated about the second one. However, I maintain and refine my criticisms on the last two, backed up by considerations borrowed from the perspective of the conditional probability semantics for conditionals.
Although we endorse the primacy of uncertainty in reasoning, we argue that a probabilistic framework cannot model the fundamental skill of proof administration. Furthermore, we are skeptical about the assumption that standard probability calculus is the appropriate formalism to represent human uncertainty. There are other models up to this task, so let us not repeat the excesses of the past.
Natural syllogisms are expressed in terms of classes and properties of the real world. They exploit a categorisation present in semantic memory that provides a class inclusion structure. They are enthymematic and typically occur within a dialogue. Their form is identical to that of a formal syllogism once the minor premise is made explicit. It is claimed that reasoners routinely execute natural syllogisms in an effortless manner based on ecthesis, which is primed by the class inclusion structure kept in long-term memory.
We present a set-theoretic model of the mental representation of classically quantified sentences (All P are Q, Some P are Q, Some P are not Q, and No P are Q). We take inclusion, exclusion, and their negations to be primitive concepts. We show that although these sentences are known to have a diagrammatic expression (in the form of the Gergonne circles) that constitutes a semantic representation, these concepts can also be expressed syntactically in the form of algebraic formulas. We hypothesized that the quantified sentences have an abstract underlying representation common to the formulas and their associated sets of diagrams (models). We derived 9 predictions (3 semantic, 2 pragmatic, and 4 mixed) regarding people’s assessment of how well each of the 5 diagrams expresses the meaning of each of the quantified sentences. We report the results from 3 experiments using Gergonne’s (1817) circles or an adaptation of Leibniz’s (1903/1988) lines as external representations and show them to support the predictions.
It is argued that, in the traditional subject-predicate sentence, two interpretations of the subject term coexist, one intensional and the other extensional, which explains the superficial difference between the traditional S-P relation and the predication of predicate logic. Data from psychological studies of syllogistic reasoning support the view that the contrast between predicate and argument is carried over to the traditional S-P sentence.
“Natural syllogisms” are arguments formally identifiable with categorical syllogisms that have an implicit universal affirmative premise, retrieved from semantic memory rather than explicitly stated. Previous studies with adult participants have shown that the rate of success is remarkably high. Because their resolution requires only the use of a simple strategy and an operational use of the concept of inclusion, it was hypothesized that these syllogisms would be within the grasp of non-adult participants, provided they have acquired the notion of deductive validity. Here, 11-year-old children were presented with natural syllogisms embedded in short dialogs. The first experiment showed that their performance was equivalent to adults' highest level of performance in standard experiments on syllogisms. The second experiment, while confirming children's proficiency in solving natural syllogisms, showed that they outperformed children who solved non-natural matched syllogisms in the same experimental setting. The results are also in agreement with the argumentation theory of reasoning.
The Pigeonhole Principle states that if n items are sorted into m categories and n > m, then at least one category must contain more than one item. For instance, if 22 pigeons are put into 17 pigeonholes, at least one pigeonhole must contain more than one pigeon. This principle seems intuitive, yet when told about a city with 220,000 inhabitants none of whom has more than 170,000 hairs on their head, many people think that it is merely likely, rather than certain, that two inhabitants have the exact same number of hairs. This failure to apply the Pigeonhole Principle might be due to the large numbers used, or to the cardinal rather than nominal presentation of these numbers. We show that performance improved both when the numbers were presented nominally and when they were small, albeit less so. We discuss potential interpretations of these results in terms of intuition and reasoning.
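The logic of the hair example can be sketched in a few lines. The counts are from the abstract; the function names and the random simulation are illustrative additions of mine. With hair counts ranging over 0..170,000 there are only 170,001 possible values, so 220,000 inhabitants cannot all differ:

```python
import random

def guaranteed_collision(n_items, n_categories):
    # Pigeonhole Principle: with n items in m categories, a repeat is
    # certain exactly when n > m.
    return n_items > n_categories

def simulate(n_items, n_categories, seed=1):
    # Assign items to categories at random and check for a repeat.
    # When n > m, every possible assignment contains one, so this always
    # returns True regardless of the seed.
    rng = random.Random(seed)
    counts = [0] * n_categories
    for _ in range(n_items):
        counts[rng.randrange(n_categories)] += 1
    return max(counts) > 1

# 220,000 inhabitants; hair counts in 0..170,000, i.e. 170,001 categories.
print(guaranteed_collision(220_000, 170_001))  # True: a shared count is certain
print(simulate(220_000, 170_001))              # True, as it must be
```

The simulation adds nothing to the deductive argument; it merely makes concrete that no assignment of 220,000 items to 170,001 categories can avoid a repeat.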