The paper focuses on extending to the first-order case the semantical program for modalities first introduced by Dana Scott and Richard Montague. We focus on the study of neighborhood frames with constant domains, and in the first part of the paper we offer a series of new completeness results for salient classical systems of first-order modal logic. Among other results, we show that it is possible to prove strong completeness results for normal systems without the Barcan Formula (like FOL + K) in terms of neighborhood frames with constant domains. The first-order models we present permit the study of many epistemic modalities recently proposed in computer science, as well as the development of adequate models for monadic operators of high probability. Models of this type are either difficult or impossible to build in terms of relational Kripkean semantics [40]. We conclude by introducing general first-order neighborhood frames with constant domains and offering a general completeness result for the entire family of classical first-order modal systems in terms of them, circumventing some well-known problems of propositional and first-order neighborhood semantics (mainly the fact that many classical modal logics are incomplete with respect to an unmodified version of either neighborhood or relational frames). We argue that the semantical program that thus arises offers the first complete semantic unification of the family of classical first-order modal logics.
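A minimal sketch may help fix ideas. The Python fragment below evaluates the box operator in a neighborhood model, simplified to the propositional case: each world is assigned a family of "neighborhoods" (sets of worlds), and □φ holds at w exactly when the truth set of φ belongs to N(w). The constant-domain first-order case discussed in the paper adds a fixed domain of individuals on top of this clause. The toy model and encoding are illustrative assumptions, not taken from the paper.

```python
# Worlds are labelled 0, 1, 2; a proposition is a frozenset of worlds.
# A neighborhood function assigns to each world a family of propositions.
W = {0, 1, 2}
N = {
    0: {frozenset({0, 1}), frozenset({0, 1, 2})},
    1: {frozenset({0, 1, 2})},
    2: set(),
}
V = {"p": frozenset({0, 1}), "q": frozenset({1, 2})}  # atomic valuations

def truth_set(phi):
    """Truth set of a formula encoded as nested tuples:
    'p' | ('not', phi) | ('and', phi, psi) | ('box', phi)."""
    if isinstance(phi, str):
        return V[phi]
    if phi[0] == "not":
        return frozenset(W) - truth_set(phi[1])
    if phi[0] == "and":
        return truth_set(phi[1]) & truth_set(phi[2])
    if phi[0] == "box":
        # The neighborhood clause: box-phi holds at w iff the truth
        # set of phi is one of w's neighborhoods.
        body = truth_set(phi[1])
        return frozenset(w for w in W if body in N[w])
    raise ValueError(f"unknown connective: {phi[0]}")

print(truth_set(("box", "p")))  # frozenset({0}) in this toy model
```

Note that N(w) need not be closed under supersets or intersections, which is exactly what lets neighborhood frames model non-normal (merely classical) systems.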
It is now well known that, on pain of triviality, the probability of a conditional cannot be identified with the corresponding conditional probability [25]. This surprising impossibility result has a qualitative counterpart. In fact, Peter Gärdenfors showed in [13] that believing ‘If A then B’ cannot be equated with the act of believing B on the supposition that A, as long as supposing obeys minimal Bayesian constraints. Recent work has shown that in spite of these negative results, the question ‘how to accept a conditional?’ has a clear answer. Even if conditionals are not truth-carriers, they do have precise acceptability conditions. Nevertheless, most epistemic models of conditionals do not provide acceptance conditions for iterated conditionals. One of the main goals of this essay is to provide a comprehensive account of the notion of epistemic conditionality covering all forms of iteration. First we propose an account of the basic idea of epistemic conditionality by studying the conditionals validated by epistemic models where iteration is permitted but not constrained by special axioms. Our modeling does not presuppose that epistemic states should be represented by belief sets (we only assume that to each epistemic state corresponds an associated belief state). A full encoding of the basic epistemic conditionals (encompassing all forms of iteration) is presented and a representation result is proved. In the second part of the essay we argue that the notion of change involved in the evaluation of conditionals is suppositional, and that this notion should be distinguished from the notion of updating (modelled by AGM and other methods). We conclude by considering how some of the recent modellings of iterated change fare as methods for iterated supposing.
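For readers who want the quantitative result behind the first sentence, one common textbook formulation of the Lewis-style triviality theorem (the result cited as [25]) runs as follows; it is given here for orientation and is not necessarily the exact version used in the paper. Suppose P(A → B) = P(B | A) for every P in a class of probability functions closed under conditionalization, and suppose P(A ∧ B) > 0 and P(A ∧ ¬B) > 0. Then:

```latex
\begin{align*}
P(A \to B) &= P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B)\\
           &= P(B \mid A \wedge B)\,P(B) + P(B \mid A \wedge \neg B)\,P(\neg B)\\
           &= 1 \cdot P(B) + 0 \cdot P(\neg B) \;=\; P(B),
\end{align*}
% so P(B | A) = P(A -> B) = P(B): the conditional probability collapses
% to the unconditional one, trivializing the language.
```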
This article elaborates on foundational issues in the social sciences and their impact on the contemporary theory of belief revision. Recent work in the foundations of economics has focused on the role external social norms play in choice. Amartya Sen has argued in [Sen93] that the traditional rationalizability approach used in the theory of rational choice has serious problems accommodating the role of social norms. Sen's more recent work [Sen96, Sen97] proposes how one might represent social norms in the theory of choice, and in a very recent article [BS07] Walter Bossert and Kotaro Suzumura develop Sen's proposal, offering an extension of the classical theory of choice that is capable of dealing with social norms. The first part of this article offers an alternative functional characterization of the extended notion of rationality employed by Bossert and Suzumura in [BS07]. This characterization, unlike the one offered in [BS07], represents a norm-sensitive notion of rationality in terms of a pure functional constraint unmediated by a notion of revealed preference (something that is crucial for the application developed in the second part of this article). This functional characterization is formulated for general domains (as is Bossert and Suzumura's characterization) and is therefore empirically more applicable than usual characterizations of rationality. Interestingly, the functional constraint we propose is a variant of a condition first entertained in [AGM85] by Carlos Alchourrón, Peter Gärdenfors and David Makinson in the area of belief change. The second part of this article applies the theory developed in the first part to the realm of belief change. We first point out that social norms can be invoked to concoct counterexamples against some postulates of belief change (like postulate (*7)) that are necessary for belief change to be relational. These examples constitute the epistemological counterpart of Sen's counterexamples against condition α in rational choice (as a matter of fact, Rott has shown in [Rot01] that condition α and postulate (*7) are mutually mappable). These examples are variants of examples Rott has recently presented in [Rot04]. One of our main goals in this article is to apply the theory developed in the first part to develop a theory of norm-inclusive belief change that circumvents the counterexamples. We offer a new axiomatization for belief change and we furnish correspondence results relating constraints of rational choice to postulates of belief change.
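To make the correspondence just mentioned concrete, here are standard formulations of the two conditions (the exact notation in [Rot01] may differ). For a choice function C defined on menus, Sen's condition α (contraction consistency) and AGM postulate (*7) read:

```latex
% Condition \alpha: removing unchosen alternatives cannot unseat a choice.
\[
  x \in S \subseteq T \ \text{and} \ x \in C(T)
  \ \Longrightarrow\ x \in C(S).
\]
% Postulate (*7) for a revision operator * on a belief set K:
\[
  K * (A \wedge B) \;\subseteq\; \mathrm{Cn}\big((K * A) \cup \{B\}\big).
\]
```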
Gerd Gigerenzer and Thomas Sturm have recently proposed a modest form of what they describe as a normative, ecological and limited naturalism. The basic move in their argument is to infer that certain heuristics we tend to use should be used in the right ecological setting. To address this argument, we first consider the case of a concrete heuristic called Take the Best (TTB). There are at least two variants of the heuristic, which we study by making explicit the choice functions they induce, extending these variants of TTB beyond binary choice. We argue that the naturalistic argument can be applied to only one of the two variants of the heuristic; we also argue that the argument for the extension requires paying attention to other “rational” virtues of heuristics aside from efficacy, speed, and frugality. This notwithstanding, we show that there is a way of extending the right variant of TTB to obtain a very well behaved heuristic that could be used to offer a stronger case for the naturalistic argument (in the sense that if this heuristic is used, it is also a heuristic that we should use). The second part of the article considers attempts to extend the naturalistic argument from algorithms dealing with inference to heuristics dealing with choice. Our focus is the so-called Priority Heuristic, which we extend from risk to uncertainty. In this setting, the naturalist argument seems more difficult to formulate, if it remains feasible at all. Normativity seems in this case extrinsic to the heuristic, whose main virtue seems to be its ability to describe actual patterns of choice. But it seems that a new version of the naturalistic argument used with partial success in the case of inference is unavailable to solve the normative problem of whether we should exhibit the patterns of choice that we actually display.
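A sketch of the binary-choice core of Take the Best may be useful here. The discrimination rule below follows the usual Gigerenzer–Goldstein presentation (a cue decides when exactly one object has a positive value); the cue names and example data are illustrative assumptions, and the paper's extensions beyond binary choice are not reproduced.

```python
import random

def take_the_best(a, b, cues):
    """Choose between objects a and b using cues ordered by
    (estimated) validity. Each cue maps an object to 1, 0, or None
    (unknown). The first cue on which exactly one object scores
    positive decides; if no cue discriminates, guess."""
    for cue in cues:  # assumed sorted by decreasing validity
        pa, pb = cue(a) == 1, cue(b) == 1
        if pa == pb:
            continue  # cue does not discriminate; try the next one
        return a if pa else b
    return random.choice([a, b])  # fall back to guessing

# Illustrative use: which of two cities is larger?
cities = {"A": {"capital": 1, "airport": 1},
          "B": {"capital": 0, "airport": 1}}
cues = [lambda c: cities[c]["capital"], lambda c: cities[c]["airport"]]
print(take_the_best("A", "B", cues))  # "A": the capital cue decides
```

The lexicographic stopping rule is what makes TTB fast and frugal: search halts at the first discriminating cue, so most cues are never consulted.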
We present a decision-theoretically motivated notion of contraction which, we claim, encodes the principles of minimal change and entrenchment. Contraction is seen as an operation whose goal is to minimize losses of informational value. The operation is also compatible with the principle that in contracting A one should preserve the sentences better entrenched than A (when the belief set contains A). Although the principle of minimal change and the latter motivation for entrenchment figure prominently among the basic intuitions in the works of, among others, Quine and Ullian (1978), Levi (1980, 1991), Harman (1988) and Gärdenfors (1988), formal accounts of belief change (AGM, KM – see Gärdenfors (1988); Katsuno and Mendelzon (1991)) have abandoned both principles (see Rott (2000)). We argue for the principles and we show how to construct a contraction operation that obeys both. An axiom system is proposed. We also prove that the decision-theoretic notion of contraction can be completely characterized in terms of the given axioms. Proving this type of completeness result is a well-known open problem in the field, whose solution requires employing both decision-theoretic techniques and logical methods recently used in belief change.
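The entrenchment principle referred to here can be stated precisely. The classical Gärdenfors–Makinson condition connecting an entrenchment order ≤ with contraction is the following (with A < B abbreviating A ≤ B and B ≰ A); the paper's own decision-theoretic construction is axiomatized separately and may diverge from this:

```latex
\[
  B \in K \div A
  \quad\Longleftrightarrow\quad
  B \in K \ \text{and} \ \big(\, \vdash A \ \text{or} \ A < A \vee B \,\big).
\]
```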
Daniel Ellsberg presented in Ellsberg (The Quarterly Journal of Economics 75:643–669, 1961) various examples questioning the thesis that decision making under uncertainty can be reduced to decision making under risk. These examples constitute one of the main challenges to the received view on the foundations of decision theory offered by Leonard Savage in Savage (1972). Craig Fox and Amos Tversky have, nevertheless, offered an indirect defense of Savage. They provided in Fox and Tversky (1995) an explanation of Ellsberg’s two-color problem in terms of a psychological effect: ambiguity aversion. The ‘comparative ignorance’ hypothesis articulates how this effect works and explains why it is important to an understanding of the typical pattern of responses associated with Ellsberg’s two-color problem. In the first part of this article we challenge Fox and Tversky’s explanation. We first present an experiment that extends Ellsberg’s two-color problem where certain predictions of the comparative ignorance hypothesis are not confirmed. In addition, the hypothesis seems unable to explain how the subjects resolve trade-offs between security and expected pay-off when vagueness is present. Ellsberg offered an explanation of the typical behavior elicited by his examples in terms of these trade-offs, and in section three we offer a model of Ellsberg’s trade-offs. The model takes seriously the role of imprecise probabilities in explaining Ellsberg’s phenomenon. The so-called three-color problem was also considered in Fox and Tversky (1995). We argue that Fox and Tversky’s analysis of this case breaks a symmetry with their analysis of the two-color problem. We propose a unified treatment of both problems and we present an experiment that confirms our hypothesis.
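To illustrate how imprecise probabilities can capture the security/expected-payoff trade-off, here is a minimal Γ-maximin computation for the two-color problem. The probability interval for the vague urn and the decision rule are illustrative assumptions for this sketch, not the model developed in section three of the paper.

```python
def expectation(p_red, bet_on_red, stake=100):
    """Expected payoff of a bet paying `stake` if the chosen color
    is drawn, given chance p_red of drawing red."""
    p_win = p_red if bet_on_red else 1 - p_red
    return p_win * stake

# Clear urn: known 50/50 composition.
clear_value = expectation(0.5, bet_on_red=True)

# Vague urn: composition only known to lie in an interval, so a bet
# is evaluated by its *lower* expectation over the credal set.
credal_set = [p / 100 for p in range(30, 71)]  # p_red in [0.3, 0.7]
vague_value = min(expectation(p, bet_on_red=True) for p in credal_set)

print(clear_value, vague_value)  # 50.0 30.0: the clear urn is preferred
```

The gap between the two values (here 50 vs. 30) is one way of quantifying why subjects prefer betting on the clear urn without any appeal to a separate ambiguity-aversion effect.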
Following the pioneering work of Bruno de Finetti [12], conditional probability spaces (allowing for conditioning with events of measure zero) have been studied since (at least) the 1950s. Perhaps the most salient axiomatizations are Karl Popper's in [31] and Alfréd Rényi's in [33]. Nonstandard probability spaces [34] are a well-known alternative to this approach. Vann McGee proposed in [30] a result relating both approaches by showing that the standard values of infinitesimal probability functions are representable as Popper functions, and that every Popper function is representable in terms of the standard real values of some infinitesimal measure. Our main goal in this article is to study the constraints on (qualitative and probabilistic) change imposed by an extended version of McGee's result. We focus on an extension capable of allowing for iterated changes of view. Such an extension, we argue, seems to be needed in almost all considered applications. Since most of the available axiomatizations stipulate (definitionally) important constraints on iterated change, we propose a non-question-begging framework, Iterative Probability Systems (IPS), and we show that every Popper function can be regarded as a Bayesian IPS. A generalized version of McGee's result is then proved and several of its consequences considered. In particular we note that our proof requires the imposition of Cumulativity, i.e. the principle that a proposition that is accepted at any stage of an iterative process of acceptance will continue to be accepted at any later stage. The plausibility and range of applicability of Cumulativity is then studied. In particular we appeal to a method for defining belief from conditional probability (first proposed in [42] and then slightly modified in [6] and [3]) in order to characterize the notion of qualitative change induced by Cumulative models of probability kinematics. The resulting cumulative notion is then compared with existing axiomatizations of belief change and probabilistic supposition. We also consider applications in the probabilistic accounts of conditionals [1] and [30].
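For reference, one standard axiomatization of Popper functions (two-place primitive conditional probabilities on a language L) is the following. The precise axiom set in [31] and in its later presentations varies, so take this as representative rather than verbatim:

```latex
\begin{align*}
&\text{(P1)}\quad 0 \le P(A \mid B) \le P(A \mid A) = 1;\\
&\text{(P2)}\quad \text{if } P(D \mid C) \ne 1 \text{ for some } D
  \text{ (i.e. } C \text{ is normal)},\\
&\qquad\quad\ \text{then } P(\cdot \mid C)
  \text{ is a finitely additive probability measure};\\
&\text{(P3)}\quad P(A \wedge B \mid C) = P(A \mid B \wedge C)\,P(B \mid C);\\
&\text{(P4)}\quad P(A \mid B) = P(A' \mid B') \text{ whenever }
  A \dashv\vdash A' \text{ and } B \dashv\vdash B'.
\end{align*}
```

The point of (P2) is that conditioning on abnormal (probability-zero-like) events does not force triviality, which is what makes Popper functions a natural target for representing infinitesimal measures.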
Let L be a language containing the modal operator B (for full belief). An information model is a set E of stable L-theories. A sentence is valid if it is accepted in all theories of every model.
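The notion of stability at work here can be spelled out as follows; this is roughly Stalnaker's notion of a stable set, and the official definition in the paper may add conditions specific to L:

```latex
% A deductively closed theory T of L is stable when, for every sentence \varphi:
\[
  \varphi \in T \ \Rightarrow \ B\varphi \in T,
  \qquad
  \varphi \notin T \ \Rightarrow \ \neg B\varphi \in T.
\]
% Validity in an information model E is then membership in every theory:
% E \models \varphi \iff \varphi \in T \text{ for all } T \in E.
```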
Building on work that we reported at ISIPTA 2005, we revisit claims made by Fox and Tversky concerning their "comparative ignorance" hypothesis for decision making under uncertainty.
This special issue presents a series of articles focusing on recent work in formal epistemology and formal philosophy. The articles in the latter category elaborate on the notion of context and content and their relationships. This work is not unrelated to recent developments in formal epistemology. Logical models of context, when connected with the representation of epistemic context, are clearly relevant for many issues considered by formal epistemologists. For example, the semantic framework Joe Halpern uses in his article for this issue has been applied elsewhere to solve problems in interactive epistemology.
The paper provides a framework for representing belief-contravening hypotheses in games of perfect information. The resulting t-extended information structures are used to encode the notion that a player has the disposition to behave rationally at a node. We show that there are models where the condition of all players possessing this disposition at all nodes (under their control) is both necessary and sufficient for them to play the backward induction solution in centipede games. To obtain this result, we do not need to assume that rationality is commonly known (as is done in [Aumann (1995)]) or commonly hypothesized by the players (as done in [Samet (1996)]). The proposed model is compared with the account of hypothetical knowledge presented by Samet in [Samet (1996)] and with other possible strategies for extending information structures with conditional propositions.
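As a baseline for the solution concept discussed above, here is the backward-induction computation for a small centipede game. The payoff scheme is the usual textbook one (a growing pot, with stopping favoring the current mover) and is an illustrative assumption, not the paper's formalism.

```python
def backward_induction(stop_payoffs, end_payoffs):
    """Backward induction for a two-player centipede game.
    stop_payoffs[i] = (payoff to player 1, payoff to player 2) if the
    mover at node i stops; players alternate, player 1 moving at node 0.
    end_payoffs = payoffs if every mover continues to the end.
    Returns (optimal action at each node, resulting payoffs)."""
    actions = []
    value = end_payoffs  # payoffs if play goes past the last node
    for i in reversed(range(len(stop_payoffs))):
        mover = i % 2  # player 1 at even nodes, player 2 at odd nodes
        if stop_payoffs[i][mover] >= value[mover]:
            value = stop_payoffs[i]   # stopping is at least as good
            actions.append("stop")
        else:
            actions.append("continue")
    return list(reversed(actions)), value

# A 4-node centipede: the pot grows along the path, but at each node
# the mover does better stopping now than letting the other stop next.
stops = [(2, 0), (1, 3), (4, 2), (3, 5)]
print(backward_induction(stops, end_payoffs=(6, 4)))
# (['stop', 'stop', 'stop', 'stop'], (2, 0)): play ends immediately.
```

The unravelling visible in the output is precisely what the paper's dispositional condition is meant to characterize without assuming common knowledge of rationality.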
In (Hertwig et al., 2003) Hertwig et al. draw a distinction between decisions from experience and decisions from description. In a decision from experience an agent does not have a summary description of the possible outcomes or their likelihoods. A career choice, deciding whether to back up a computer hard drive, cross a busy street, etc., are typical examples of decisions from experience. In such decisions agents can rely only on their encounters with the corresponding prospects. By contrast, an agent furnished with information sources such as drug-package inserts or mutual-fund brochures—all of which describe risky prospects—will often make decisions from description. In (Hertwig et al., 2003) it is shown (empirically) that decisions from experience and decisions from description can lead to dramatically different choice behavior. Most of these results (summarized and analyzed in (Hertwig, 2009)) are concerned with the role of risk in decision making. This article presents some preliminary results concerning the role of uncertainty in decision-making. We focus on Ellsberg’s two-color problem and consider a chance setup based on double sampling. We report empirical results indicating that, in decisions from description where subjects select between a clear urn, the chance setup based on double sampling, and Ellsberg’s vague urn, subjects perceive the chance setup as at least an intermediate option between clear and vague choices (and there is evidence indicating that the double-sampling chance setup is seen as operationally indistinguishable from the vague urn). We then suggest how the iterated chance setup can be used to study decisions from experience in the case of uncertainty.
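One natural reading of a double-sampling chance setup, used here purely for illustration (the paper's own setup may differ in detail), composes the urn by a first random draw and then draws a ball from the composed urn. Compounding the two stages makes the overall chance of red exactly one half, matching the clear urn, even though no single composition is ever known to the subject.

```python
import random

def double_sample():
    """Stage 1: compose the urn at random (k red balls out of 100,
    k uniform on 0..100). Stage 2: draw one ball from the composed
    urn. Returns True if the ball drawn is red."""
    k = random.randint(0, 100)          # stage 1: compose the urn
    return random.random() < k / 100    # stage 2: draw from it

trials = 100_000
freq_red = sum(double_sample() for _ in range(trials)) / trials
print(round(freq_red, 2))  # ~0.50: compounded chance equals the clear urn
```

Whether subjects treat this setup like the clear urn (as the compound chance warrants) or like the vague urn is exactly the empirical question the article reports on.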
One of the reasons for adopting hyperbolic discounting is to explain preference reversals. Another is that this value structure suggests an elegant theory of the will. I examine the capacity of the theory to solve Newcomb's problem. In addition, I compare Ainslie's account with other procedural theories of choice that seem at least equally capable of accommodating reversals of preference.
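The preference reversals mentioned here follow directly from the shape of the hyperbolic curve. A standard numerical illustration, with an arbitrary parameter k = 1 chosen for this sketch, is the following:

```latex
% Hyperbolic discounting: V(A, D) = A / (1 + kD), delay D, k = 1.
% Smaller-sooner reward: \$100 at delay D; larger-later: \$110 at D + 1.
\[
  D = 1:\quad \frac{100}{2} = 50 \;>\; \frac{110}{3} \approx 36.7
  \qquad \text{(take the smaller-sooner)}
\]
\[
  D = 10:\quad \frac{100}{11} \approx 9.1 \;<\; \frac{110}{12} \approx 9.2
  \qquad \text{(take the larger-later)}
\]
% Under exponential discounting V(A, D) = A\,\delta^{D}, the ratio of
% the two values is independent of D, so no such reversal can occur.
```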
Carlos Alchourrón, Peter Gärdenfors and David Makinson published in 1985 a seminal article on belief change in the Journal of Symbolic Logic. Researchers from various disciplines, from computer science to mathematical economics to philosophical logic, have continued the work first presented in that article during the last two decades. This paper explores some salient foundational trends that interpret the act of changing view as a decision. We will argue that some of these foundational trends are already present, although only tacitly, in the original article by the AGM trio. Other accounts decidedly depart from the view of contraction and revision presented there. I shall survey various types of theories that progressively depart from the axiomatic treatment defended by AGM. First, I consider theories where rational agents are treated as maximizers as opposed to optimizers. Second, I consider which feasible set to use in contraction understood as a cognitive decision, which leads us to rethink the very notion of what minimal change in contraction is. I shall conclude with some philosophical reflections concerning the sort of epistemological voluntarism that is tacit in seeing change in view as a rational choice.
Normative accounts in terms of similarity can be deployed in order to provide semantics for systems of context-free default rules and other sophisticated conditionals. In contrast, procedural accounts of decision in terms of similarity (Rubinstein 1997) are hard to reconcile with the normative rules of rationality used in decision-making, even when suitably weakened.
The "Ellsberg phenomenon" has played a significant role in research on imprecise probabilities. Fox and Tversky [5] have attempted to explain this phenomenon in terms of their "comparative ignorance" hypothesis. We challenge that explanation and present empirical work suggesting an explanation that is much closer to Ellsberg's own diagnosis.
In a series of recent articles Angelika Kratzer has argued that the standard account of modality along Kripkean lines is inadequate for representing context-dependent modals. In particular, she argues that the standard account is unable to deliver a non-trivial account of modality capable of overcoming inconsistencies in the underlying conversational background.