We propose a modal logic based on three operators, representing initial beliefs, information and revised beliefs. Three simple axioms are used to provide a sound and complete axiomatization of the qualitative part of Bayes’ rule. Some theorems of this logic are derived concerning the interaction between current beliefs and future beliefs. Information flows and iterated revision are also discussed.
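The qualitative part of Bayes’ rule can be sketched semantically in Python. This is our own possible-worlds illustration, not the paper’s syntax: when the information is consistent with the initial beliefs, the revised beliefs are simply their intersection; the plausibility-based fallback for contradicting information is an added assumption borrowed from AGM-style semantics.

```python
def qualitative_bayes(initial, info, plausibility):
    """Revised belief set under the qualitative part of Bayes' rule.

    initial: set of states the agent considers possible ex ante
    info: set of states compatible with the information received
    plausibility: dict state -> rank (lower = more plausible); used only
        when the information contradicts the initial beliefs (an AGM-style
        fallback, added here for illustration).
    """
    if initial & info:
        # information consistent with initial beliefs: condition on it
        return initial & info
    # otherwise retain the most plausible states compatible with the info
    best = min(plausibility[s] for s in info)
    return {s for s in info if plausibility[s] == best}
```

For example, with initial beliefs {1, 2} and information {2, 3}, the revised beliefs are {2}; if the information contradicts the initial beliefs, only the most plausible compatible states survive.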
We consider strategic-form games with ordinal payoffs and provide a syntactic analysis of common belief/knowledge of rationality, which we define axiomatically. Two axioms are considered. The first says that a player is irrational if she chooses a particular strategy while believing that another strategy is better. We show that common belief of this weak notion of rationality characterizes the iterated deletion of pure strategies that are strictly dominated by pure strategies. The second axiom says that a player is irrational if she chooses a particular strategy while believing that a different strategy is at least as good and she considers it possible that this alternative strategy is actually better than the chosen one. We show that common knowledge of this stronger notion of rationality characterizes the restriction to pure strategies of the iterated deletion procedure introduced by Stalnaker (1994). Frame characterization results are also provided.
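The deletion procedure characterized by the first axiom — iterated elimination of pure strategies strictly dominated by pure strategies — can be sketched as follows. This is a hypothetical implementation; the representation of payoffs as functions on strategy profiles is our own choice.

```python
from itertools import product

def iterated_strict_dominance(payoffs, strategies):
    """Iteratively delete pure strategies strictly dominated by pure strategies.

    payoffs: dict player index -> function mapping a strategy profile
             (tuple, one entry per player) to that player's ordinal payoff
    strategies: list of lists; strategies[i] = pure strategies of player i
    Returns the surviving strategies of each player.
    """
    strategies = [list(s) for s in strategies]
    changed = True
    while changed:
        changed = False
        for i, s_i in enumerate(strategies):
            others = [strategies[j] for j in range(len(strategies)) if j != i]

            def profiles(si):
                # all profiles in which player i plays si
                for opp in product(*others):
                    yield tuple(opp[:i]) + (si,) + tuple(opp[i:])

            dominated = set()
            for a in s_i:
                for b in s_i:
                    if a == b:
                        continue
                    # b strictly dominates a: strictly better against
                    # every profile of the opponents' surviving strategies
                    if all(payoffs[i](pb) > payoffs[i](pa)
                           for pa, pb in zip(profiles(a), profiles(b))):
                        dominated.add(a)
                        break
            if dominated:
                strategies[i] = [s for s in s_i if s not in dominated]
                changed = True
    return strategies
```

In a Prisoner’s Dilemma, for instance, cooperation is strictly dominated for both players and one round of deletion leaves only the defect profile.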
For the past 20 years or so the literature on noncooperative games has been centered on the search for an equilibrium concept that expresses the notion of rational behavior in interactive situations. A basic tenet in this literature is that if a “rational solution” exists, it must be a Nash equilibrium. The consensus view, however, is that not all Nash equilibria can be accepted as rational solutions. Consider, for example, the game of Figure 1.
The theory of belief revision deals with (rational) changes in beliefs in response to new information. In the literature a distinction has been drawn between belief revision and belief update. The former deals with situations where the objective facts describing the world do not change (so that only the beliefs of the agent change over time), while the latter allows for situations where both the facts and the doxastic state of the agent change over time. We focus on belief revision and propose a temporal framework that allows for iterated revision. We model the notion of “minimal” or “conservative” belief revision by considering logics of increasing strength. We move from one logic to the next by adding one or more axioms and show that the corresponding logic captures more stringent notions of minimal belief revision. The strongest logic that we propose provides a full axiomatization of the well-known AGM theory of belief revision.
An information completion of an extensive game is obtained by extending the information partition of every player from the set of her decision nodes to the set of all nodes. The extended partition satisfies Memory of Past Knowledge (MPK) if at any node a player remembers what she knew at earlier nodes. It is shown that MPK can be satisfied in a game if and only if the game is von Neumann (vN) and satisfies memory at decision nodes (the restriction of MPK to a player's own decision nodes). A game is vN if any two decision nodes that belong to the same information set of a player have the same number of predecessors. By providing an axiom for MPK we also obtain a syntactic characterization of the said class of vN games.
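The von Neumann condition is easy to check computationally: every node in an information set must sit at the same depth of the tree. A minimal sketch, assuming the tree is given as a parent map and information sets as sets of nodes (both representations are our own):

```python
def depth(parent, x):
    """Number of predecessors of node x in a tree given as a parent map."""
    d = 0
    while parent[x] is not None:
        x = parent[x]
        d += 1
    return d

def is_von_neumann(parent, info_sets):
    """True if every information set contains only nodes of equal depth."""
    return all(len({depth(parent, x) for x in h}) == 1 for h in info_sets)
```

For example, two nodes that are both one move from the root may share an information set in a vN game, whereas nodes at different depths may not.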
In an earlier paper [Rational choice and AGM belief revision, _Artificial Intelligence_, 2009] a correspondence was established between the set-theoretic structures of revealed-preference theory (developed in economics) and the syntactic belief revision functions of the AGM theory (developed in philosophy and computer science). In this paper we extend the re-interpretation of those structures in terms of one-shot belief revision by relating them to the trichotomous attitude towards information studied in Garapa (Rev Symb Logic, 1–21, 2020) where information may be either (1) fully accepted or (2) rejected or (3) taken seriously but not fully accepted. We begin by introducing the syntactic notion of _filtered belief revision_ and providing a characterization of it in terms of a mixture of both AGM revision and contraction. We then establish a correspondence between the proposed notion of filtered belief revision and the above-mentioned set-theoretic structures, interpreted as semantic _partial_ belief revision structures. We also provide an interpretation of the trichotomous attitude towards information in terms of the degree of implausibility of the information.
Adapting a definition introduced by Milgrom (1981) we say that a signal about the environment is good news relative to some initial beliefs if the posterior beliefs dominate the initial beliefs in the sense of first-order stochastic dominance (the assumption being that higher values of the parameter representing the environment mean better environments). We give an example where good news leads to the adoption of a more pessimistic course of action (we say that action a′ reveals greater pessimism than action a″ if it gives higher payoff in bad environments and lower payoff in good environments). We then give sufficient conditions for a signal not to induce a more pessimistic choice of action.
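First-order stochastic dominance over a finite ordered set of environment values can be checked directly: the dominating distribution must have a pointwise lower (or equal) cumulative distribution function. A minimal sketch, with distributions represented as value-to-probability dictionaries (our own representation):

```python
def first_order_dominates(p, q, values):
    """True if distribution p first-order stochastically dominates q.

    p, q: dicts mapping each value to its probability
    values: the common ordered support (higher value = better environment)
    Dominance holds iff the CDF of p never exceeds the CDF of q.
    """
    cdf_p = cdf_q = 0.0
    for v in sorted(values):
        cdf_p += p.get(v, 0.0)
        cdf_q += q.get(v, 0.0)
        if cdf_p > cdf_q + 1e-12:   # tolerance for floating-point error
            return False
    return True
```

A signal is good news in Milgrom’s sense precisely when the posterior passes this check against the prior.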
We study belief change in the branching-time structures introduced in Bonanno (Artif Intell 171:144–160, 2007). First, we identify a property of branching-time frames that is equivalent (when the set of states is finite) to AGM-consistency, which is defined as follows. A frame is AGM-consistent if the partial belief revision function associated with an arbitrary state-instant pair and an arbitrary model based on that frame can be extended to a full belief revision function that satisfies the AGM postulates. Second, we provide a set of modal axioms that characterize the class of AGM-consistent frames within the modal logic introduced in Bonanno (Artif Intell 171:144–160, 2007). Third, we introduce a generalization of AGM belief revision functions that allows a clear statement of principles of iterated belief revision and discuss iterated revision both semantically and syntactically.
The principle of belief persistence, or conservativity principle, states that ‘when changing beliefs in response to new evidence, you should continue to believe as many of the old beliefs as possible’ (Harman, 1986, p. 46). In particular, this means that if an individual gets new information, she has to accommodate it in her new belief set (the set of propositions she believes), and, if the new information is not inconsistent with the old belief set, then (1) the individual has to maintain all the beliefs she previously had and (2) the change should be minimal in the sense that every proposition in the new belief set must be deducible from the union of the old belief set and the new information (see, e.g., Gärdenfors, 1988; Stalnaker, 1984). We focus on this minimal notion of belief persistence and characterize it both semantically and syntactically. A ‘possible world’ semantic formalization of the principle easily comes to mind. The set of all the propositions that the individual believes corresponds to the set of states of the world that she considers possible and is a subset of the set of states that are not ruled out by the individual's information (or knowledge). It is required that, if the individual considers a state possible and her new information does not exclude this state, then she continue to consider it possible. Furthermore, if the individual regards a particular state as impossible, then she should continue to regard it as impossible unless her new information excludes all the states that she previously regarded as possible. This is closely related to the…
We establish a correspondence between the rationalizability of choice studied in the revealed preference literature and the notion of minimal belief revision captured by the AGM postulates. A choice frame consists of a set of alternatives Ω, a collection E of subsets of Ω (representing possible choice sets) and a function f : E → 2^Ω (representing choices made). A choice frame is rationalizable if there exists a total pre-order R on…
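Given a candidate total pre-order, checking whether it rationalizes a choice frame amounts to verifying that each recorded choice coincides with the set of most-preferred elements of its choice set. A minimal sketch (the encoding of the pre-order as a numeric rank, lower meaning more preferred, is our own simplification):

```python
def rationalizes(rank, frame):
    """Check whether the total pre-order encoded by `rank` rationalizes
    a choice frame.

    rank: dict alternative -> number (lower = more preferred; ties allowed)
    frame: iterable of (E, chosen) pairs, where E is a choice set and
           chosen is the set of alternatives selected from E
    """
    for E, chosen in frame:
        best = min(rank[x] for x in E)
        # the choice must be exactly the rank-maximal elements of E
        if set(chosen) != {x for x in E if rank[x] == best}:
            return False
    return True
```

A frame is rationalizable when some such rank function passes this check for every choice set.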
The notion of perfect recall in extensive games was introduced by Kuhn (1953), who interpreted it as “equivalent to the assertion that each player is allowed by the rules of the game to remember everything he knew at previous moves and all of his choices at those moves”. We provide a characterization and axiomatization of perfect recall based on two notions of memory: (1) memory of past knowledge and (2) memory of past actions.
The temporal updating of an agent’s beliefs in response to a flow of information is modeled in a simple modal logic that, for every date t, contains a normal belief operator B t and a non-normal information operator I t which is analogous to the ‘only knowing’ operator discussed in the computer science literature. Soundness and completeness of the logic are proved and the relationship between the proposed logic, the AGM theory of belief revision and the notion of plausibility is discussed.
Two views of game theory are discussed: (1) game theory as a description of the behavior of rational individuals who recognize each other’s rationality and reasoning abilities, and (2) game theory as an internally consistent recommendation to individuals on how to act in interactive situations. It is shown that the same mathematical tool, namely modal logic, can be used to explicitly model both views.
Restricting attention to the class of extensive games defined by von Neumann and Morgenstern with the added assumption of perfect recall, we specify the information of each player at each node of the game-tree in a way which is coherent with the original information structure of the extensive form. We show that this approach provides a framework for a formal and rigorous treatment of questions of knowledge and common knowledge at every node of the tree. We construct a particular information partition for each player and show that it captures the notion of maximum information in the sense that it is the finest within the class of information partitions that satisfy four natural properties. Using this notion of “maximum information” we are able to provide an alternative characterization of the meet of the information partitions.
This paper suggests a way of formalizing the amount of information that can be conveyed to each player along every possible play of an extensive game. The information given to each player i when the play of the game reaches node x is expressed as a subset of the set of terminal nodes. Two definitions are put forward, one expressing the minimum amount of information and the other the maximum amount of information that can be conveyed without violating the constraint represented by the information sets. Our definitions provide intuitive characterizations of such notions as perfect recall, perfect information and simultaneity.
The logical foundations of game-theoretic solution concepts have so far been explored within the confines of epistemic logic. In this paper we turn to a different branch of modal logic, namely temporal logic, and propose to view the solution of a game as a complete prediction about future play. The branching time framework is extended by adding agents and by defining the notion of prediction. A syntactic characterization of backward induction in terms of the property of internal consistency of prediction is given.
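The backward-induction prediction itself is a simple recursion on a finite perfect-information tree: at each decision node the mover selects the continuation that maximizes her own payoff. A minimal sketch (the tree, mover and payoff representations are our own; ties are broken arbitrarily by taking the first maximizer):

```python
def backward_induction(tree, player, payoff, root):
    """Backward-induction prediction in a finite perfect-information game.

    tree: dict node -> list of children ([] at terminal nodes)
    player: dict non-terminal node -> index of the player moving there
    payoff: dict terminal node -> tuple of payoffs, one per player
    Returns the predicted payoff profile and the predicted path of play.
    """
    def solve(node):
        if not tree[node]:                       # terminal node
            return payoff[node], [node]
        # the mover picks the continuation maximizing her own payoff
        outcomes = [solve(child) for child in tree[node]]
        best_pay, best_path = max(outcomes, key=lambda o: o[0][player[node]])
        return best_pay, [node] + best_path
    return solve(root)
```

In a two-stage example where player 0 can end the game or pass to player 1, the recursion first solves player 1’s node and then lets player 0 compare the resulting continuation payoffs.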
The contributions to the Special Issue on Multiple Belief Change, Iterated Belief Change and Preference Aggregation are divided into three parts. Four contributions are grouped under the heading "multiple belief change" (Part I, with authors M. Falappa, E. Fermé, G. Kern-Isberner, P. Peppas, M. Reis, and G. Simari), five contributions under the heading "iterated belief change" (Part II, with authors G. Bonanno, S.O. Hansson, A. Nayak, M. Orgun, R. Ramachandran, H. Rott, and E. Weydert). These papers not only pick up the particular questions raised, but also extend and modify the framework of Alchourrón, Gärdenfors and Makinson. Part III deals with preference aggregation and consists of one contribution (by F. Herzberg and D. Eckert).
The logic of common belief does not always reflect that of individual beliefs. In particular, even when the individual belief operators satisfy the KD45 logic, the common belief operator may fail to satisfy axiom 5. That is, it can happen that neither is A commonly believed nor is it common belief that A is not commonly believed. We identify the intersubjective restrictions on individual beliefs that are incorporated in axiom 5 for common belief.
The Common Prior Assumption (CPA) plays an important role in game theory and the economics of information. It is the basic assumption behind decision-theoretic justifications of equilibrium reasoning in games (Aumann, 1987, Aumann and Brandenburger, 1995) and no-trade results with asymmetric information (Milgrom and Stokey, 1982). Recently several authors (Dekel and Gul, 1997, Gul, 1996, Lipman, 1995) have questioned whether the CPA is meaningful in situations of incomplete information, where there is no ex ante stage and where the primitives of the model are the individuals' beliefs about the external world (their first-order beliefs), their beliefs about the other individuals' beliefs (second-order beliefs), etc., i.e. their hierarchies of beliefs. In this context, the CPA is a mathematical property whose conceptual content is not clear. The main results of this paper (Theorems 1 and 2) provide a characterization of Harsanyi consistency in terms of properties of the belief hierarchies that are entirely unrelated to the idea of an ex ante stage.
There is an ongoing debate in the philosophical literature whether the conditionals that are central to deliberation are subjunctive or indicative conditionals and, if the latter, what semantics of the indicative conditional is compatible with the role that conditionals play in deliberation. We propose a possible-world semantics where conditionals of the form “if I take action _a_ the outcome will be _x_” are interpreted as material conditionals. The proposed framework is illustrated with familiar examples and both qualitative and probabilistic beliefs are considered. Issues such as common-cause cases and ‘Egan-style’ cases are discussed.
We consider a basic logic with two primitive uni-modal operators: one for certainty and the other for plausibility. The former is assumed to be a normal operator, while the latter is merely a classical operator. We then define belief, interpreted as “maximally plausible possibility”, in terms of these two notions: the agent believes φ if she cannot rule out φ (that is, she considers φ possible), she judges φ to be plausible and she does not judge ¬φ to be plausible. We consider four interaction properties between certainty and plausibility and study how these properties translate into properties of belief. We then prove that all the logics considered are minimal logics for the highlighted theorems. We also consider a number of possible interpretations of plausibility, identify the corresponding logics and show that some notions considered in the literature are special cases of our framework.
Counterexamples to two results by Stalnaker (Theory and Decision, 1994) are given and a corrected version of one of the two results is proved. Stalnaker's proposed results are: (1) if at the true state of an epistemic model of a perfect information game there is common belief in the rationality of every player and common belief that no player has false beliefs (he calls this joint condition “strong rationalizability”), then the true (or actual) strategy profile is path equivalent to a Nash equilibrium; (2) in a normal-form game a strategy profile is strongly rationalizable if and only if it belongs to C∞, the set of profiles that survive the iterative deletion of inferior profiles.
This is a two-volume set that provides an introduction to non-cooperative Game Theory. Volume 1 covers the basic concepts, while Volume 2 is devoted to advanced topics. The book is richly illustrated with approximately 400 figures. It is suitable for both self-study and as the basis for an undergraduate course in game theory as well as a first-year graduate-level class. It is written to be accessible to anybody with high-school level knowledge of mathematics. At the end of each chapter there is a collection of exercises accompanied by detailed answers. The book contains approximately 180 exercises.
The past fifteen years or so have witnessed considerable progress in our understanding of how the human brain works. One of the objectives of the fast-growing field of neuroscience is to deepen our knowledge of how the brain perceives and interacts with the external world. Advances in this direction have been made possible by progress in brain imaging techniques and by clinical data obtained from patients with localized brain lesions. A relatively new field within neuroscience is neuroeconomics, which focuses on individual decision making and aims to systematically classify and map the brain activity that correlates with decision-making that pertains to economic choices. Neuroeconomic studies rely heavily on functional magnetic resonance imaging (fMRI), which measures the haemodynamic response (that is, changes in the blood flow) related to neural activity in the brain.
This volume collects papers originally presented at the 7th Conference on Logic and the Foundations of Game and Decision Theory (LOFT), held at the University of Liverpool in July 2006. LOFT is a key venue for presenting research at the intersection of logic, economics, and computer science, and this collection gives a lively and wide-ranging view of an exciting and rapidly growing area.
This text provides an introduction to the topic of rational decision making as well as a brief overview of the most common biases in judgment and decision making. "Decision Making" is relatively short (300 pages) and richly illustrated with approximately 100 figures. It is suitable for both self-study and as the basis for an upper-division undergraduate course in judgment and decision making. The book is written to be accessible to anybody with minimum knowledge of mathematics (high-school level algebra and some elementary notions of set theory and probability, which are reviewed in the book). At the end of each chapter there is a collection of exercises that are grouped according to that chapter’s sections. Complete and detailed answers for each exercise are given in the last section of each chapter. The book contains a total of 121 fully solved exercises.
Making a prediction is essentially expressing a belief about the future. It is therefore natural to interpret later predictions as revisions of earlier ones and to investigate the notion of belief revision in this context. We study, both semantically and syntactically, the following principle of minimum revision of prediction: “as long as there are no surprises, that is, as long as what actually occurs had been predicted to occur, then everything which was predicted in the past, if still possible, should continue to be predicted, and no new predictions should be added.”
Since Lewis’s (1969) and Aumann’s (1976) pioneering contributions, the concepts of common knowledge and common belief have been discussed extensively in the literature, both syntactically and semantically. At the individual level the difference between knowledge and belief is usually identified with the presence or absence of the Truth Axiom (□ᵢA → A), which is interpreted as “if individual i believes that A, then A is true”. In such a case the individual is often said to know that A (thus it is possible for an individual to believe a false proposition but she cannot know a false proposition). Going to the interpersonal level, the literature then distinguishes between common knowledge and common belief on the basis of whether or not the Truth Axiom is postulated at the individual level. However, while at the individual level the Truth Axiom captures merely a relationship between the individuals’ beliefs and the external world, at the interpersonal level it has very strong implications. For example, the following is a consequence of the Truth Axiom: □ᵢ□ⱼA → □ᵢA, that is, if individual i believes that individual j believes that A, then individual i herself believes that A. Thus, in contrast to other axioms, the Truth Axiom does not merely reflect individual agents’ “logic of belief”. (The reason why the Truth Axiom is much stronger in an interpersonal context than appears at first glance is that it amounts to assuming that agreement of any individual’s belief with the truth is common knowledge.) Given its logical force, it is not surprising to find that it has strong implications for the logic of common knowledge. In particular, if each individual’s beliefs satisfy the strongest logic of knowledge (namely S5 or KT5), the associated common knowledge operator satisfies this logic too.
Such is not the case for belief: bereft of the Truth Axiom, even the strongest logic for individual belief (KD45) is insufficient to ensure the satisfaction of the “Negative Introspection” axiom for common belief: ¬□*A → □*¬□*A (where □* denotes the common belief operator).
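The failure of Negative Introspection for common belief can be exhibited concretely. Semantically, the common-belief accessibility relation is the transitive closure of the union of the individual relations; in the two-state, two-agent example below (our own construction, chosen to illustrate the phenomenon), agent 1’s relation is partitional and agent 2’s satisfies KD45, yet at state ‘a’ neither is the event commonly believed nor is its not being commonly believed common belief.

```python
def closure(relations, w):
    """States reachable from w in one or more steps via the union of the
    individual accessibility relations (the common-belief relation)."""
    reach, frontier = set(), {w}
    while frontier:
        x = frontier.pop()
        for R in relations:
            for y in R.get(x, set()):
                if y not in reach:
                    reach.add(y)
                    frontier.add(y)
    return reach

def common_belief(relations, event, w):
    """Event is commonly believed at w iff every reachable state is in it."""
    return closure(relations, w) <= event
```

With R1 the identity relation on {a, b} and R2 sending both states to b, the event {b} is not commonly believed at a, and neither is the event “{b} is not commonly believed”, so axiom 5 fails for the common belief operator.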
The paradigm for modelling decision-making under uncertainty has undoubtedly been the theory of Expected Utility, which was first developed by von Neumann and Morgenstern (1944) and later extended by Savage (1954) to the case of subjective uncertainty. The inadequacy of the theory of Subjective Expected Utility (SEU) as a descriptive theory was soon pointed out in experiments, most famously by Allais (1953) and Ellsberg (1961). The observed departures from SEU noticed by Allais and Ellsberg became known as “paradoxes”. The Ellsberg paradox gave rise, several years later, to a new literature on decision-making under ambiguity. The theoretical side of this literature was pioneered by Schmeidler (1989). This literature views the departures from SEU in situations similar to those discussed by Ellsberg as rational responses to ambiguity. The rationality is “recovered” by relaxing Savage's Sure-Thing principle and adding an ambiguity-aversion postulate. Thus the ambiguity-aversion literature takes a normative point of view and does not consider Ellsberg-type choices as behavioural “anomalies”.
When we make a prediction we select, among the conceivable future descriptions of the world, those that appear to us to be most plausible. We capture this by means of two binary relations, ≺c and ≺p: if t1 and t2 are points in time, we interpret t1 ≺c t2 as saying that t2 is in the conceivable future of t1, while t1 ≺p t2 is interpreted to mean that t2 is in the predicted future of t1. Within a branching-time framework we propose the following notion of “consistency of prediction”. Suppose that at t1 some future moment t2 is predicted to occur, then every moment t on the unique path from t1 to t2 should also be predicted at t1 and the prediction of t2 should continue to hold at every such t. A sound and complete axiomatization is provided.
Within the context of extensive-form (or dynamic) games, we use choice frames to represent the initial beliefs of a player as well as her disposition to change those beliefs when she learns that an information set of hers has been reached. As shown in previous work, in order for the revision operation to be consistent with the AGM postulates, the player’s choice frame must be rationalizable in terms of a total pre-order on the set of histories. We consider four properties of choice frames and show that, together with the hypothesis of a common prior, they are necessary and sufficient for the existence of a plausibility order that rationalizes the epistemic state (that is, initial beliefs and disposition to revise those beliefs) of all the players. The plausibility order satisfies the properties introduced elsewhere as part of a new definition of perfect Bayesian equilibrium for dynamic games. Thus the present paper provides epistemic foundations for that solution concept.
Two questions are examined within a model of vertical differentiation. The first is whether cost-reducing innovations are more likely to be observed in regimes of more intense or less intense competition. Following Delbono and Denicolo (1990) and Bester and Petrakis (1993) we compare two identical industries that differ only in the regime of competition: Bertrand versus Cournot. Since Cournot competition leads to lower output and higher prices, it can be thought of as a regime of less intense competition. We find that the increase in profits associated with any given cost reduction is higher in the case of Cournot competition than in the case of Bertrand competition. Thus there are cost-reducing innovations that would be pursued under Cournot competition but not under Bertrand competition.
The notion of Nash equilibrium in static oligopoly games is based on the assumption that each firm knows its entire demand curve (and, therefore, its entire profit function). It is much more likely, however, that firms only have some idea of the outcome of small price variations within some relatively small interval of prices. This is because firms can only learn their demand functions through price experiments and if they are risk-averse and/or have a low discount factor, they will be unwilling to engage in extensive price experiments involving large variations in price. We can therefore expect firms to experiment through small price variations and stop when they reach a price such that no small deviation…
Two notions of memory are studied both syntactically and semantically: memory of past beliefs and memory of past actions. The analysis is carried out in a basic temporal logic framework enriched with beliefs and actions.
In her book Rationality and coordination (Cambridge University Press, 1994) Cristina Bicchieri brings together (and adds to) her own contributions to game theory and the philosophy of economics published in various journals in the period 1987-1992. The book, however, is not a collection of separate articles but rather a homogeneous unit organized around some central themes in the foundations of non-cooperative game theory. Bicchieri’s exposition is admirably clear and well organized. Somebody with a good knowledge of game theory would probably benefit mainly from reading the second part of Chapter 3 (from Section 3.6 onward) and Chapter 4. On the other hand, those who have had little exposure to game theory would certainly benefit from reading the entire book. I shall begin with an overview of the content of the book and then offer some critical comments on what I consider to be the most important part of it.