In Richard Bradley's book, Decision Theory with a Human Face, we have selected two themes for discussion. The first is the Bolker-Jeffrey theory of decision, which the book uses throughout as a tool to reorganize the whole field of decision theory, and in particular to evaluate the extent to which expected utility theories may be normatively too demanding. The second theme is the redefinition strategy that can be used to defend EU theories against the Allais and Ellsberg paradoxes, a strategy that the book by and large endorses, and even develops in an original way concerning the Ellsberg paradox. We argue that the BJ theory is too specific to fulfil Bradley's foundational project and that the redefinition strategy fails in both the Allais and Ellsberg cases. Although we share Bradley's conclusion that EU theories do not state universal rationality requirements, we reach it not by a comparison with BJ theory, but by a comparison with the non-EU theories that the paradoxes have heuristically suggested.
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk. And I focus on the alternative to the orthodoxy proposed by Lara Buchak: risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak's theory can accommodate.
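For orientation, and not as part of the abstract, a standard statement of Buchak's risk-weighted expected utility of a gamble with outcomes $x_1, \dots, x_n$ ordered from worst to best, probabilities $p_1, \dots, p_n$, and risk function $r$ is

\[
\mathrm{REU} \;=\; u(x_1) \;+\; \sum_{i=2}^{n} r\!\Big(\sum_{j=i}^{n} p_j\Big)\,\big[u(x_i) - u(x_{i-1})\big],
\]

which reduces to orthodox expected utility when $r$ is the identity function.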
This note is a generalization and improved interpretation of the main result of Karni and Schmeidler. A decision-maker is supposed to possess a preference relation on acts and another preference relation on state-prize lotteries, both of which are assumed to satisfy the von Neumann–Morgenstern axioms. In addition, the two preference relations restricted to a state of nature are assumed to agree. We show that these axioms are necessary and sufficient for the existence of subjective expected utility over acts with state-dependent utility functions and a subjective probability measure. This subjective probability measure is unique when conditioned on the set of states of nature in which not all the prizes are equally desirable.
The paper summarizes expected utility theory, both in its original von Neumann-Morgenstern version and its later developments, and discusses the normative claims to rationality made by this theory.
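As a reference point for the abstracts collected here, the orthodox criterion evaluates a lottery $L$ that assigns probability $p_i$ to outcome $x_i$ by its expected utility,

\[
\mathrm{EU}(L) \;=\; \sum_{i} p_i\, u(x_i),
\]

where $u$ is a von Neumann-Morgenstern utility function; the normative debates surveyed below concern whether rational preference must maximize this quantity.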
In this article, Savage's theory of decision-making under uncertainty is extended from a classical environment into a non-classical one. The Boolean lattice of events is replaced by an arbitrary ortho-complemented poset. We formulate the corresponding axioms and provide representation theorems for qualitative measures and expected utility. Then, we discuss the issue of belief updating and investigate a transition probability model. An application to a simple game context is proposed.
A common objection to the precautionary principle is that it is irrational. I argue that this objection goes beyond the often-discussed claim that the principle is incoherent. Instead, I argue, expected utility theory is the source of several more sophisticated irrationality charges against the precautionary principle. I then defend the principle from these objections by arguing that the relevant features of the precautionary principle are part of plausible normative theories, and that the precautionary principle does not diverge more from ideal expected utility maximization than non-ideal expected utility maximizing procedures do, and may do better in real-world choices.
Expected utility theory has been a popular and influential theory in philosophy, law, and the social sciences. While its original developers, von Neumann and Morgenstern, presented it as a purely predictive theory useful to the practitioners of economic science, many subsequent theorists, particularly those outside of economics, have come to endorse EU theory as providing us with a representation of reason. But precisely in what sense does EU theory portray reason? And does it do so successfully? There are two strikingly different answers to these questions in the literature. On the one hand, there is the view of people such as David Gauthier that EU theory is an implementation of the idea that reason's only role is instrumental. On the other hand, there is the view suggested by Leonard Savage that the theory is a "formal" and noninstrumental characterization of our reasoning process.
A mixture preorder is a preorder on a mixture space (such as a convex set) that is compatible with the mixing operation. In decision theoretic terms, it satisfies the central expected utility axiom of strong independence. We consider when a mixture preorder has a multi-representation that consists of real-valued, mixture-preserving functions. If it does, it must satisfy the mixture continuity axiom of Herstein and Milnor (1953). Mixture continuity is sufficient for a mixture-preserving multi-representation when the dimension of the mixture space is countable, but not when it is uncountable. Our strongest positive result is that mixture continuity is sufficient in conjunction with a novel axiom we call countable domination, which constrains the order complexity of the mixture preorder in terms of its Archimedean structure. We also consider what happens when the mixture space is given its natural weak topology. Continuity (having closed upper and lower sets) and closedness (having a closed graph) are stronger than mixture continuity. We show that continuity is necessary but not sufficient for a mixture preorder to have a mixture-preserving multi-representation. Closedness is also necessary; we leave it as an open question whether it is sufficient. We end with results concerning the existence of mixture-preserving multi-representations that consist entirely of strictly increasing functions, and a uniqueness result.
This monographic chapter explains how expected utility (EU) theory arose in von Neumann and Morgenstern, how it was called into question by Allais and others, and how it gave way to non-EU theories, at least among the specialized quarters of decision theory. I organize the narrative around the idea that the successive theoretical moves amounted to resolving Duhem-Quine underdetermination problems, so they can be assessed in terms of the philosophical recommendations made to overcome these problems. I actually follow Duhem's recommendation, which was essentially to rely on the passing of time to make many experiments and arguments available, and eventually strike a balance between competing theories on the basis of this improved knowledge. Although Duhem's solution seems disappointingly vague, relying as it does on "bon sens" to bring an end to the temporal process, I do not think there is any better one in the philosophical literature, and I apply it here for what it is worth. In this perspective, EU theorists were justified in resisting the first attempts at refuting their theory, including Allais's in the 1950s, but they would have lacked "bon sens" in not acknowledging their defeat in the 1980s, after the long process of pros and cons had sufficiently matured. This primary Duhemian theme is actually combined with a secondary theme: normativity. I suggest that EU theory was normative at its very beginning and has remained so all along, and I express dissatisfaction with the orthodox view that it could be treated as a straightforward descriptive theory for purposes of prediction and scientific test. This view is usually accompanied by a faulty historical reconstruction, according to which EU theorists initially formulated the VNM axioms descriptively and retreated to a normative construal once they felt threatened by empirical refutation. From my historical study, things did not evolve in this way, and the theory was both proposed and rebutted on the basis of normative arguments already in the 1950s. The ensuing, major problem was to make choice experiments compatible with this inherently normative feature of the theory. Compatibility was obtained in some experiments, but implicitly and somewhat confusingly, for instance by excluding overtly incoherent subjects or by creating strong incentives for the subjects to reflect on the questions and provide answers they would be able to defend. I also claim that Allais had an intuition of how to combine testability and normativity, unlike most later experimenters, and that it would have been more fruitful to work from his intuition than to make choice experiments of the naively empirical style that flourished after him. In sum, it can be said that the underdetermination process accompanying EUT was resolved in a Duhemian way, but this was not without major inefficiencies. To embody explicit rationality considerations into experimental schemes right from the beginning would have limited the scope of empirical research, avoided wasting resources to get only minor findings, and speeded up the Duhemian process of groping towards a choice among competing theories.
We contrast three decision rules that extend Expected Utility to contexts where a convex set of probabilities is used to depict uncertainty: Γ-Maximin, Maximality, and E-admissibility. The rules extend Expected Utility theory as they require that an option is inadmissible if there is another that carries greater expected utility for each probability in a (closed) convex set. If the convex set is a singleton, then each rule agrees with maximizing expected utility. We show that, even when the option set is convex, this pairwise comparison between acts may fail to identify those acts which are Bayes for some probability in a convex set that is not closed. This limitation affects two of the decision rules but not E-admissibility, which is not a pairwise decision rule. E-admissibility can be used to distinguish between two convex sets of probabilities that intersect all the same supporting hyperplanes.
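As a toy illustration, not taken from the paper, the sketch below compares the three rules over a finite grid approximating a convex credal set; the acts, utilities, and the probability interval [0.25, 0.75] are invented for the example.

```python
# Toy comparison of Gamma-Maximin, Maximality, and E-admissibility.
# Two states; each act is a pair of utilities (state 1, state 2).
# The credal set is approximated by a grid of probabilities for state 1.

acts = {
    "a1": (1.0, 0.0),
    "a2": (0.0, 1.0),
    "a3": (0.45, 0.45),
}
credal_set = [i / 100 for i in range(25, 76)]  # p(state 1) in [0.25, 0.75]

def expected_utility(act, p):
    u1, u2 = acts[act]
    return p * u1 + (1 - p) * u2

# Gamma-Maximin: maximize the worst-case expected utility over the credal set.
gamma_maximin = max(acts, key=lambda a: min(expected_utility(a, p) for p in credal_set))

# Maximality: an act is admissible unless some rival has strictly greater
# expected utility for every probability in the credal set.
def is_dominated(a):
    return any(
        all(expected_utility(b, p) > expected_utility(a, p) for p in credal_set)
        for b in acts if b != a
    )

maximal = [a for a in acts if not is_dominated(a)]

# E-admissibility: an act is admissible if it maximizes expected utility
# among the available acts for at least one probability in the credal set.
e_admissible = [
    a for a in acts
    if any(
        expected_utility(a, p) >= max(expected_utility(b, p) for b in acts)
        for p in credal_set
    )
]

print("Gamma-Maximin choice:", gamma_maximin)   # a3
print("Maximal acts:", maximal)                 # a1, a2, a3
print("E-admissible acts:", e_admissible)       # a1, a2
```

In this example Γ-Maximin selects only the hedged act a3, Maximality retains all three acts, and E-admissibility retains a1 and a2 but rejects a3, which is Bayes-optimal for no probability in the set; this is the kind of divergence between pairwise and non-pairwise rules the abstract describes.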
Independence is the condition that, if X is preferred to Y, then a lottery between X and Z is preferred to a lottery between Y and Z given the same probability of Z. Is it rationally required that one's preferences conform to Independence? The main objection to this requirement is that it would rule out the alleged rationality of Allais and Ellsberg Preferences. In this paper, I put forward a sequential dominance argument with fairly weak assumptions for a variant of Independence (called Independence for Constant Prospects), which shows that Allais and Ellsberg Preferences are irrational. Hence this influential objection (that is, the alleged rationality of Allais and Ellsberg Preferences) can be rebutted. I also put forward a number of sequential dominance arguments that various versions of Independence are requirements of rationality. One of these arguments is based on very minimal assumptions, but the arguments for the versions of Independence which are strong enough to serve in the standard axiomatization of Expected Utility Theory need notably stronger assumptions.
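In the lottery notation standardly used for this axiom, the condition quoted at the start of the abstract reads: for all prospects X, Y, Z and every mixing probability $\alpha \in (0,1]$,

\[
X \succ Y \;\Longrightarrow\; \alpha X + (1-\alpha) Z \;\succ\; \alpha Y + (1-\alpha) Z .
\]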
This article analyses how normative decision theory is understood by economists. The paradigmatic example of normative decision theory, discussed in the article, is expected utility theory. It...
The discovered preference hypothesis appears to insulate expected utility theory (EU) from disconfirming experimental evidence. It asserts that individuals have coherent underlying preferences, which experiments may not reveal unless subjects have adequate opportunities and incentives to discover which actions best satisfy their preferences. We identify the confounding effects to be expected in experiments, were that hypothesis true, and consider how they might be controlled for. We argue for a design in which each subject faces just one distinct choice task for real. We review the results of some tests of EU which have used this design. These tests reveal the same violations of the independence axiom as other studies have found. We conclude that the discovered preference hypothesis does not justify scepticism about the reality of these effects.
Expected utility theory does not directly deal with the utility of chance. It has been suggested in the literature (Samuelson, 1952; Markowitz, 1959) that this can be remedied by an approach which explicitly models the emotional consequences which give rise to the utility of chance. We refer to this as the elaborated outcomes approach. It is argued that the elaborated outcomes approach destroys the possibility of deriving a representation theorem based on the usual axioms of expected utility theory. This is shown with the help of an example due to Markowitz. It turns out that the space of conceivable lotteries over elaborated outcomes is too narrow to permit the application of the axioms. Moreover it is shown that a representation theorem does not hold for the example.
This paper presents a personal view of the interaction between the analysis of choice under uncertainty and the analysis of production under uncertainty. Interest in the foundations of the theory of choice under uncertainty was stimulated by applications of expected utility theory such as the Sandmo model of production under uncertainty. This interest led to the development of generalized models including rank-dependent expected utility theory. In turn, the development of generalized expected utility models raised the question of whether such models could be used in the analysis of applied problems such as those involving production under uncertainty. Finally, the revival of the state-contingent approach led to the recognition of a fundamental duality between choice problems and production problems.
The paper re-expresses arguments against the normative validity of expected utility theory in Robin Pope (1983, 1991a, 1991b, 1985, 1995, 2000, 2001, 2005, 2006, 2007). These concern the neglect of the evolving stages of knowledge ahead (stages of what the future will bring). Such evolution is fundamental to an experience of risk, yet not consistently incorporated even in axiomatised temporal versions of expected utility. Its neglect entails a disregard of emotional and financial effects on well-being before a particular risk is resolved. These arguments are complemented with an analysis of the essential uniqueness property in the context of temporal and atemporal expected utility theory and a proof of the absence of a limit property natural in an axiomatised approach to temporal expected utility theory. Problems of the time structure of risk are investigated in a simple temporal framework restricted to a subclass of temporal lotteries in the sense of David Kreps and Evan Porteus (1978). This subclass is narrow but wide enough to discuss basic issues. It will be shown that there are serious objections against the modification of expected utility theory axiomatised by Kreps and Porteus (1978, 1979). By contrast the umbrella theory proffered by Pope, which she has now termed SKAT, the Stages of Knowledge Ahead Theory, offers an epistemically consistent framework within which to construct particular models to deal with particular decision situations. A model by Caplin and Leahy (2001) will also be discussed and contrasted with the modelling within SKAT (Pope, Leopold and Leitner 2007).
Interest in the foundations of the theory of choice under uncertainty was stimulated by applications of expected utility theory such as the Sandmo model of production under uncertainty. The development of generalized expected utility models raised the question of whether such models could be used in the analysis of applied problems such as those involving production under uncertainty. Finally, the revival of the state-contingent approach led to the recognition of a fundamental duality between choice problems and production problems.
We use the multiple price list method and a recursive expected utility theory of smooth ambiguity to separate out attitude towards risk from that towards ambiguity. Based on this separation, we investigate if there are differences in agent behaviour under uncertainty over gain amounts vis-à-vis uncertainty over loss amounts. On an aggregate level, we find that (i) subjects are risk averse over gains and risk seeking over losses, displaying a "reflection effect", and (ii) they are ambiguity neutral over gains and are mildly ambiguity seeking over losses. Further analysis shows that on an individual level, and with respect to both risky and ambiguous prospects, there is limited incidence of a reflection effect where subjects are risk/ambiguity averse (seeking) in gains and seeking (averse) in losses, though this incidence is higher for ambiguous prospects. A very high proportion of such cases of reflection exhibit risk (ambiguity) aversion in gains and risk (ambiguity) seeking in losses, with the reverse effect being significantly present in the case of risk but almost absent in case of ambiguity. Our results suggest that reflection across gains and losses is not a stable individual characteristic, but depends upon whether the form of uncertainty is precise or ambiguous, since we rarely find an individual who exhibits reflection in both risky and ambiguous prospects. We also find that correlations between attitudes towards risk and ambiguity were domain dependent.
We give two social aggregation theorems under conditions of risk, one for constant population cases, the other an extension to variable populations. Intra- and interpersonal welfare comparisons are encoded in a single 'individual preorder'. The theorems give axioms that uniquely determine a social preorder in terms of this individual preorder. The social preorders described by these theorems have features that may be considered characteristic of Harsanyi-style utilitarianism, such as indifference to ex ante and ex post equality. However, the theorems are also consistent with the rejection of all of the expected utility axioms, completeness, continuity, and independence, at both the individual and social levels. In that sense, expected utility is inessential to Harsanyi-style utilitarianism. In fact, the variable population theorem imposes only a mild constraint on the individual preorder, while the constant population theorem imposes no constraint at all. We then derive further results under the assumption of our basic axioms. First, the individual preorder satisfies the main expected utility axiom of strong independence if and only if the social preorder has a vector-valued expected total utility representation, covering Harsanyi's utilitarian theorem as a special case. Second, stronger utilitarian-friendly assumptions, like Pareto or strong separability, are essentially equivalent to strong independence. Third, if the individual preorder satisfies a 'local expected utility' condition popular in non-expected utility theory, then the social preorder has a 'local expected total utility' representation. Fourth, a wide range of non-expected utility theories nevertheless lead to social preorders of outcomes that have been seen as canonically egalitarian, such as rank-dependent social preorders. Although our aggregation theorems are stated under conditions of risk, they are valid in more general frameworks for representing uncertainty or ambiguity.
Behaviour norms are considered for decision trees which allow both objective probabilities and uncertain states of the world with unknown probabilities. Terminal nodes have consequences in a given domain. Behaviour is required to be consistent in subtrees. Consequentialist behaviour, by definition, reveals a consequence choice function independent of the structure of the decision tree. It implies that behaviour reveals a revealed preference ordering satisfying both the independence axiom and a novel form of sure-thing principle. Continuous consequentialist behaviour must be expected utility maximizing. Other plausible assumptions then imply additive utilities, subjective probabilities, and Bayes' rule.
Standard theories of expected utility require that preferences are complete and/or Archimedean. We present in this paper a theory of decision under uncertainty for both incomplete and non-Archimedean preferences. Without continuity assumptions, incomplete preferences on a lottery space reduce to an order-extension problem. It is well known that incomplete preferences can be extended to complete preferences in full generality, but this result does not necessarily hold for incomplete preferences which satisfy the independence axiom, since it may obviously happen that the extension does not satisfy the independence axiom. We show, for incomplete preferences on a mixture space, that an extension which satisfies the independence axiom exists. We find necessary and sufficient conditions for a preorder on a finite lottery space to be representable by a family of lexicographic von Neumann–Morgenstern Expected Utility functions.
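Schematically, and as a gloss rather than a quotation from the paper, such a representation supplies mixture-preserving functions $V_1, \dots, V_k$ with

\[
x \succsim y \iff \big(V_1(x), \dots, V_k(x)\big) \;\geq_{\mathrm{lex}}\; \big(V_1(y), \dots, V_k(y)\big),
\]

where $\geq_{\mathrm{lex}}$ denotes the lexicographic order on $\mathbb{R}^k$.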
We consider the problem of extending a (complete) order over a set to its power set. The extension axioms we consider generate orderings over sets according to their expected utilities induced by some assignment of utilities over alternatives and probability distributions over sets. The model we propose gives a general and unified exposition of expected utility consistent extensions while allowing us to emphasize various subtleties, the effects of which seem to be underestimated, particularly in the literature on strategy-proof social choice correspondences.
Some early phase clinical studies of candidate HIV cure and remission interventions appear to have adverse medical risk–benefit ratios for participants. Why, then, do people participate? And is it ethically permissible to allow them to participate? Recent work in decision theory sheds light on both of these questions, by casting doubt on the idea that rational individuals prefer choices that maximise expected utility, and therefore by casting doubt on the idea that researchers have an ethical obligation not to enrol participants in studies with high risk–benefit ratios. This work supports the view that researchers should instead defer to the considered preferences of the participants themselves. This essay briefly explains this recent work, and then explores its application to these two questions in more detail.
In this article we explore an argumentative pattern that provides a normative justification for expected utility functions grounded on empirical evidence, showing how it worked in three different episodes of their development. The argument claims that we should prudentially maximize our expected utility since this is the criterion effectively applied by those who are considered wisest in making risky choices (be it gamblers or businessmen). Yet, to justify the adoption of this rule, it should be proven that this is empirically true: i.e. that a given function allows us to predict the choices of that particular class of agents. We show how expected utility functions were introduced and contested in accordance with this pattern in the 18th century and how it recurred in the 1950s when Allais made his case against the neo-Bernoullians.
In the present paper we study the framework of additive utility theory, obtaining new results derived from a concurrence of algebraic and topological techniques. Such techniques lean on the concept of a connected topological totally ordered semigroup. We achieve a general result concerning the existence of continuous and additive utility functions on completely preordered sets endowed with a binary operation "+", not necessarily commutative or associative. In the final part of the paper we get some applications to expected utility theory, and a representation theorem for a class of complete preorders on a quite general family of real mixture spaces.
Cerreia-Vioglio et al. (2011, pp. 341–375) have proposed a very general axiomatisation of preferences in the presence of ambiguity, viz. Monotonic Bernoullian Archimedean preference orderings. This paper investigates the problem of Arrovian aggregation of such preferences, and proves dictatorial impossibility results for both finite and infinite populations. Applications for the special case of aggregating expected-utility preferences are given. A novel proof methodology for special aggregation problems, based on model theory, is employed.
According to epistemic utility theory, epistemic rationality is teleological: epistemic norms are instrumental norms that have the aim of acquiring accuracy. What's definitive of these norms is that they can be expected to lead to the acquisition of accuracy when followed. While there's much to be said in favor of this approach, it turns out that it faces a couple of worrisome extensional problems involving the future. The first problem involves credences about the future, and the second problem involves future credences. Examining prominent solutions to a different extensional problem for this approach reinforces the severity of the two problems involving the future. Reflecting on these problems reveals the source: the teleological assumption that epistemic rationality aims at acquiring accuracy.
In recent attempts at deriving morality from rationality, expected utility theory has played a major role. In the most prominent such attempt, Gauthier's Morals by Agreement, a mode of maximizing utility called constrained maximization is defended. I want to show that constrained maximization or any similar proposal cannot be coherently supported by expected utility theory. First, I point to an important implication of that theory. Second, I discuss the question of what the place of constrained maximization in utility theory might be. Third, I argue that no matter how we answer this question, expected utility theory cannot provide the reason why a moral disposition like constrained maximization is to be preferred to its rivals.
An expected utility model of individual choice is formulated which allows the decision maker to specify his available actions in the form of controls (partial contingency plans) and to simultaneously choose goals and controls in end-mean pairs. It is shown that the Savage expected utility model, the Marschak–Radner team model, the Bayesian statistical decision model, and the standard optimal control model can be viewed as special cases of this goal-control expected utility model.
This paper studies decisions under ambiguity when attention is paid to extreme outcomes. In a purely subjective framework, we propose an axiomatic characterization of affine capacities, which are Choquet capacities consisting in an affine transformation of a subjective probability. Our main axiom restricts Savage's well-known Sure-Thing Principle to a change in a common intermediate outcome. The representation result is then an affine combination of the expected utility of the valued act and its maximal and minimal utilities.
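Read as a gloss rather than the paper's exact statement, such a representation evaluates an act $f$ by

\[
V(f) \;=\; \alpha \,\min_{s} u(f(s)) \;+\; \beta \,\max_{s} u(f(s)) \;+\; (1-\alpha-\beta)\, \mathbb{E}_{P}\big[u(f)\big],
\]

an affine combination of the act's minimal utility, maximal utility, and subjective expected utility; the weights $\alpha, \beta$ and the probability $P$ are notation assumed here, not taken from the paper.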
This paper proposes a view uniformly extending expected utility calculations to both individual and group choice contexts. Three related cases illustrate the problems inherent in applying expected utility to group choices. However, these problems do not essentially depend upon the fact that more than one agent is involved. I devise a modified strategy allowing the application of expected utility calculations to these otherwise problematic cases. One case, however, apparently leads to contradiction. But recognizing the falsity of proposition (1) below allows the resolution of the contradiction, and also allows my modified strategy to resolve otherwise paradoxical cases of group choice such as the Prisoners' Dilemma: (1) If an agent x knows options A and B are both available, and x knows that were he to do A he would be better off (in every respect) than were he to do B, then doing A is more rational for x than doing B.
Quantum cognition in decision making is a recent and rapidly growing field. In this paper, we develop an expected utility theory in a context of non-classical uncertainty. We replace the classical state space with a Hilbert space, which allows us to introduce the concept of a quantum lottery. Within that framework, we formulate axioms on preferences over quantum lotteries to establish a representation theorem. We show that demanding the consistency of choice behavior conditional on new information is equivalent to the von Neumann–Lüders postulate applied to beliefs. A dynamically consistent quantum-like agent may violate dynamic recursive consistency, however. This feature suggests interesting applications in behavioral economics as we illustrate in an example of persuasion.