Within traditional decision theory, common decision principles, such as the principle to maximize utility, generally invoke idealization; they govern ideal agents in ideal circumstances. In Realistic Decision Theory, Paul Weirich adds practicality to decision theory by formulating principles that apply to nonideal agents in nonideal circumstances, such as real people coping with complex decisions. Bridging the gap between normative demands and psychological resources, Realistic Decision Theory is essential reading for theorists seeking precise normative decision principles that acknowledge the limits and difficulties of human decision-making.
In a recent, thought-provoking paper, Adam Elga argues against unsharp probabilities (e.g., indeterminate, fuzzy, and unreliable probabilities). Rationality demands sharpness, he contends, and this means that decision theories like Levi's, Gärdenfors and Sahlin's, and Kyburg's, though they employ different decision rules, face a common, and serious, problem. This article defends the rule to maximize minimum expected utility against Elga's objection.
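To fix ideas, here is a minimal sketch of the rule to maximize minimum expected utility applied to unsharp probabilities represented by a set of probability functions; the states, options, and numbers are hypothetical illustrations, not examples from Elga's paper or from the decision theories it targets.

```python
# Sketch: with unsharp probabilities represented by a set of probability
# functions, evaluate each option by its minimum expected utility across the
# set, then pick an option with the greatest such minimum.
# The states, options, and numbers below are hypothetical illustrations.

states = ["s1", "s2"]

# A set of admissible probability functions over the states (unsharp credence).
probability_set = [
    {"s1": 0.3, "s2": 0.7},
    {"s1": 0.5, "s2": 0.5},
    {"s1": 0.7, "s2": 0.3},
]

# Utility of each option in each state.
utilities = {
    "accept_bet": {"s1": 10, "s2": -5},
    "decline_bet": {"s1": 0, "s2": 0},
}

def expected_utility(option, probs):
    return sum(probs[s] * utilities[option][s] for s in states)

def minimum_expected_utility(option):
    return min(expected_utility(option, p) for p in probability_set)

best = max(utilities, key=minimum_expected_utility)
print(best, minimum_expected_utility(best))   # decline_bet 0.0
```

Under these hypothetical numbers the bet's worst-case expected utility is negative, so the rule recommends declining it, even though some admissible probability functions favor accepting.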
The rule to maximize expected utility is intended for decisions where options involve risk. In those decisions the decision maker's attitude toward risk is important, and the rule ought to take it into account. Allais's and Ellsberg's paradoxes, however, suggest that the rule ignores attitudes toward risk. This suggestion is supported by recent psychological studies of decisions. These studies present a great variety of cases where apparently rational people violate the rule because of aversion or attraction to risk. Here I attempt to resolve the issue concerning expected utility and risk. I distinguish two versions of the rule to maximize expected utility. One adopts a broad interpretation of the consequences of an option and has great intuitive appeal. The other adopts a narrow interpretation of the consequences of an option and seems to have certain technical and practical advantages. I contend that the version of the rule that interprets consequences narrowly does indeed neglect attitudes toward risk. That version of the rule excludes the risk involved in an option from the consequences of the option and, contrary to what is usually claimed, cannot make up for this exclusion through adjustments in probability and utility assignments. I construct a new, general argument that establishes this in a rigorous way. On the other hand, I contend that the version of the rule that interprets consequences broadly takes account of attitudes toward risk by counting the risk involved in an option among the consequences of the option. I rebut some objections to this version of the rule, in particular, the objection that the rule lacks practical interest. Drawing upon the literature on 'mean-risk' decision rules, I show that this version of the rule can be used to solve some realistic decision problems.
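As a rough illustration of how a 'mean-risk' evaluation can operate, the sketch below scores a gamble by its expected payoff minus a penalty for its volatility; the variance-based penalty, its coefficient, and the example gambles are hypothetical assumptions rather than the paper's own model, which counts risk among an option's broadly interpreted consequences without committing to any one risk measure.

```python
# Mean-risk sketch: evaluate a gamble by its expected payoff minus a penalty
# for its risk, here measured by the variance of its payoffs. The linear
# penalty coefficient and the example gambles are hypothetical assumptions.

def mean_risk_value(outcomes, risk_aversion=0.1):
    """outcomes: list of (probability, payoff) pairs."""
    mean = sum(p * x for p, x in outcomes)
    variance = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean - risk_aversion * variance

safe = [(1.0, 10)]               # a sure 10
risky = [(0.5, 0), (0.5, 24)]    # expected payoff 12, but volatile

print(mean_risk_value(safe))     # 10.0
print(mean_risk_value(risky))    # 12 - 0.1 * 144 = -2.4
```

On this toy scoring, the volatile gamble loses to the sure thing despite its higher expected payoff, which is the kind of risk sensitivity the broad-consequence version of the rule is meant to capture.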
A philosophical account of risk, such as this book provides, states what risk is, which attitudes to it are rational, and which acts affecting risks are rational. Attention to the nature of risk reveals two types of risk: first, a chance of a bad event, and, second, an act’s risk in the sense of the volatility of its possible outcomes. The distinction is normatively significant because different general principles of rationality govern attitudes to these two types of risk. Rationality strictly regulates attitudes to the chance of a bad event and is more permissive about attitudes to an act’s risk. Principles of rationality governing attitudes to risk also justify evaluating an act according to its expected utility given that the act’s risk, if any, belongs to every possible outcome of the act. For a rational ideal agent, the expected utilities of the acts available in a decision problem explain the agent’s preferences among the acts. Maximizing expected utility is just following preferences among the acts. This view takes an act’s expected utility, not just as a feature of a representation of preferences among acts, but also as a factor in the explanation of preferences among acts. The view extends to evaluations of combinations of acts, either simultaneous or in a sequence. It takes account of an agent’s attitudes to an act’s risk without weakening the standard of expected-utility maximization. The book’s theory of risk also grounds a justification of the risk-return evaluation of investments that the field of finance advances, and a generalized version of this type of evaluation that a professional may use to advise clients about risks and that a government regulatory agency may use to make decisions about risks on behalf of the public.
In some decision problems, adoption of an option furnishes evidence about the option's consequences. Rational decisions take account of that evidence, although this means that an option's adoption changes the option's expected utility.
In Decision Space: Multidimensional Utility Analysis, first published in 2001, Paul Weirich increases the power and versatility of utility analysis and in the process advances decision theory. Combining traditional and novel methods of option evaluation into one systematic method of analysis, multidimensional utility analysis is a valuable tool. It provides formulations of important decision principles, such as the principle to maximize expected utility; enriches decision theory in solving recalcitrant decision problems; and provides in particular for the cases in which an expert must make a decision for a group of people. The multiple dimensions of this analysis create a decision space broad enough to accommodate all factors affecting an option's utility. The book will be of interest to advanced students and professionals working in the subject of decision theory, as well as to economists and other social scientists.
Causal decision theory attends to probabilities used to obtain an option's expected utility but for completeness should also attend to utilities of possible outcomes. A suitable formula for an option's expected utility uses a certain type of conditional utility.
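For orientation, one formula of the shape the abstract describes can be written as follows; the notation is supplied here for illustration and should not be read as the paper's own formulation.

```latex
% Illustrative notation only, not the paper's own.
% P(s_i \parallel o): probability of state s_i under the causal supposition of option o.
% U(o \mid s_i): conditional utility of option o given state s_i.
EU(o) \;=\; \sum_{i} P(s_i \parallel o)\, U(o \mid s_i)
```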
Causal decision theory produces decision instability in cases such as Death in Damascus where a decision itself provides evidence concerning the utility of options. Several authors have proposed ways of handling this instability. William Harper (1985 and 1986) advances one of the most elegant proposals. He recommends maximizing causal expected utility among the options that are causally ratifiable. Unfortunately, Harper's proposal imposes certain restrictions; for instance, the restriction that mixed strategies are freely available. To obtain a completely general method of handling decision instability, I step outside the confines of pure causal decision theory. I introduce a new kind of backtracking expected utility and propose maximizing it among the options that are causally ratifiable. In other words, I propose a hierarchical maximization of (1) conditional causal expected utility and (2) the new backtracking expected utility. I support this proposal with some intuitive considerations concerning the distinction between optimality and conditional optimality. And I prove that the proposal yields a solution in every finite decision problem.
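The hierarchical structure of the proposal, restricting attention to the causally ratifiable options and then maximizing a second expected-utility score among them, can be sketched as follows; the Death-in-Damascus-style numbers, the placeholder "backtracking" scores, and the fallback clause are hypothetical stand-ins, not the paper's own calculations.

```python
# Hierarchical-choice sketch for decision instability: keep only the causally
# ratifiable options, then maximize a second expected-utility score among
# them. All numbers and the "backtracking" scores are hypothetical.

options = ["damascus", "aleppo", "coin_flip"]

# ceu[c][o]: causal expected utility of option o on the supposition that
# option c is the one adopted (toy, Death-in-Damascus-like numbers).
ceu = {
    "damascus":  {"damascus": -10, "aleppo": 0,   "coin_flip": -5},
    "aleppo":    {"damascus": 0,   "aleppo": -10, "coin_flip": -5},
    "coin_flip": {"damascus": -5,  "aleppo": -5,  "coin_flip": -5},
}

# Placeholder "backtracking" expected utilities for the second stage.
backtracking_eu = {"damascus": -10, "aleppo": -10, "coin_flip": -5}

def ratifiable(o):
    """o is causally ratifiable if no alternative beats it on the
    supposition that o is adopted."""
    return all(ceu[o][o] >= ceu[o][alt] for alt in options)

# Fallback to all options (my placeholder) covers cases with no ratifiable option.
ratifiable_options = [o for o in options if ratifiable(o)] or options
choice = max(ratifiable_options, key=backtracking_eu.get)
print(ratifiable_options, choice)   # ['coin_flip'] coin_flip
```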
Weirich examines three competing views entertained by economic theory about the instrumental rationality of decisions: the first says to maximize self-interest, the second to maximize utility, and the third to satisfice, that is, to adopt a satisfactory option. Critics argue that the first view is too narrow, that the second overlooks the benefits of teamwork and planning, and that the third, when carefully formulated, reduces to the second. Weirich defends a refined version of the principle to maximize utility. A broad conception of utility makes it responsive to the motives and benefits critics allege it overlooks. He discusses generalizations of utility theory to extend it to nonquantitative cases and other cases with nonstandard features.
The options in a decision problem generally have outcomes with common features. Putting aside the common features simplifies deliberations, but the simplification requires a philosophical justification that this book provides.
The conditional probability of h given e is commonly claimed to be equal to the probability that h would have if e were learned. Here I contend that this general claim about conditional probabilities is false. I present a counter-example that involves probabilities of probabilities, a second that involves probabilities of possible future actions, and a third that involves probabilities of indicative conditionals. In addition, I briefly defend these counter-examples against charges that the probabilities they involve are illegitimate.
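In notation supplied here for illustration (not the paper's own), the undisputed ratio definition and the contested general claim can be displayed side by side:

```latex
% Ratio definition of conditional probability:
P(h \mid e) \;=\; \frac{P(h \wedge e)}{P(e)}, \qquad P(e) > 0.
% The contested claim: conditional probability equals the probability
% h would have if e (and nothing else) were learned, written P_e(h):
P(h \mid e) \;=\; P_e(h).
```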
Expected-utility theory advances representation theorems that do not take the risk an act generates as a consequence of the act. However, a principle of expected-utility maximization that explains the rationality of preferences among acts must, for normative accuracy, take the act’s risk as a consequence of the act if the agent cares about the risk. I defend this conclusion against the charge that taking an act’s consequences to comprehend all the agent cares about trivializes the principle of expected-utility maximization.
How do rational agents coordinate in a single-stage, noncooperative game? Common knowledge of the payoff matrix and of each player's utility maximization among his strategies does not suffice. This paper argues that utility maximization among intentions and then acts generates coordination yielding a payoff-dominant Nash equilibrium.
This paper summarizes and rebuts the three standard objections made by social choice theorists against interpersonal utility. The first objection argues that interpersonal utility is meaningless. I show that this objection either focuses on irrelevant kinds of meaning or else uses implausible criteria of meaningfulness. The second objection argues that interpersonal utility has no role to play in social choice theory. I show that on the contrary interpersonal utility is useful in formulating goals for social choice. The third objection argues that interpersonal utility in social choice theory can be replaced by clearer notions. I show that the replacements proposed are unsatisfactory in either interpersonal utility's descriptive role or its explanatory role. My conclusion is that interpersonal utility has a legitimate place in social choice theory.
One resolution of the St. Petersburg paradox recognizes that a gamble carries a risk sensitive to the gamble's stakes. If aversion to risk increases sufficiently fast as stakes go up, the St. Petersburg gamble has a finite utility.
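The arithmetic point, that a sufficiently fast-growing discount on high-stakes payoffs turns the divergent St. Petersburg sum into a convergent one, can be illustrated with the toy calculation below; the logarithmic adjustment is only a hypothetical stand-in for the paper's stake-sensitive aversion to risk.

```python
# Toy illustration of a finite St. Petersburg valuation: stage n pays 2**n
# with probability 2**-n, and each payoff's contribution is discounted by an
# adjustment that grows with the stakes. The logarithmic adjustment is a
# hypothetical stand-in for the paper's stake-sensitive aversion to risk.

import math

def adjusted_value(payoff):
    # Discount grows with the stakes: high payoffs count for much less.
    return math.log2(1 + payoff)

def st_petersburg_utility(stages=60):
    return sum((0.5 ** n) * adjusted_value(2 ** n) for n in range(1, stages + 1))

print(st_petersburg_utility())   # converges to roughly 2.4 rather than diverging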
Groups of people perform acts that are subject to standards of rationality. A committee may sensibly award fellowships, or may irrationally award them in violation of its own policies. A theory of collective rationality defines collective acts that are evaluable for rationality and formulates principles for their evaluation. This book argues that a group's act is evaluable for rationality if it is the product of acts its members fully control. It also argues that such an act is collectively rational if the acts of the group's members are rational. Efficiency is a goal of collective rationality, but not a requirement, except in cases where conditions are ideal for joint action and agents have rationally prepared for joint action. The people engaged in a game of strategy form a group, and the combination of their acts yields a collective act. If their collective act is rational, it constitutes a solution to their game. A theory of collective rationality yields principles concerning solutions to games. One principle requires that a solution constitute an equilibrium among the incentives of the agents in the game. In a cooperative game some agents are coalitions of individuals, and it may be impossible for all agents to pursue all incentives. Because rationality is attainable, the appropriate equilibrium standard for cooperative games requires that agents pursue only incentives that provide sufficient reasons to act. The book's theory of collective rationality supports an attainable equilibrium standard for solutions to cooperative games and shows that its realization follows from individuals' rational acts. By extending the theory of rationality to groups, this book reveals the characteristics that make an act evaluable for rationality and the way rationality's evaluation of an act responds to the type of control its agent exercises over the act. The book's theory of collective rationality contributes to philosophical projects such as contractarian ethics and to practical projects such as the design of social institutions.
To handle epistemic and pragmatic risks, Gärdenfors and Sahlin design a decision procedure for cases in which probabilities are indeterminate. Their procedure steps outside the traditional expected utility framework. Must it do this? Can the traditional framework handle risk? This paper argues that it can. The key is a comprehensive interpretation of an option's possible outcomes. Taking possible outcomes more broadly than Gärdenfors and Sahlin do, expected utility can give risk its due. In particular, Good's decision procedure adequately handles indeterminate probabilities and the risks they generate.
This book represents a major contribution to game theory. It offers a new conception of equilibrium in games: strategic equilibrium. This conception arises from a study of expected utility decision principles, which must be revised to take account of the evidence a choice provides concerning its outcome. The argument for these principles distinguishes reasons for action from incentives, and draws on contemporary analyses of counterfactual conditionals. The book also includes a procedure for identifying strategic equilibria in ideal normal-form games. In synthesizing decision theory and game theory in a powerful way, this book will be of particular interest to philosophers concerned with decision theory and game theory, as well as to economists and other social scientists.
An agent's options in a decision problem are best understood as the decisions that the agent might make. Taking options this way eliminates the gap between an option's adoption and its execution.
Standard principles of rational decision assume that an option's utility is both comprehensive and accessible. These features constrain interpretations of an option's utility. This essay presents a way of understanding utility and laws of utility. It explains the relation between an option's utility and its outcome's utility and argues that an option's utility is relative to a specification of the option. Utility's relativity explains how a decision problem's framing affects an option's utility and its rationality even for an agent who is cognitively perfect and lacks only empirical information. The essay rewrites standard laws of utility to accommodate relativization to propositions' specifications. The new laws are generalizations of the standard laws and yield them as special cases.
Adam Elga [Elga 2010] argues that no principle of rationality leads from unsharp probabilities to decisions. He concludes that a perfectly rational agent does not have unsharp probabilities. This paper defends unsharp probabilities. It shows how unsharp probabilities may ground rational decisions.
In a decision problem with a dynamic setting there is at least one option whose realization would change the expected utilities of options by changing the probability or utility function with respect to which the expected utilities of options are computed. A familiar example is Newcomb's problem. William Harper proposes a generalization of causal decision theory intended to cover all decision problems with dynamic settings, not just Newcomb's problem. His generalization uses Richard Jeffrey's ideas on ratifiability, and material from game theory on mixed strategies. Harper's proposal has two drawbacks, however. One concerns the mechanism for choosing among ratifiable options. The other concerns the proposal's reliance upon mixed strategies. Here I make another proposal that eliminates these two drawbacks.
Groups of people perform acts. For example, a committee passes a resolution, a team wins a game, and an orchestra performs a symphony. These collective acts may be evaluated for rationality. Take a committee’s passing a resolution. This act may be evaluated not only for fairness but also for rationality. Did it take account of all available information? Is the resolution consistent with the committee’s past resolutions? Standards of collective rationality apply to collective acts, that is, acts that groups of people perform. What makes a collective act evaluable for rationality? What methods of evaluation apply to collective acts? This paper addresses these two questions. Collective rationality is rationality’s extension from individuals to groups. The paper’s first few sections review key points about rationality. They identify the features of an individual’s act that make it evaluable for rationality and distinguish rationality’s methods of evaluating acts directly and indirectly controlled. This preliminary work yields general principles of rationality for all agents, both individuals and groups. Applying the general principles to groups answers the paper’s two main questions about collective rationality.
The received view of framing has multiple interpretations. I flesh out an interpretation that is more open-minded about framing effects than the extensionality principle that Bermúdez formulates. My interpretation attends to the difference between preferences held all things considered and preferences held putting aside some considerations. It also makes room for decision principles that handle cases without a complete all-things-considered preference-ranking of options.
An agent in a decision problem may not know the goals that should guide selection of an option. Accommodating this ignorance requires methods that supplement expected utility theory.
Mark Kaplan proposes amending decision theory to accommodate better cases in which an agent's probability assignment is imprecise. The review describes and evaluates his proposals.
J. Howard Sobel has long been recognized as an important figure in philosophical discussions of rational decision. He has done much to help formulate the concept of causal decision theory. In this volume of essays Sobel explores the Bayesian idea that rational actions maximize expected values, where an action's expected value is a weighted average of its agent's values for its possible total outcomes. Newcomb's Problem and The Prisoner's Dilemma are discussed, and Allais-type puzzles are viewed from the perspective of causal world Bayesianism. The author establishes principles for distinguishing options in decision problems, and studies ways in which perfectly rational causal maximizers can be capable of resolute choices. Sobel also views critically Gauthier's revisionist ideas about maximizing rationality. This collection will be a desideratum for anyone working in the field of rational choice theory, whether in philosophy, economics, political science, psychology or statistics. Howard Sobel's work in decision theory is certainly among the most important, interesting and challenging that is being done by philosophers.
Abner Shimony argues that degrees of belief satisfy the axioms of probability because their epistemic goal is to match estimates of objective probabilities. Because the estimates obey the axioms of probability, degrees of belief must also obey them to reach their epistemic goal. This calibration argument faces some objections, but with a few revisions it can surmount them. It offers a good alternative to the Dutch book argument for compliance with the probability axioms. The defense of Shimony's calibration argument examines rational pursuit of an epistemic goal, introduces strength of evidence and its measurement, and distinguishes epistemic goals and functions.
Food products with genetically modified ingredients are common, yet many consumers are unaware of this. When polled, consumers say that they want to know whether their food contains GM ingredients, just as many want to know whether their food is natural or organic. Informing consumers is a major motivation for labeling. But labeling need not be mandatory. Consumers who want GM-free products will pay a premium to support voluntary labeling. Why do consumers want to know about GM ingredients? GM foods are tested to ensure safety and have been on the market for more than a decade. Still, many consumers, including some with food allergies, want to be cautious. Also, GM crops may affect neighboring plants through pollen drift. Despite tests for environmental impact, some consumers may worry that GM crops will adversely affect the environment. The study of risk and its management raises questions not settled by the life sciences alone. This book surveys various labeling policies and the cases for them. It is the first comprehensive, interdisciplinary treatment of the debate about labeling genetically modified food. The contributors include philosophers, bioethicists, food and agricultural scientists, attorneys/legal scholars, and economists.
This collection treats classic problems in decision theory such as Newcomb's Problem and the Prisoner's Dilemma. The review describes and evaluates the essays.
An agent often does not have precise probabilities or utilities to guide resolution of a decision problem. I advance a principle of rationality for making decisions in such cases. To begin, I represent the doxastic and conative state of an agent with a set of pairs of a probability assignment and a utility assignment. Then I support a decision principle that allows any act that maximizes expected utility according to some pair of assignments in the set. Assuming that computation of an option's expected utility uses comprehensive possible outcomes that include the option's risk, no consideration supports a stricter requirement.
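A minimal sketch of the permissive principle, counting an option as permitted if it maximizes expected utility relative to at least one probability-utility pair in the representing set, might look as follows; the states, options, and numbers are hypothetical illustrations.

```python
# Permissive-rule sketch: represent an agent's doxastic and conative state by
# a set of (probability assignment, utility assignment) pairs, and count an
# option as permitted if it maximizes expected utility relative to at least
# one pair in the set. The states, options, and numbers are hypothetical.

states = ["s1", "s2"]
options = ["a", "b"]

pairs = [
    ({"s1": 0.2, "s2": 0.8}, {("a", "s1"): 10, ("a", "s2"): -2,
                              ("b", "s1"): 1,  ("b", "s2"): 1}),
    ({"s1": 0.6, "s2": 0.4}, {("a", "s1"): 10, ("a", "s2"): -2,
                              ("b", "s1"): 1,  ("b", "s2"): 1}),
]

def eu(option, probs, utils):
    return sum(probs[s] * utils[(option, s)] for s in states)

def permitted(option):
    return any(eu(option, p, u) >= max(eu(o, p, u) for o in options)
               for p, u in pairs)

print({o: permitted(o) for o in options})   # {'a': True, 'b': True}
```

With these hypothetical assignments each option maximizes expected utility under some admissible pair, so the permissive principle allows either choice.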