The paper presents and discusses the so-called Wrong Kind of Reasons Problem (WKR problem) that arises for the fitting-attitudes analysis of value. This format of analysis is exemplified by Scanlon's buck-passing account, on which an object's value consists in the existence of reasons to favour the object, i.e., to respond to it in a positive way. The WKR problem can be put as follows: It appears that in some situations we might well have reasons to have pro-attitudes toward objects that are not valuable. Or vice versa: we might have reasons not to have pro-attitudes toward some valuable objects. The paper goes through several attempts to solve (or dissolve) the WKR problem and argues that none of them is fully satisfactory.
Abstract: The paper provides a general account of value relations. It takes as its point of departure a special type of value relation, parity, which according to Ruth Chang is a form of evaluative comparability that differs from the three standard forms of comparability: betterness, worseness and equal goodness. Recently, Joshua Gert has suggested that the notion of parity can be accounted for if value comparisons are interpreted as normative assessments of preference. While Gert's basic idea is attractive, the way he develops it is flawed: his modeling of values by intervals of permissible preference strengths is inadequate. Instead, I provide an alternative modeling in terms of intersections of rationally permissible preference orderings. This yields a general taxonomy of all binary value relations. The paper concludes with some implications of this approach for rational choice.
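To give a concrete feel for the intersection modelling (a minimal sketch of my own, with made-up items and orderings, not the paper's formal apparatus): an item is better than another if it is preferred in every rationally permissible ordering, worse if dispreferred in every one, equally good if ranked together in every one, and the remaining combinations yield the non-standard relations, parity among them.

```python
# A minimal sketch (illustrative, not the paper's own formalism): value
# relations read off from a class of permissible preference orderings,
# each represented here by numeric ranking levels (higher = preferred).

permissible = [
    {"x": 2, "y": 1},   # an ordering that ranks x above y
    {"x": 1, "y": 2},   # another that ranks y above x
]

def relation(a, b, orderings):
    above = [o[a] > o[b] for o in orderings]
    below = [o[a] < o[b] for o in orderings]
    if all(above):
        return "better"
    if all(below):
        return "worse"
    if not any(above) and not any(below):
        return "equally good"
    return "non-standard relation (e.g. on a par)"

print(relation("x", "y", permissible))  # -> non-standard relation (e.g. on a par)
```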
The paper argues that the final value of an object-i.e., its value for its own sake-need not be intrinsic. Extrinsic final value, which accrues to things (or persons) in virtue of their relational rather than internal features, cannot be traced back to the intrinsic value of states that involve these things together with their relations. On the contrary, such states, insofar as they are valuable at all, derive their value from the things involved. The endeavour to reduce thing-values to state-values (...) is largely motivated by a mistaken belief that appropriate responses to value must consist in preferring and/or promoting. A pluralist approach to value analysis obviates the need for reduction: the final value of a thing or person can be given an independent interpretation in terms of the appropriate thing- or person-oriented responses: admiration, love, respect, protection, care, cherishing, etc. (shrink)
This paper addresses a problem for theories of epistemic democracy. In a decision on a complex issue which can be decomposed into several parts, a collective can use different voting procedures: either its members vote on each sub-question and the answers that gain majority support are used as premises for the conclusion on the main issue (the premise-based procedure, pbp), or the vote is conducted on the main issue itself (the conclusion-based procedure, cbp). The two procedures can lead to different results. We investigate which of these procedures is better as a truth-tracker, assuming that there exists a true answer to be reached. On the basis of the Condorcet jury theorem, we show that the pbp is universally superior if the objective is to reach truth for the right reasons. If one instead is after truth for whatever reasons, right or wrong, there will be cases in which the cbp is more reliable, even though, for the most part, the pbp still is to be preferred.
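For a concrete feel for the pbp/cbp contrast, here is a rough simulation of my own; the voter numbers and competence level are assumptions, and the conjunctive two-premise agenda is just one simple case:

```python
# A rough simulation sketch (all numbers are assumptions): premise-based
# vs. conclusion-based majority voting when the conclusion is the
# conjunction of two premises, both of which are in fact true.

import random

def run(n_voters=11, competence=0.6, trials=20000):
    pbp_hits = cbp_hits = 0
    for _ in range(trials):
        # each voter judges each premise correctly with prob. `competence`
        votes = [[random.random() < competence for _ in range(2)]
                 for _ in range(n_voters)]
        maj = lambda xs: sum(xs) > len(xs) / 2
        # pbp: majority on each premise, then draw the conclusion
        pbp = maj([v[0] for v in votes]) and maj([v[1] for v in votes])
        # cbp: each voter draws her own conclusion; majority on conclusions
        cbp = maj([v[0] and v[1] for v in votes])
        pbp_hits += pbp      # the conclusion is in fact true
        cbp_hits += cbp
    return pbp_hits / trials, cbp_hits / trials

print(run())  # pbp typically reaches the true conclusion more often here
```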
Suppose that A and B are two kinds of goods such that more of each is better than less. A is strongly superior to B if any amount of A is better than any amount of B. It is weakly superior to B if some amount of A is better than any amount of B. There are many examples of these relations in the literature, sometimes under the labels “higher goods” and “discontinuity.” The chapter gives a precise and generalized statement of Strong and Weak Superiority and discusses different ways in which these relations can be relevant to the aggregation of welfare. It also proves a number of general results. One of the results gives rise to a dilemma: It can be used as an argument against the existence of value superiority or, alternatively, as an argument against the view that superiority entails a radical difference in value.
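In symbols (my notation; the chapter's official statement is more general), with $a$ ranging over amounts of A, $b$ over amounts of B, and $\succ$ for 'better than':

$$\text{A is strongly superior to B} \iff \forall a\,\forall b\ (a \succ b)$$
$$\text{A is weakly superior to B} \iff \exists a\,\forall b\ (a \succ b)$$

Strong superiority thus entails weak superiority, but not conversely.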
In Rabinowicz (2008), I considered how value relations can best be analysed in terms of fitting pro-attitudes. In the formal model of that paper, fitting pro-attitudes are represented by the class of permissible preference orderings on a domain of items that are being compared. As it turns out, this approach opens up a multiplicity of different types of value relationships, along with the standard relations of ‘better’, ‘worse’, ‘equally as good as’ and ‘incomparable in value’. Unfortunately, the approach is vulnerable to a number of objections. I believe these objections can be avoided if one re-interprets the underlying notion of preference: instead of treating preference as a ‘dyadic’ attitude directed towards a pair of items, we can think of it as a difference of degree between ‘monadic’ attitudes of favouring. Each such monadic attitude has just one item as its object. Given this re-interpretation, permissible preferences can be modelled by the class of permissible assignments of degrees of favouring to items in the domain. From this construction, we can then recover the old modelling in terms of the class of permissible preference orderings, but the previous objections to that model no longer apply.
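Schematically (in notation of my own choosing), the new model is a class $D$ of permissible assignments of degrees of favouring, with preference recovered as a difference of degrees:

$$x \text{ is preferred to } y \text{ under } d \in D \iff d(x) > d(y),$$

and the old modelling re-emerges as the class of orderings $\{\succsim_d : d \in D\}$ induced by the permissible assignments.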
Can it be better or worse for a person to exist than not to exist at all? This old and challenging existential question has been raised anew in contemporary moral philosophy, mainly for two reasons. First, traditional “impersonal” ethical theories, such as utilitarianism, have counterintuitive implications in population ethics, for example, the repugnant conclusion. Second, it has seemed evident to many that an outcome can be better than another only if it is better for someone, and that only moral theories that are in this sense “person-affecting” can be correct. The implications of this Person-Affecting Restriction will differ radically, however, depending on which answer one gives to the existential question. The negative answer, which we argue against, would make the restriction quite untenable. Hence, many of the problems regarding our moral duties to future generations turn on the issue at hand.
The ‘buck-passing’ account equates the value of an object with the existence of reasons to favour it. As we argued in an earlier paper, this analysis faces the ‘wrong kind of reasons’ problem: there may be reasons for pro-attitudes towards worthless objects, in particular if it is the pro-attitudes, rather than their objects, that are valuable. Jonas Olson has recently suggested how to resolve this difficulty: a reason to favour an object is of the right kind only if its formulation does not involve any reference to the attitudes for which it provides a reason. We argue that despite its merits, Olson's solution is unsatisfactory. We go on to suggest that the buck-passing account might be acceptable even if the problem in question turns out to be insoluble.
In earlier papers (Lindström & Rabinowicz, 1989, 1990), we proposed a generalization of the AGM approach to belief revision. Our proposal was to view belief revision as a relation rather than as a function on theories (or belief sets). The idea was to allow for there being several equally reasonable revisions of a theory with a given proposition. In the present paper, we show that the relational approach is the natural result of generalizing in a certain way an approach to belief revision due to Adam Grove. In his (1988) paper, Grove presents two closely related modelings of functional belief revision, one in terms of a family of "spheres" around the agent's theory T and the other in terms of an epistemic entrenchment ordering of propositions. The "sphere" terminology is natural when one looks upon theories and propositions as being represented by sets of possible worlds. Grove's spheres may be thought of as possible "fallback" theories relative to the agent's original theory: theories that he may reach by deleting propositions that are not "sufficiently" entrenched (according to standards of sufficient entrenchment of varying stringency). To put it differently, fallbacks are theories that are closed upwards under entrenchment. The entrenchment ordering can be recovered from the family of fallbacks by the definition: A is at least as entrenched as B iff A belongs to every fallback to which B belongs. To revise a theory T with a proposition A, we go to the smallest sphere that contains A-worlds and intersect it with A. The relational notion of belief revision that we are interested in results from weakening epistemic entrenchment by not assuming it to be connected, i.e., we want to allow that some propositions may be incomparable with respect to epistemic entrenchment. As a result, the family of fallbacks around a given theory will no longer have to be nested. This change opens up the possibility of several different ways of revising a theory with a given proposition.
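In possible-worlds notation (mine, though standard for Grove-style models), with theories and propositions represented as sets of worlds, $[A]$ the set of A-worlds, and $\mathcal{F}$ the family of fallbacks around $T$, the two definitions above come out as:

$$A \text{ is at least as entrenched as } B \iff \forall F \in \mathcal{F}\ (F \subseteq [B] \Rightarrow F \subseteq [A])$$
$$T \ast A = S_A \cap [A], \quad \text{where } S_A \text{ is the smallest sphere with } S_A \cap [A] \neq \emptyset.$$

Dropping connectedness of entrenchment then lets $\mathcal{F}$ fail to be nested, so several spheres can be minimal among those intersecting $[A]$, each yielding an admissible revision.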
The spectrum argument purports to show that the better-than relation is not transitive, and consequently that orthodox value theory is built on dubious foundations. The argument works by constructing a sequence of increasingly less painful but more drawn-out experiences, such that each experience in the spectrum is worse than the previous one, yet the final experience is better than the experience with which the spectrum began. Hence the betterness relation admits cycles, threatening either transitivity or asymmetry of the relation. This paper examines recent attempts to block the spectrum argument, using the idea that it is a mistake to affirm that every experience in the spectrum is worse than its predecessor: an alternative hypothesis is that adjacent experiences may be incommensurable in value, or that due to vagueness in the underlying concepts, it is indeterminate which is better. While these attempts formally succeed as responses to the spectrum argument, they have additional, as yet unacknowledged costs that are significant. In order to effectively block the argument in its most typical form, in which the first element is radically inferior to the last, it is necessary to suppose that the incommensurability is particularly acute: what might be called radical incommensurability. We explain these costs, and draw some general lessons about the plausibility of the available options for those who wish to save orthodox axiology from the spectrum argument.
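The alleged structure, schematically (my notation): the spectrum delivers a sequence of experiences $x_1, x_2, \ldots, x_n$ with

$$x_1 \succ x_2 \succ \cdots \succ x_n \quad \text{and yet} \quad x_n \succ x_1,$$

so that transitivity would give $x_1 \succ x_n$, contradicting the asymmetry of 'better than'.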
In “Weighing Lives” (2004) John Broome criticizes a view common to many population axiologists. On that view, population increases with extra people leading decent lives are axiologically neutral: they make the world neither better nor worse, ceteris paribus. Broome argues that this intuition, however attractive, cannot be sustained, for several independent reasons. I respond to his criticisms and suggest that the neutrality intuition, if correctly interpreted, can after all be defended. On the version I defend, the world with added extra people at wellbeing levels within the neutrality range is incommensurable in value with the world in which these people are absent.
This paper casts doubts on John Broome's view that vagueness in value comparisons crowds out incommensurability in value. It shows how vagueness can be imposed on a formal model of value relations that has room for different types of incommensurability. The model implements some basic insights of the 'fitting attitudes' analysis of value.
One might think that money pumps directed at agents with cyclic preferences can be avoided by foresight. This view was challenged two decades ago by the discovery of a money pump with foresight, which works against agents who use backward induction. But backward induction implausibly assumes that the agent would act rationally and retain her trust in her future rationality even at choice nodes that could only be reached if she were to act irrationally. This worry does not apply to BI-terminating decision problems, where at each choice node backward induction prescribes a move that terminates further action. For BI-terminating decision problems, it is enough to assume that rationality and trust in rationality are retained at choice nodes reachable by rational moves. The old money pump with foresight was not BI-terminating. In this paper, we present a new money pump with foresight, one that is both BI-terminating and considerably simpler.
The paper argues that the final value of an object, i.e., its value for its own sake, need not be intrinsic. It need not supervene on the object’s internal properties. Extrinsic final value, which accrues to things in virtue of their relational features, cannot be traced back to the intrinsic value of states that involve these things together with their relations. On the contrary, such states, insofar as they are valuable at all, derive their value from the things involved. The endeavour to reduce thing-values to state-values is largely motivated by a mistaken belief that appropriate responses to value must consist in preferring and/or promoting. A pluralist approach to value analysis obviates the need for reduction: the final value of a thing or a person can be given an independent interpretation in terms of the appropriate thing- or person-oriented responses: admiration, love, respect, protection, cherishing, etc.
The Interpersonal Addition Theorem, due to John Broome, states that, given certain seemingly innocuous assumptions, the overall utility of an uncertain prospect can be represented as the sum of its individual utilities. Given ‘Bernoulli's hypothesis’, according to which individual utility coincides with individual welfare, this result appears to be incompatible with the Priority View. On that view, due to Derek Parfit, the benefits to the worse off should count for more, in the overall evaluation, than the comparable benefits to the better off. Pace Broome, the paper argues that prioritarians should meet this challenge not by denying Bernoulli's hypothesis, but by rejecting one of the basic assumptions behind the addition theorem: that a prospect is better overall if it is better for everyone. This conclusion follows if one interprets the priority weights that are imposed by prioritarians as relevant only to moral, but not to prudential, evaluations of prospects.
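Schematically (my notation, not Broome's): the theorem says the overall utility of a prospect $p$ can be represented as

$$U(p) = \sum_i u_i(p),$$

where $u_i$ is individual $i$'s utility. The Priority View instead evaluates by something like $\sum_i w(u_i(p))$ with a strictly concave weighting function $w$, which is what creates the apparent conflict once Bernoulli's hypothesis equates $u_i$ with welfare.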
What distinguishes preference utilitarianism (PU) from other utilitarian positions is the axiological component: the view concerning what is intrinsically valuable. According to PU, intrinsic value is grounded in preferences: intrinsically valuable states are connected to our preferences being satisfied.
The theories of belief change developed within the AGM-tradition are not logics in the proper sense, but rather informal axiomatic theories of belief change. Instead of characterizing the models of belief and belief change in a formalized object language, the AGM-approach uses a natural language — ordinary mathematical English — to characterize the mathematical structures that are under study. Recently, however, various authors such as Johan van Benthem and Maarten de Rijke have suggested representing doxastic change within a formal logical language: a dynamic modal logic. Inspired by these suggestions, Krister Segerberg has developed a very general logical framework for reasoning about doxastic change: dynamic doxastic logic (DDL). This framework may be seen as an extension of standard Hintikka-style doxastic logic with dynamic operators representing various kinds of transformations of the agent's doxastic state. Basic DDL describes an agent that has opinions about the external world and an ability to change these opinions in the light of new information. Such an agent is non-introspective in the sense that he lacks opinions about his own belief states. Here we are going to discuss various possibilities for developing a dynamic doxastic logic for introspective agents: full DDL or DDL unlimited. The project of constructing such a logic is faced with difficulties due to the fact that the agent’s own doxastic state now becomes a part of the reality that he is trying to explore: when an introspective agent learns more about the world, then the reality he holds beliefs about undergoes a change. But then his introspective (higher-order) beliefs have to be adjusted accordingly. In the paper we shall consider various ways of solving this problem.
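For a flavour of the formalism (my gloss on it, not a quotation from the paper): basic DDL adds to a doxastic operator $B$ dynamic operators such as $[\ast\varphi]$, read 'after revision with $\varphi$'. So

$$[\ast\varphi]\, B\psi$$

says that after revising with $\varphi$, the agent believes $\psi$. Full DDL allows such operators to be nested inside belief contexts, e.g. $B\,[\ast\varphi]\, B\psi$, and it is with such introspective iterations that the difficulties just described arise.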
The well-known argument of Frederick Fitch, purporting to show that verificationism (the thesis that truth implies knowability) entails the absurd conclusion that all truths are known, has been disarmed by Dorothy Edgington's suggestion that the proper formulation of verificationism presupposes that we make use of an actuality operator along with the standardly invoked epistemic and modal operators. According to her interpretation of verificationism, the actual truth of a proposition implies that it could be known in some possible situation that the proposition holds in the actual situation. Thus, suppose that our object language contains the operator A ('it is actually the case that ...') with the following truth condition: $w \vDash A\varphi$ iff $w_0 \vDash \varphi$, where $w_0$ stands for the designated world of the model, the actual world. Then we can formalize the verificationist claim as follows.
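The abstract's closing formula is cut off here; on the standard reconstruction (consistent with the informal gloss above, though the exact symbolism is the author's), naive verificationism $p \to \Diamond Kp$ yields Fitch's collapse, whereas the actuality-indexed version is

$$Ap \to \Diamond K\, Ap,$$

i.e., if it is actually the case that $p$, then it is possible to know that it is actually the case that $p$.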
It is a popular view that practical deliberation excludes foreknowledge of one's choice. Wolfgang Spohn and Isaac Levi have argued that not even a purely probabilistic self-prediction is available to the deliberator, if one takes subjective probabilities to be conceptually linked to betting rates. It makes no sense to have a betting rate for an option, for one's willingness to bet on the option depends on the net gain from the bet, in combination with the option's antecedent utility, rather than on the offered odds. And even apart from this consideration, assigning probabilities to the options among which one is choosing is futile, since such probabilities could be of no possible use in choice. The paper subjects these arguments to critical examination and suggests that, appearances notwithstanding, practical deliberation need not crowd out self-prediction.
Suppose one sets up a sequence of less and less valuable objects such that each object in the sequence is only marginally worse than its immediate predecessor. Could one in this way arrive at something that is dramatically inferior to the point of departure? It has been claimed that if there is a radical value difference between the objects at each end of the sequence, then at some point there must be a corresponding radical difference between the adjacent elements. The underlying picture seems to be that a radical gap cannot be scaled by a series of steps, if none of the steps itself is radical. We show that this picture is incorrect on a stronger interpretation of value superiority, but correct on a weaker one. Thus, the conclusion we reach is that, in some sense at least, abrupt breaks in such decreasing sequences cannot be avoided, but that such unavoidable breaks are less drastic than has been suggested. In an appendix written by John Broome and Wlodek Rabinowicz, the distinction between two kinds of value superiority is extended from objects to their attributes.
The paper’s target is the historically influential betting interpretation of subjective probabilities due to Ramsey and de Finetti. While there are several classical and well-known objections to this interpretation, the paper focuses on just one fundamental problem: there is a sense in which degrees of belief cannot be interpreted as betting rates. The reasons differ in different cases, but there’s one crucial feature that all these cases have in common: the agent’s degree of belief in a proposition A does not coincide with her degree of belief in a conditional that A would be the case if she were to bet on A, where the belief in this conditional is itself conditioned on the supposition that the agent will have an opportunity to make such a bet. Even though the two degrees of belief can sometimes coincide (they will coincide in those cases where the bet has no expected causal bearing on the proposition A and the opportunity to bet has no evidential bearing on that proposition), it is the latter belief rather than the former that guides the agent’s rational betting behaviour. The reason is that this latter belief takes into consideration potential interferences that bet opportunities and betting itself might create with regard to the proposition to be bet on. It is because of this interference problem that the agent’s degree of belief in A cannot be interpreted as her betting rate for A.
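In symbols (notation mine): writing $\mathrm{Bet}_A$ for the act of betting on $A$, $\mathrm{Opp}_A$ for the opportunity to do so, and $\Box\!\to$ for the counterfactual conditional, the claim is that

$$p(A) \;\neq\; p\big(\mathrm{Bet}_A \,\Box\!\to A \;\big|\; \mathrm{Opp}_A\big)$$

in general, and that it is the right-hand quantity, not $p(A)$, that guides rational betting behaviour.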
The authors of this paper earlier argued that concrete objects, such as things or persons, may have final value, which is not reducible to the value of states of affairs that concern the object in question. Our arguments have been challenged. This paper is an attempt to respond to some of these challenges, viz. those that concern the reducibility issue. The discussion presupposes a Brentano-inspired account of value in terms of fitting responses to value bearers. Attention is given to yet another type of reduction proposal, according to which the ultimate bearers of final value are abstract particulars (tropes) rather than abstract states or facts. While the proposal is attractive, it confronts serious difficulties. To recognise tropes as potential bearers of final value, along with other objects, is one thing; but to reduce the final value of concrete objects to the final value of tropes is another matter.
An agent whose preferences violate the Independence Axiom, or for some other reason are not representable by an expected utility function, can avoid 'dynamic inconsistency' either by foresight ('sophisticated choice') or by subsequent adjustment of preferences to the chosen plan of action ('resolute choice'). Contrary to McClennen and Machina, among others, it is argued that these two seemingly conflicting approaches to 'dynamic rationality' need not be incompatible. 'Wise choice' reconciles foresight with a possibility of preference adjustment by rejecting the two assumptions that create the conflict: Separability of Preferences in the case of sophisticated choice and Reduction to Normal Form in the case of resolute choice.
Gert (2004) has suggested that several different types of value relations, including parity, can be clearly distinguished from each other if one interprets value comparisons as normative assessments of preference, while allowing for two levels of normativity: requirement and permission. While this basic idea is attractive, the particular modeling Gert makes use of is flawed. This paper presents an alternative modeling, developed in Rabinowicz (2008), and a general taxonomy of binary value relations. Another version of value analysis is then brought in, which appeals to appropriate emotions rather than preferences. It is also shown what the modeling of value relations would look like from such an emotion-centered perspective. The preference-based and the emotion-based approaches differ importantly from each other, but they give rise to isomorphic taxonomies.
The standard backward-induction reasoning in a game like the centipede assumes that the players maintain a common belief in rationality throughout the game. But that is a dubious assumption. Suppose the first player X didn't terminate the game in the first round; what would the second player Y think then? Since the backward-induction argument says X should terminate the game, and it is supposed to be a sound argument, Y might be entitled to doubt X's rationality. Alternatively, Y might doubt that X believes Y is rational, or that X believes Y believes X is rational, or Y might have some higher-order doubt. X’s deviant first move might therefore cause a breakdown in common belief in rationality. Once that goes, the entire argument fails. The argument also assumes that the players act rationally at each stage of the game, even if this stage could not be reached by rational play. But it is also dubious to assume that past irrationality never exerts a corrupting influence on present play. However, the backward-induction argument can be reconstructed for the centipede game on a more secure basis. It may be implausible to assume a common belief in rationality throughout the game, however the game might go, but the argument requires less than this. The standard idealisations in game theory certainly allow us to assume a common belief in rationality at the beginning of the game. They also allow us to assume this common belief persists so long as no one makes an irrational move. That is enough for the argument to go through.
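To see what the backward-induction recommendation amounts to, here is a minimal solver for a short centipede game (a sketch of my own; the payoff numbers are illustrative assumptions):

```python
# A minimal sketch (illustrative payoffs): backward induction in a short
# centipede game. At each node the mover either takes (ending the game
# with the listed payoff pair) or passes to the other player.

def bi_value(node, take_payoffs, pass_payoff):
    """Backward-induction outcome of the subgame starting at `node`."""
    if node == len(take_payoffs):
        return pass_payoff                    # everyone passed to the end
    mover = node % 2                          # players 0 and 1 alternate
    take = take_payoffs[node]
    cont = bi_value(node + 1, take_payoffs, pass_payoff)
    # the mover takes now iff that is at least as good for her as continuing
    return take if take[mover] >= cont[mover] else cont

take_payoffs = [(1, 0), (0, 2), (3, 1), (2, 4)]   # pots grow, but unevenly
pass_payoff = (5, 3)
print(bi_value(0, take_payoffs, pass_payoff))     # -> (1, 0): take at once
```

Backward induction thus has player X terminate immediately, even though both players would do better if the game ran on, which is what gives the doubts about off-path rationality their bite.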
I describe in section 1 how cyclical preferences can arise. In section 2, I relate preference to judgments of choiceworthiness and distinguish between two kinds of preference cycles, vicious and benign. In section 3, I run through the standard money pump in order to show, in section 4, how this pump can be stopped by foresight, using backward induction. A new money pump that *cannot* be stopped by foresight is presented in section 5. This pump works even for agents with benign cyclical preferences. What makes it work is persistency on the part of the would-be exploiter. In section 6, I compare this pump to a diachronic Dutch book that can be set up against someone whose probability assignments violate Reflection. Even in this case, the book only works if the bookie is assumed to be persistent. I use this comparison between preference cyclicity and violations of Reflection in order to question whether exploitability must be seen as a proof of irrationality. Finally, in section 7, I consider resolute choice as an alternative to the backward-induction procedure. While a resolute chooser cannot be exploited, I argue that resoluteness is not required by rationality. The argument is based on a suggestion that rationality, when it comes to actions, is a local rather than a global requirement.
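The standard pump of section 3, in miniature (a sketch of my own, with made-up goods and fees):

```python
# A minimal sketch (illustrative): the standard money pump against the
# cyclic preferences A > B > C > A. At each step the exploiter offers the
# next good up the cycle in exchange for the agent's current good plus a
# small fee, and the agent, preferring the offer, accepts.

eps = 0.01
better = {("A", "B"), ("B", "C"), ("C", "A")}    # A > B > C > A

holding, paid = "C", 0.0
for offered in ["B", "A", "C"]:                  # one full trip round the cycle
    if (offered, holding) in better:             # agent strictly prefers the offer
        holding, paid = offered, paid + eps
print(holding, round(paid, 2))                   # -> C 0.03: same good, poorer
```

A persistent exploiter simply keeps offering, sending the agent round the cycle again and again.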
The paper presents the main conceptual distinctions underlying much of modern philosophical thinking about value. The introductory Section 1 is followed in Section 2 by an outline of the contrast between non-relational value and relational value. In Section 3, the focus is on the distinction between final and non-final value as well as on different kinds of final value. In Section 4, we consider value relations, such as being better/worse/equally good/on a par. Recent discussions suggest that we might need to considerably extend traditional taxonomies of value relations.
According to the fitting-attitude analysis of value, to be valuable is to be a fitting object of a pro-attitude. In earlier publications, setting out from this format of analysis, I proposed a modelling of value relations which makes room for incommensurability in value. In this paper, I first recapitulate the value modelling and then move on to suggest adopting a structurally similar analysis of probability. Indeed, many probability theorists from Poisson onwards did adopt an analysis of this kind. This move makes it possible to formally model probability and probability relations in essentially the same way as value and value relations. One of the advantages of the model is that we get a new account of Keynesian incommensurable probabilities, which goes beyond Keynes in distinguishing between different types of incommensurability. It also becomes possible to draw a clear distinction between incommensurability and vagueness in probability comparisons.
According to the standard objection to backward induction in games, its application depends on highly questionable assumptions about the players' expectations as regards future counterfactual game developments. It seems that, in order to make the predictions needed for backward reasoning, the players must expect each player to act rationally at each node that in principle could be reached in the game, and also expect that this confidence in the future rationality of the players would be kept by each player come what may: even at the game-nodes that could only be reached by irrational play. Both expectations seem to be rather unreasonable: a player's initial disposition to rational behaviour may be weakened by a long stretch of irrational play on his part and, even more importantly, his initial confidence in the other players' future rationality may be undermined by an irrational play on their part. For different formulations of this objection, see Binmore, Reny, Bicchieri, Pettit and Sugden, and Aumann.
Consider a transitive value ordering of outcomes and lotteries on outcomes, which satisfies substitutivity of equivalents and obeys “continuity for easy cases,” i.e., allows compensating risks of small losses by chances of small improvements. Temkin (2001) has argued that such an ordering must also, rather counter-intuitively, allow chances of small improvements to compensate risks of huge losses. In this paper, we show that Temkin's argument is flawed but that a better proof is possible. However, it is more difficult to determine what conclusions should be drawn from this result. Contrary to what Temkin suggests, substitutivity of equivalents is a notoriously controversial principle. But even in the absence of substitutivity, the counter-intuitive conclusion is derivable from a strengthened version of continuity for easy cases. The best move, therefore, might be to question the latter principle, even in its original simple version: as we argue, continuity for easy cases gives rise to a sorites.
Free will is widely thought to require (i) the possibility of acting otherwise and (ii) the intentional endorsement of one’s actions (“indeterministic picking is not enough”). According to (i), a necessary condition for free will is agential-level indeterminism: at some points in time, an agent’s prior history admits more than one possible continuation. According to (ii), however, a free action must be intentionally endorsed, and indeterminism may threaten freedom: if several alternative actions could each have been actualized, then none of them is necessitated by the agent’s prior history, and the actual action seems nothing more than the result of indeterministic picking. We argue that this tension is only apparent. We distinguish between actions an agent can possibly do and actions he or she can do with endorsement. One can consistently say that someone who makes a choice has several alternative possibilities, and yet that, far from merely indeterministically picking an action, the agent chooses one he or she endorses. An implication is that although free will can consistently require (i) and (ii), it cannot generally require the possibility of acting otherwise with endorsement.
The paper focuses on pragmatic arguments for various rationality constraints on a decision maker’s state of mind: on her beliefs or preferences. An argument of this kind typically targets constraint violations. It purports to show that a violator of a given constraint can be confronted with a decision problem in which she will act to her guaranteed disadvantage. Dramatically put, she can be exploited by a clever bookie who doesn’t know more than the agent herself. Examples of pragmatic arguments of this kind are synchronic Dutch books for the standard probability axioms, diachronic Dutch books for the more controversial principles of reflection and conditionalization, and money pumps for the acyclicity requirement on preferences. The paper suggests that the proposed exploitation set-ups share a common feature. If the violator of a given constraint is logically and mathematically competent, and if she prefers to be better off rather than worse off, she can be exploited only if she is disunified in her decision-making, i.e. only if she makes decisions on the various issues she faces separately rather than jointly. Unification in decision making is relatively unproblematic in synchronic contexts, but it may be costly and inconvenient diachronically. On this view, therefore, pragmatic arguments should be seen as delivering conditional recommendations: if you want to afford disunification, then you’d better satisfy these constraints. They identify safeguards of a disunified mind. Isaac Levi’s position on these matters is diametrically different. According to Levi, only synchronic pragmatic arguments are valid. The diachronic ones, he argues, lack any validity at all. This line of reasoning is questioned in the paper.
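As a toy instance of the synchronic case (the numbers and set-up are my own illustration), take an agent whose betting rates for A and for not-A sum to less than one, and who regards bets at her rates as fair in either direction:

```python
# A minimal sketch (illustrative): a synchronic Dutch book against
# credences violating additivity. The agent's rates for A and not-A sum
# to 0.8 < 1; she sells the bookie a unit bet on each at those rates.

p_A, p_notA = 0.3, 0.5                 # incoherent: should sum to 1
stake = 1.0

for A_true in (True, False):
    received = (p_A + p_notA) * stake  # price collected for both bets
    paid_out = stake                   # exactly one of the bets wins
    print(A_true, round(received - paid_out, 2))   # -> -0.2 either way
```

Notice that the sure loss arises only because the agent prices the two bets separately; pricing the package jointly, i.e. in a unified way, would block the book, which is the paper's point.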
The Puzzle of the Hats is a puzzle in social epistemology. It describes a situation in which a group of rational agents with common priors and common goals seems vulnerable to a Dutch book if they are exposed to different information and make decisions independently. Situations in which this happens involve violations of what might be called the Group-Reflection Principle. As it turns out, the Dutch book is flawed. It is based on the betting interpretation of subjective probabilities, but ignores the fact that this interpretation disregards strategic considerations that might influence betting behavior. A lesson to be learned concerns the interpretation of probabilities in terms of fair bets and, more generally, the role of strategic considerations in epistemic contexts. Another lesson concerns Group-Reflection, which in its unrestricted form is highly counter-intuitive. We consider how this principle of social epistemology should be re-formulated so as to make it tenable.
This paper argues that expected utility theory for actions in chancy environments should be formulated in terms of centered chances. The subjective expected utility of an option A may be seen as a weighted sum of the utilities of A in different possible worlds, with the weights being the credences that the agent assigns to these worlds. The utility of A in a given world is then definable as a weighted sum of the values of A’s different possible outcomes, with the weights being the conditional chances of these outcomes if A were performed. On the centered-chance view, the chances to be used as weights in the definition of utility are centered. Unlike ordinary chances, centered chances depend not only on what happens prior to the agent’s choice but also on the events that occur after the choice. Thus, to give an example, suppose that the action under consideration results in a bad outcome due to some event whose ordinary chance of occurring was very low at the time of choice. Then the utility of that action in the actual world could be high on the non-centered view, but on the centered view that utility is negative, since the centered chance of the event in question, given the action, was one: the event did actually take place. A precise definition of centered chances is not easy to frame, but the concept can be made intuitively clear. The resulting decision theory is, in my opinion, philosophically more satisfactory than the extant proposals, even though it doesn’t differ much in its practical recommendations, with the exception of some rather peculiar cases.
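Schematically (notation mine): with $Cr$ the agent's credence over worlds, $ch_w(\cdot \mid A)$ the (centered) chance function of world $w$, and $V$ the value of outcomes,

$$EU(A) = \sum_w Cr(w)\, U_w(A), \qquad U_w(A) = \sum_o ch_w(o \mid A)\, V(o).$$

On the view defended here, $ch_w$ is a centered chance, so in the actual world it assigns chance one to whatever events in fact occur after the choice.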
This paper puts forward the following claims: (i) The size of inequality in welfare should be distinguished from its badness. (ii) The size of a pairwise inequality between two individuals can be measured by the absolute or the relative welfare distance between their welfare levels, but it does not depend on the welfare levels of other individuals. (iii) The size of inequality in a social state may be understood either as the degree of pairwise inequality or as its amount. (iv) The badness of a pairwise inequality may differ from its size in several ways; for example, the badness measure might go by the distance between priority-transformed welfare levels and/or it might assign heavier weight to larger distances. (v) The badness of a pairwise inequality may be either personal or impersonal, with the personal interpretation being internally consistent and, pace Temkin, independently tenable even if we reject the so-called Slogan (i.e., the Person-Affecting Claim). (vi) The aggregation procedure by which we move from the badness of pairwise inequalities to the badness of the inequality in a social state takes different forms depending on whether the badness of a pairwise inequality is interpreted in a personal or in an impersonal way. (vii) Since Temkin’s complaint-based measures of the badness of inequality follow the format appropriate for the personal interpretation, they seem out of place if one, like him, treats the badness of inequality as an impersonal value.
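For instance (notation and functional forms mine, by way of illustration of claims (ii) and (iv)): the size of the inequality between individuals at welfare levels $w_i > w_j$ might be measured by the absolute distance $w_i - w_j$ or the relative distance $w_i / w_j$, while its badness might take the form $f\big(g(w_i) - g(w_j)\big)$, with $g$ a priority transform and $f$ a weighting function that grows faster than linearly in the distance.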
I begin, in section 1, with a presentation of the Interpersonal Addition Theorem. The theorem, due to John Broome (1991), is a re-formulation of the classical result by Harsanyi (1955). It implies that, given some seemingly mild assumptions, the overall utility of an uncertain prospect can be seen as the sum of its individual utilities. In sections 1 and 2, I discuss the theorem's connection with utilitarianism and in particular consider its implications for the Priority View, according to which benefits to the worse off count for more, in terms of overall utility, than comparable benefits to the better off (cf. Parfit 1995 [1991]). Broome (1991) and Klint Jensen (1996) have argued that, in view of the Interpersonal Addition Theorem, the Priority View should be rejected for measurement-theoretical reasons, and that it therefore cannot be seen as a plausible competitor to utilitarianism (cf. section 1). I will suggest, however, that a proponent of the Priority View would be well advised, on independent grounds, to reject one of the basic assumptions on which the Addition Theorem is based. I have in mind the so-called Principle of Personal Good for uncertain prospects (cf. sections 4 and 5). If the theorem is disarmed in this way, then, as a side benefit, the Priority View will avoid the aforementioned problems with measurement.