This up-to-date introduction to decision theory offers comprehensive and accessible discussions of decision-making under ignorance and risk, the foundations of utility theory, the debate over subjective and objective probability, Bayesianism, causal decision theory, game theory, and social choice theory. No mathematical skills are assumed, and all concepts and results are explained in non-technical and intuitive as well as more formal ways. There are over 100 exercises with solutions, and a glossary of key terms and concepts. An emphasis on foundational aspects of normative decision theory (rather than descriptive decision theory) makes the book particularly useful for philosophy students, but it will appeal to readers in a range of disciplines including economics, psychology, political science and computer science.
• Has over 100 end-of-chapter review questions and exercises with solutions
• Includes a chapter on how to draw a decision matrix
• Explains the link between individual decision making, game theory and social choice theory
Contents: Preface; 1. Introduction; 2. The decision matrix; 3. Decisions under ignorance; 4. Decisions under risk; 5. Utility; 6. The mathematics of probability; 7. The philosophy of probability; 8. Why should we accept the preference axioms?; 9. Causal vs. evidential decision theory; 10. Bayesian vs. non-Bayesian decision theory; 11. Game theory I: basic concepts and zero-sum games; 12. Game theory II: nonzero-sum and co-operative games; 13. Social choice theory; 14. Overview of descriptive decision theory; Appendix A. Glossary; Appendix B. Proof of the von Neumann-Morgenstern theorem; Further reading; Index.
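For readers unfamiliar with the formal machinery, a decision matrix pairs acts with states of the world. The sketch below is my own minimal illustration (the umbrella example and all numbers are invented, not taken from the book); it shows how the same matrix is evaluated under ignorance (maximin) and under risk (expected utility).

```python
# Illustrative sketch (my example, not from the book): a small decision matrix
# evaluated with maximin (decision under ignorance) and with expected utility
# (decision under risk).

# Rows are acts, columns are states of the world; entries are utilities.
acts = {
    "take umbrella":  {"rain": 5, "no rain": 3},
    "leave umbrella": {"rain": 0, "no rain": 6},
}

# Decision under ignorance: maximin picks the act with the best worst case.
maximin_choice = max(acts, key=lambda a: min(acts[a].values()))

# Decision under risk: probabilities over states are assumed to be known.
probs = {"rain": 0.2, "no rain": 0.8}
expected_utility = {
    act: sum(probs[s] * u for s, u in outcomes.items())
    for act, outcomes in acts.items()
}
eu_choice = max(expected_utility, key=expected_utility.get)

print(maximin_choice)   # take umbrella (worst case 3 beats worst case 0)
print(eu_choice)        # leave umbrella (expected utility 4.8 beats 3.4)
```

Note how the two rules can disagree: maximin guards against the worst state regardless of how unlikely it is, whereas expected utility weighs the states by their probabilities.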
Consequentialism, one of the major theories of normative ethics, maintains that the moral rightness of an act is determined solely by the consequences of the act and its alternatives. The traditional form of consequentialism is one-dimensional, in that the rightness of an act is a function of a single moral aspect, such as the sum total of wellbeing it produces. In this book Martin Peterson introduces a new type of consequentialist theory: multidimensional consequentialism. According to this theory, an act's moral rightness depends on several separate dimensions, including individual wellbeing, equality and risk. Peterson's novel approach shows that moral views about equality and risk that were previously thought to be mutually incompatible can be rendered compatible, and his precise theoretical discussion helps the reader to understand better the distinction between consequentialist and non-consequentialist theories. His book will interest a wide range of readers in ethics.
In this analytically oriented work, Peterson articulates and defends five moral principles for addressing ethical issues related to new and existing technologies: the cost-benefit principle, the precautionary principle, the sustainability principle, the autonomy principle, and the fairness principle.
According to the canonical formulation of the modal account of luck [e.g. Pritchard], an event is lucky just when that event occurs in the actual world but not in a wide class of the nearest possible worlds where the relevant conditions for that event are the same as in the actual world. This paper argues, with reference to a novel variety of counterexample, that it is a mistake to focus, when assessing a given event for luckiness, on events distributed over just the nearest possible worlds. More specifically, our objection to the canonical formulation of the modal account of luck reveals that whether an event is lucky depends crucially on events distributed over all possible worlds, viz. across the modal universe. It is shown that an amended modal account of luck which respects this point has the additional virtue of avoiding a notable kind of counterexample to modal accounts of luck proposed by Lackey.
Armchair philosophers have questioned the significance of recent work in experimental philosophy by pointing out that experiments have been conducted on laypeople and undergraduate students. To challenge a practice that relies on expert intuitions, so the armchair objection goes, one needs to demonstrate that expert intuitions rather than those of ordinary people are sensitive to contingent facts such as cultural, linguistic, socio-economic, or educational background. This article does exactly that. Based on two empirical studies on populations of 573 and 203 trained philosophers, respectively, it demonstrates that expert intuitions vary dramatically according to at least one contingent factor, namely, the linguistic background of the expert: philosophers make different intuitive judgments if their native language is English rather than Dutch, German, or Swedish. These findings cast doubt on the common armchair assumption that philosophical theories based on armchair intuitions are valid beyond the linguistic background against which they were developed.
Based on a modern reading of Aristotle’s theory of friendship, we argue that virtual friendship does not qualify as genuine friendship. By ‘virtual friendship’ we mean the type of friendship that exists on the internet and is seldom or never combined with real-life interaction. A ‘traditional friendship’ is, in contrast, the type of friendship that involves substantial real-life interaction, and we claim that only this type can merit the label ‘genuine friendship’ and thus qualify as morally valuable. The upshot of our discussion is that virtual friendship is what Aristotle might have described as a lower and less valuable form of social exchange.
Pure time preference is a preference for something to come at one point in time rather than another merely because of when it occurs in time. In opposition to Sidgwick, Ramsey, Rawls, and Parfit we argue that it is not always irrational to be guided by pure time preferences. We argue that even if the mere difference of location in time is not a rational ground for a preference, time may nevertheless be a normatively neutral ground for a preference, and this makes it plausible to claim that the preference is rationally permitted.
In this article, we defend two claims about the precautionary principle. The first is that there is no ‘core’ precautionary principle that unifies all its different versions. It is more plausible to think of the different versions as being related to each other by way of family resemblances. So although precautionary principle x may have much in common with precautionary principle y, and y with z, there is no set of necessary and sufficient conditions that unify all versions of the principle. Our second claim is that it is sometimes appropriate to think of the precautionary principle as a midlevel principle in the sense proposed by Beauchamp and Childress in their Principles of Biomedical Ethics, i.e. as a non-rigid moral principle. We argue that if the precautionary principle is conceived as a non-rigid principle that needs to be balanced against other principles before a moral verdict can be reached, then this enables us to address some standard objections to the principle.
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in medical image analysis: In these algorithms it is often necessary to set certain thresholds for whether e.g. a cell should count as diseased or not, and the chosen threshold will partly depend on the software designer’s preference between avoiding false positives and false negatives. This preference ultimately depends on a number of value-judgments. In the last section of the paper we discuss some general principles for dealing with ethical issues in algorithm-design.
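To make the point concrete, here is a minimal sketch of the kind of threshold choice at issue; the data, scores, and threshold values are hypothetical and not drawn from the software discussed in the article.

```python
# A minimal sketch (hypothetical data, not the authors' software): how the
# choice of a classification threshold trades false positives against false
# negatives when, e.g., labelling cells as diseased.

# Each cell has a model score in [0, 1] and a ground-truth label (1 = diseased).
cells = [
    (0.15, 0), (0.30, 0), (0.45, 1), (0.55, 0),
    (0.60, 1), (0.70, 0), (0.80, 1), (0.95, 1),
]

def error_counts(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for score, diseased in cells if score >= threshold and not diseased)
    fn = sum(1 for score, diseased in cells if score < threshold and diseased)
    return fp, fn

# A lower threshold avoids false negatives at the price of more false positives,
# and vice versa; which trade-off is "best" is the value judgment at issue.
for t in (0.4, 0.5, 0.65):
    print(t, error_counts(t))   # (2, 0), (2, 1), (1, 2)
```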
In this paper we discuss the hypothesis that ‘moral agency is distributed over both humans and technological artefacts’, recently proposed by Peter-Paul Verbeek. We present some arguments for thinking that Verbeek is mistaken. We argue that artefacts such as bridges, word processors, or bombs can never be (part of) moral agents. After having discussed some possible responses, as well as a moderate view proposed by Illies and Meijers, we conclude that technological artefacts are neutral tools that are at most bearers of instrumental value.
Some philosophers believe that two objects of value can be ‘roughly equal’, or ‘on a par’, or belong to the same ‘clump’ of value in a sense that is fundamentally different from that in which some objects are ‘better than’, ‘worse than’, or ‘equally as good as’ others. This article shows that if two objects are on a par, or belong to the same clump, then an agent accepting a few plausible premises can be exploited in a money-pump. The central premise of the argument is that value is choice-guiding. If one object is more valuable than another, then it is not permitted to choose the less valuable object; and if two objects are equally valuable it is permitted to choose either of them; and if two objects are on a par or belong to the same clump it is also permitted to choose either of them.
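As a rough illustration of how the exploitation works, the toy sketch below encodes the choice-guiding reading of parity and walks an agent through a sequence of permitted swaps that leaves her strictly worse off; the labels A, A+, and B and the exact setup are my own illustrative assumptions, not the article's formal argument.

```python
# A toy sketch (my illustration) of a parity-based money pump.  Assume A+ is
# strictly better than A, while A+ and B, and A and B, are merely "on a par".

on_a_par = {frozenset({"A+", "B"}), frozenset({"A", "B"})}
strictly_better = {("A+", "A")}   # A+ is better than A

def swap_permitted(current, offered):
    """Swapping is permitted if the offer is better than, or on a par with, what we hold."""
    return (offered, current) in strictly_better or frozenset({current, offered}) in on_a_par

holding = "A+"
for offer in ("B", "A"):          # the exploiter's sequence of offers
    if swap_permitted(holding, offer):
        holding = offer

# The agent now holds A, strictly worse than the A+ she started with; she would
# pay a positive amount to get A+ back, which completes the money pump.
print(holding)                              # "A"
print(("A+", holding) in strictly_better)   # True: she ended up strictly worse off
```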
We argue that non-epistemic values, including moral ones, play an important role in the construction and choice of models in science and engineering. Our main claim is that non-epistemic values are not only “secondary values” that become important just in case epistemic values leave some issues open. Our point is, on the contrary, that non-epistemic values are as important as epistemic ones when engineers seek to develop the best model of a process or problem. The upshot is that models are neither value-free, nor do they depend exclusively on epistemic values, nor do they use non-epistemic values merely as tie-breakers.
This article questions the traditional view that moral rightness and wrongness are discrete predicates with sharp boundaries. I contend that moral rightness and wrongness come in degrees: Some acts are somewhat right and somewhat wrong. My argument is based on the assumption that meaning tracks use. If an overwhelming majority of competent language users frequently say that some acts are a bit right and a bit wrong, this indicates that rightness and wrongness are gradable concepts. To support the empirical part of the argument I use the tools of experimental philosophy. Results from three surveys (n = 715, 578, and 182) indicate that respondents use ‘right’ and ‘wrong’ as gradable terms to approximately the same extent as color terms, meaning that rightness and wrongness come in degrees roughly as much as colors do. In the largest study, only 4 percent persistently used ‘right’ and ‘wrong’ as non-gradable terms.
In this paper we present two distinctly epistemological puzzles that arise for one who aspires to defend some plausible version of the precautionary principle. The first puzzle involves an application of contextualism in epistemology; and the second puzzle concerns the task of defending a plausible version of the precautionary principle that would not be invalidated by de minimis.
To consequentialise a moral theory means to account for moral phenomena usually described in nonconsequentialist terms, such as rights, duties, and virtues, in a consequentialist framework. This paper seeks to show that all moral theories can be consequentialised. The paper distinguishes between different interpretations of the consequentialiser’s thesis, and emphasises the need for a cardinal ranking of acts. The paper also offers a new answer as to why consequentialising moral theories is important: this yields crucial methodological insights about how to pursue ethical inquiries.
This article discusses some aspects of animal ethics from an Aristotelian virtue ethics point of view. Because the notion of friendship is central to Aristotle’s ethical theory, the focus of the article is whether humans and animals can be friends. It is argued that new empirical findings in cognitive ethology indicate that animals actually do fulfill the Aristotelian condition for friendship based on mutual advantage. The practical ethical implications of these findings are discussed, and it is argued that eating meat from free-living animals is more morally acceptable than eating cattle because hunters do not befriend their prey.
In this paper we discuss what we believe to be one of the most important features of near-future AIs, namely their capacity to behave in a friendly manner to humans. Our analysis of what it means for an AI to behave in a friendly manner does not presuppose that proper friendships between humans and AI systems could exist. That would require reciprocity, which is beyond the reach of near-future AI systems. Rather, we defend the claim that social AIs should be programmed to behave in a manner that mimics a sufficient number of aspects of proper friendship. We call this “as-if friendship”. The main reason why we believe that “as-if friendship” is an improvement on the current, highly submissive behavior displayed by AIs is the negative effects the latter can have on humans. We defend this view partly on virtue-ethical grounds, and we argue that the virtue-based approach to AI ethics outlined in this paper, which we call “virtue alignment”, is an improvement on the traditional “value alignment” approach.
Hare proposes a view he calls prospectism for making choices in situations in which preferences have a common, but problematic structure. I show that prospectism permits the decision-maker to make a series of choices she knows in advance will lead to a sure loss. I also argue that a theory that permits the decision-maker to make choices she knows in advance will lead to a sure loss should be rejected.
Cost-benefit analysis is commonly understood to be intimately connected with utilitarianism and incompatible with other moral theories, particularly those that focus on deontological concepts such as rights. We reject this claim and argue that cost-benefit analysis can take moral rights as well as other non-utilitarian moral considerations into account in a systematic manner. We discuss three ways of doing this, and claim that two of them (output filters and input filters) can account for a wide range of rights-based moral theories, including the absolute notions of moral rights proposed by Hayek, Mayo, Nozick, and Shue. We also discuss whether the use of output filters and input filters can be generalized to cover other non-utilitarian theories, such as Kantian duty ethics and virtue ethics.
You must either save a group of m people or a group of n people. If there are no morally relevant differences among the people, which group should you save? This problem is known as the number problem. The recent discussion has focussed on three proposals: (i) Save the greatest number of people, (ii) Toss a fair coin, or (iii) Set up a weighted lottery, in which the probability of saving m people is m/(m + n), and the probability of saving n people is n/(m + n). This contribution examines a fourth alternative, the mixed solution, according to which both fairness and the total number of people saved count. It is shown that the mixed solution can be defended without assuming the possibility of interpersonal comparisons of value.
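For concreteness, here is a minimal sketch of the weighted lottery in proposal (iii); the example numbers are my own.

```python
# A minimal sketch (illustrative only) of the weighted lottery in option (iii):
# the group of m people is saved with probability m/(m + n).

import random

def weighted_lottery(m, n, rng=random):
    """Return 'save m' with probability m/(m + n), otherwise 'save n'."""
    return "save m" if rng.random() < m / (m + n) else "save n"

# With m = 3 and n = 1, the larger group wins the lottery 75% of the time, so
# the rule is sensitive to numbers without simply ignoring the smaller group.
print(weighted_lottery(3, 1))
```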
It is a natural assumption in mainstream epistemological theory that ascriptions of knowledge of a proposition p track strength of epistemic position vis-à-vis p. It is equally natural to assume that the strength of one’s epistemic position is maximally high in cases where p concerns a simple analytic truth. For instance, it seems reasonable to suppose that one’s epistemic position vis-à-vis “a cat is a cat” is harder to improve than one’s position vis-à-vis “a cat is on the mat”, and consequently, that the former is at least as unambiguous a case of knowledge as the latter. The current paper, however, presents empirical evidence which challenges this intuitive line of reasoning. Our study on the epistemic intuitions of hundreds of academic philosophers supports the idea that simple and uncontroversial analytic propositions are less likely to qualify as knowledge than empirical ones. We show that our results, though at odds with orthodox theories of knowledge in mainstream epistemology, can be explained in a way consistent with Wittgenstein’s remarks on ‘hinge propositions’ or with Stalnaker’s pragmatics of assertion. We then present and evaluate a number of lines of response mainstream theories of knowledge could appeal to in accommodating our results. Finally, we show how each line of response runs into some prima facie difficulties. Thus, our observed asymmetry between knowing “a cat is a cat” and knowing “a cat is on the mat” presents a puzzle which mainstream epistemology needs to resolve.
This article seeks to contribute to the discussion on the nature of choice in virtue theory. If several different actions are available to the virtuous agent, they are also likely to vary in their degree of virtue, at least in some situations. Yet, it is widely agreed that once an action is recognised as virtuous there is no higher level of virtue. In this paper we discuss how the virtue theorist could accommodate both these seemingly conflicting ideas. We discuss this issue from a modern Aristotelian perspective, as opposed to a purely exegetic one. We propose a way of resolving what seems to be a major clash between two central features of virtue ethics. Our proposal is based on the notion of parity, a concept which has recently received considerable attention in the literature on axiology. Briefly put, two alternatives are on a par (or are ‘roughly equal’) if they are comparable, although it is not the case that one is better than the other, nor that they are equally good. The advantages of applying the concept of parity to our problem are twofold. Firstly, it sheds new light on the account of choice in virtue theory. Secondly, some of the criticisms that have been mounted against the possibility of parity can be countered by considering the notion of choice from a virtue theory perspective.
In a recent paper in this journal, we proposed two novel puzzles associated with the precautionary principle. Both are puzzles that materialise, we argue, once we investigate the principle through an epistemological lens, and each constitutes a philosophical hurdle for any proponent of a plausible version of the precautionary principle. Steglich-Petersen claims, also in this journal, that he has resolved our puzzles. In this short note, we explain why we remain skeptical.
This article addresses Taurek’s much discussed Number Problem from a non-consequentialist point of view. I argue that some versions of the Number Problem have no solution, meaning that no alternative is at least as choice-worthy as the others, and that the best way to behave in light of such moral indeterminacy is to let chance make the decision. I contrast my proposal with F. M. Kamm’s nonconsequentialist argument for saving the greatest number, the Argument for Best Outcomes, which I argue does not follow from the premises it is based on.
We show that in infinite worlds the following three conditions are incompatible: (1) the spatiotemporal ordering of individuals is morally irrelevant; (2) all else being equal, the act of bringing about a good outcome with a high probability is better than the act of bringing about the same outcome with a low probability; (3) one act is better than another only if there is a nonzero probability that it brings about a better outcome. The impossibility of combining these conditions shows that endorsing them is more costly than has been previously acknowledged.
The Ethics of Technology: A Geometric Analysis of Five Moral Principles proposes five moral principles for analyzing ethical issues related to engineering and technology. The objections raised by several authors to the multidimensional scaling technique used in the book reveal a lack of familiarity with this widely used technique.
The debate over the civilian use of nuclear power is highly polarised. We argue that a reasonable response to this deep disagreement is to maintain that advocates of both camps should modify their positions. According to the analysis we propose, nuclear power is neither entirely right nor entirely wrong, but rather right and wrong to some degree. We are aware that this non-binary analysis of nuclear power is controversial from a theoretical point of view. Utilitarians, Kantians, and other moral theorists make sharp, binary distinctions between right and wrong acts. However, an important argument speaking in favour of our non-binary analysis is that it better reflects our considered intuitions about the ethical trade-offs we face in discussions of nuclear power. The aim of this article is to make this argument sharp by explaining how it can be rendered compatible with, and supported by, the Capability Approach, which is quickly becoming one of the most influential frameworks for thinking about human development.
This article discusses some ethical principles for distributing pandemic influenza vaccine and other indivisible goods. I argue that a number of principles for distributing pandemic influenza vaccine recently adopted by several national governments are morally unacceptable because they put too much emphasis on utilitarian considerations, such as the ability of the individual to contribute to society. Instead, it would be better to distribute vaccine by setting up a lottery. The argument for this view is based on a purely consequentialist account of morality; i.e. an action is right if and only if its outcome is optimal. However, unlike utilitarians I do not believe that alternatives should be ranked strictly according to the amount of happiness or preference satisfaction they bring about. Even a mere chance to get some vaccine matters morally, even if it is never realized.
Stuart Russell defines the value alignment problem as follows: How can we build autonomous systems with values that “are aligned with those of the human race”? In this article I outline some distinctions that are useful for understanding the value alignment problem and then propose a solution: I argue that the methods currently applied by computer scientists for embedding moral values in autonomous systems can be improved by representing moral principles as conceptual spaces, i.e. as Voronoi tessellations of morally similar choice situations located in a multidimensional geometric space. The advantage of my preferred geometric approach is that it can be implemented without specifying any utility function ex ante.
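The sketch below is a hedged reconstruction of the geometric idea, not Peterson's actual implementation: principles are prototype points in a hypothetical feature space, and classifying a new choice situation by its nearest prototype induces a Voronoi tessellation of the space, with no utility function specified in advance.

```python
# A hedged sketch (my reconstruction, not the article's system): moral
# principles are prototype points in a feature space, and a new choice
# situation is governed by the nearest prototype.  Nearest-prototype
# classification partitions the space into Voronoi cells.

import math

# Hypothetical two-dimensional feature space, e.g. (severity of harm, reversibility).
prototypes = {
    "precautionary principle": (0.9, 0.2),
    "cost-benefit principle":  (0.3, 0.8),
}

def nearest_principle(situation):
    """Return the principle whose prototype is closest to the situation."""
    return min(prototypes, key=lambda p: math.dist(situation, prototypes[p]))

# A situation with severe, hard-to-reverse outcomes falls in the precautionary cell.
print(nearest_principle((0.8, 0.3)))   # precautionary principle
print(nearest_principle((0.2, 0.9)))   # cost-benefit principle
```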
In this paper we shed new light on the Argument from Disagreement by putting it to test in a computer simulation. According to this argument, widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by any moral facts, either because no such facts exist or because they are epistemically inaccessible or inefficacious for some other reason. Our simulation shows that if our moral opinions were influenced at least a little bit by moral facts, we would quickly have reached consensus, even if our moral opinions were affected by factors such as false authorities, external political shifts, and random processes. Therefore, since no such consensus has been reached, the simulation gives us increased reason to take the Argument from Disagreement seriously. Our conclusion is, however, not conclusive; the simulation also indicates what assumptions one has to make in order to reject the Argument from Disagreement. The simulation algorithm we use builds on the work of Hegselmann and Krause (J Artif Soc Social Simul 5(3), 2002; J Artif Soc Social Simul 9(3), 2006).
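A compact sketch of the kind of bounded-confidence dynamics used in such simulations is given below; it follows the Hegselmann-Krause model with an added pull towards a stipulated truth, and all parameter values are illustrative rather than those used in the paper.

```python
# A compact sketch of Hegselmann-Krause bounded-confidence opinion dynamics
# with an added attraction towards the "truth"; parameter values are
# illustrative, not the paper's.

import random

N, EPSILON, ALPHA, STEPS = 50, 0.2, 0.1, 200   # agents, confidence bound, pull towards truth, iterations
TRUTH = 0.7                                    # the stipulated moral fact

opinions = [random.random() for _ in range(N)]

for _ in range(STEPS):
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= EPSILON]   # opinions within the confidence bound
        social = sum(peers) / len(peers)                          # average over nearby opinions
        new.append((1 - ALPHA) * social + ALPHA * TRUTH)          # even a weak pull towards the truth
    opinions = new

# With this setup the population quickly clusters around the truth, so
# persistent real-world disagreement tells against such an influence.
print(min(opinions), max(opinions))
```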
Purpose: The purpose of this paper is to argue that playing computer games for lengthy periods of time, even in a manner that forces the player to forgo certain other activities normally seen as more important, can be an integral part of human flourishing. Design/methodology/approach: The authors' claim is based on a modern reading of Aristotle's Nicomachean Ethics. It should be emphasized that the authors do not argue that computer gaming and other similar online activities are central to all people under all circumstances; they only seek to show that the claim holds true for some people under some circumstances, and they try to spell out the relevant circumstances in detail. Findings: The authors provide a list of situations in which playing computer games for lengthy periods of time, in a manner that forces the player to forgo certain other activities normally seen as more important, is an integral part of human flourishing. Originality/value: The paper puts some novel pressure on the widely held belief that playing computer games for lengthy periods of time, in a manner that forces the player to forgo certain other activities normally seen as more important, cannot be part of a flourishing life. The paper claims that playing some computer games and partaking in some forms of online activities could be highly conducive to what it actually means in practice to take care of oneself and, to paraphrase Aristotle, to be eager for fine actions.
The contention of this paper is that the current ethical debate over embryonic stem cell research is polarised to an extent that is not warranted by the underlying ethical conflict. It is argued that the ethical debate can be rendered more nuanced, and less polarised, by introducing non-binary notions of moral rightness and wrongness. According to the view proposed, embryonic stem cell research—and possibly other controversial activities too—can be considered ‘a little bit right and a little bit wrong’. If this idea were to become widely accepted, the ethical debate would, for conceptual reasons, become less polarised.
This article argues that, contrary to the received view, prioritarianism and egalitarianism are not jointly incompatible theories in normative ethics. By introducing a distinction between weighing and aggregating, the authors show that the seemingly conflicting intuitions underlying prioritarianism and egalitarianism are consistent. The upshot is a combined position, equality-prioritarianism, which takes both prioritarian and egalitarian considerations into account in a technically precise manner. On this view, the moral value of a distribution of well-being is a product of two factors: the sum of all individuals' priority-adjusted well-being, and a measure of the equality of the distribution in question. Some implications of equality-prioritarianism are considered.
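As a purely numerical illustration of the product structure, the sketch below combines a priority-adjusted sum with an equality measure; the specific functional forms (a square-root priority weighting and one minus the Gini coefficient) are my own assumptions, not the authors' official proposal.

```python
# A hedged numerical sketch of equality-prioritarianism: moral value as the
# product of a priority-adjusted sum and an equality measure.  The square-root
# weighting and the Gini-based equality measure are illustrative choices only.

def priority_adjusted_sum(wellbeing):
    return sum(w ** 0.5 for w in wellbeing)        # diminishing marginal moral value

def equality(wellbeing):
    n, mean = len(wellbeing), sum(wellbeing) / len(wellbeing)
    gini = sum(abs(a - b) for a in wellbeing for b in wellbeing) / (2 * n * n * mean)
    return 1 - gini                                # 1 = perfect equality

def equality_prioritarian_value(wellbeing):
    return priority_adjusted_sum(wellbeing) * equality(wellbeing)

# A perfectly equal distribution can beat one with a slightly larger total.
print(equality_prioritarian_value([5, 5]))   # ~4.47
print(equality_prioritarian_value([9, 2]))   # ~3.01
```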
We discuss ethical aspects of risk-taking with special focus on principlism and mid-level moral principles. A new distinction between the strength of an obligation and the degree to which it is valid is proposed. We then use this distinction for arguing that, in cases where mid-level moral principles come into conflict, the moral status of the act under consideration may be indeterminate, in a sense rendered precise in the paper. We apply this thought to issues related to pandemic influenza vaccines. The main conclusion of the paper is that on a principlist approach some acts may be neither right nor wrong (or neither permissible nor impermissible), and we claim that this has important implications for how we ought to make decisions under risk.
It is commonly assumed that preferences are determinate; that is, that an agent who has a preference knows that she has the preference in question and is disposed to act upon it. This paper argues that this assumption is dubious. An account of indeterminate preferences in terms of self-predicting subjective probabilities is given, and a decision rule for choices involving indeterminate preferences is proposed. Wolfgang Spohn’s and Isaac Levi’s arguments against self-predicting probabilities are also considered, in light of Wlodek Rabinowicz’s recent criticism.
It is widely believed that consequentialists are committed to the claim that persons are mere containers for well-being. In this article I challenge this view by proposing a new version of consequentialism, according to which the identities of persons matter. The new theory, two-dimensional prioritarianism, is a natural extension of traditional prioritarianism. Two-dimensional prioritarianism holds that well-being matters more for persons who are at a low absolute level than for persons who are at a higher level, and that it is worse to be deprived of a given number of units than it is good to gain the same number of units, even if the new distribution is a permutation of the original one. If a fixed amount of well-being is transferred from one person to another and then transferred back again, two-dimensional prioritarianism implies that it would have been better to preserve the status quo.
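The sketch below illustrates the loss-aversion feature with invented functional forms (a square-root priority weighting and a doubled weight on losses); it is not the paper's formal definition, but it shows how a permutation of a distribution can score worse than preserving the status quo.

```python
# A rough sketch of the two-dimensional idea (illustrative functional forms
# only): value depends both on priority-weighted levels and on who gains or
# loses relative to the status quo, with losses weighted more heavily.

LOSS_WEIGHT = 2.0                       # assumed: a lost unit counts twice as much as a gained unit

def priority(w):
    return w ** 0.5                     # well-being matters more at low absolute levels

def value_of_change(old, new):
    level_value = sum(priority(w) for w in new)
    change_value = sum(
        (n - o) if n >= o else LOSS_WEIGHT * (n - o)   # losses weighted more heavily than gains
        for o, n in zip(old, new)
    )
    return level_value + change_value

status_quo = [4.0, 9.0]
permuted   = [9.0, 4.0]                 # same total, but person 1 gains and person 2 loses

# The permutation scores lower than preserving the status quo, because the loss
# to person 2 outweighs the equal-sized gain to person 1.
print(value_of_change(status_quo, status_quo))  # 5.0
print(value_of_change(status_quo, permuted))    # 0.0
```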
Van de Poel argues that nuclear power should be treated as an ongoing social experiment that needs to be continuously monitored and evaluated. In his reports (2009; Jacobs, Van de Poel, & Os...
In this article I respond to comments and objections raised in the special issue on my book The Dimensions of Consequentialism. I defend my multi-dimensional consequentialist theory against a range of challenges articulated by Thomas Schmidt, Campbell Brown, Frances Howard-Snyder, Roger Crisp, Vuko Andric and Attila Tanyi, and Jan Gertken. My aim is to show that multi-dimensional consequentialism is, at least, a coherent and intuitively plausible alternative to one-dimensional theories such as utilitarianism, prioritarianism, and mainstream accounts of egalitarianism. I am very grateful to all contributors for reading my book so closely and for devoting time and intellectual energy to thinking about the pros and cons of multi-dimensional consequentialism.
In this paper I respond to van de Poel’s claim that new technologies should be conceived as ongoing social experiments, which is an idea originally introduced by Schinzinger and Martin in the 1970s. I discuss and criticize three possible motivations for thinking of new technologies as ongoing social experiments.
Pragmatic arguments seek to demonstrate that you can be placed in a situation in which you will face a sure and foreseeable loss if you do not behave in accordance with some principle P. In this article I show that for every P entailed by the principle of maximizing expected utility you will not be better off from a pragmatic point of view if you accept P than if you don’t, because even if you obey the axioms of expected utility theory it is possible to place you in a situation in which you will face a certain and foreseeable loss. This shows that for a large class of Ps, there is no pragmatic difference between people who accept P and those who don’t.
Can humans be friends with animals? If so, what would the moral implications of such friendship be? In a previous issue of this journal, we argued that humans can indeed be friends with animals and that such friendships are morally valuable. The present article is a comment on Mark Rowlands’s reply to our original article. We argue that our original argument is not undermined by Rowlands’s attack.