This up-to-date introduction to decision theory offers comprehensive and accessible discussions of decision-making under ignorance and risk, the foundations of utility theory, the debate over subjective and objective probability, Bayesianism, causal decision theory, game theory, and social choice theory. No mathematical skills are assumed, and all concepts and results are explained in non-technical and intuitive as well as more formal ways. There are over 100 exercises with solutions, and a glossary of key terms and concepts. An emphasis on foundational aspects of normative decision theory (rather than descriptive decision theory) makes the book particularly useful for philosophy students, but it will appeal to readers in a range of disciplines including economics, psychology, political science and computer science. • Has over 100 end-of-chapter review questions and exercises with solutions • Includes a chapter on how to draw a decision matrix • Explains the link between individual decision making, game theory and social choice theory Contents Preface; 1. Introduction; 2. The decision matrix; 3. Decisions under ignorance; 4. Decisions under risk; 5. Utility; 6. The mathematics of probability; 7. The philosophy of probability; 8. Why should we accept the preference axioms?; 9. Causal vs. evidential decision theory; 10. Bayesian vs. non-Bayesian decision theory; 11. Game theory I: basic concepts and zero sum games; 12. Game theory II: nonzero sum and co-operative games; 13. Social choice theory; 14. Overview of descriptive decision theory; Appendix A. Glossary; Appendix B. Proof of the von Neumann-Morgenstern theorem; Further reading; Index.
Consequentialism, one of the major theories of normative ethics, maintains that the moral rightness of an act is determined solely by the consequences of the act and of its alternatives. The traditional form of consequentialism is one-dimensional, in that the rightness of an act is a function of a single moral aspect, such as the sum total of wellbeing it produces. In this book Martin Peterson introduces a new type of consequentialist theory: multidimensional consequentialism. According to this theory, an act's moral rightness depends on several separate dimensions, including individual wellbeing, equality and risk. Peterson's novel approach shows that moral views about equality and risk that were previously thought to be mutually incompatible can be rendered compatible, and his precise theoretical discussion helps the reader to understand better the distinction between consequentialist and non-consequentialist theories. His book will interest a wide range of readers in ethics.
In this analytically oriented work, Peterson articulates and defends five moral principles for addressing ethical issues related to new and existing technologies: the cost-benefit principle, the precautionary principle, the sustainability principle, the autonomy principle, and the fairness principle.
According to the canonical formulation of the modal account of luck [e.g. Pritchard], an event is lucky just when that event occurs in the actual world but not in a wide class of the nearest possible worlds where the relevant conditions for that event are the same as in the actual world. This paper argues, with reference to a novel variety of counterexample, that it is a mistake to focus, when assessing a given event for luckiness, on events distributed over just the nearest possible worlds. More specifically, our objection to the canonical formulation of the modal account of luck reveals that whether an event is lucky depends crucially on events distributed over all possible worlds, viz. across the modal universe. It is shown that an amended modal account of luck which respects this point has the additional virtue of avoiding a notable kind of counterexample to modal accounts of luck proposed by Lackey.
Armchair philosophers have questioned the significance of recent work in experimental philosophy by pointing out that experiments have been conducted on laypeople and undergraduate students. To challenge a practice that relies on expert intuitions, so the armchair objection goes, one needs to demonstrate that expert intuitions rather than those of ordinary people are sensitive to contingent facts such as cultural, linguistic, socio-economic, or educational background. This article does exactly that. Based on two empirical studies on populations of 573 and 203 trained philosophers, respectively, it demonstrates that expert intuitions vary dramatically according to at least one contingent factor, namely, the linguistic background of the expert: philosophers make different intuitive judgments if their native language is English rather than Dutch, German, or Swedish. These findings cast doubt on the common armchair assumption that philosophical theories based on armchair intuitions are valid beyond the linguistic background against which they were developed.
"This highly-readable work traces a set of beliefs about the nature of woman that have informed, and in turn have been reinforced by, science, religion, and philosophy from the classical period to the nineteenth century.... [T]his book’s analysis lends support to claims that the gender system affected our very conceptions of science." —Journal of the History of the Behavioral Sciences "An important book for the educated general public as well as for scholars in many disciplines. Highly recommended." —Library Journal "Students and researchers alike will welcome this carefully argued volume that so clearly traces the dominant contours of Western conceptions about women." —Isis "Nancy Tuana’s book is brilliant. In under two hundred pages she presents a concise account of how women have been perceived in relation to men in the Western world for the past 2,500 years." —American Historical Review "A wide-ranging discussion of conceptions of women in science, philosophy and religion from ancient times to the late nineteenth century, Tuana’s book makes it devastatingly clear how powerful and how deeply rooted was the Western idea of women as men’s inferiors." —Women’s Review of Books "... an unusually readable account of the image of women from the Greeks to the nineteenth century, wedded to a highly interesting argument about the way religion and philosophy affect the direction of the work of scientists, and how the work of scientists is used by philosophers and clergy to give authority to the more abstract world of ideas." —Magill Book Reviews Provides a framework for understanding the persistence of the Western patriarchal view of woman as inferior. Tuana examines beliefs that were accepted a priori as evidence of women’s inferiority and studies early theories of woman’s nature to illustrate the way scientific literature was influenced by—and in turn affected—religious and philosophical tenets.
What is the status of belief in God? Must a rational case be made or can such belief be properly basic? Is it possible to reconcile the concept of a good God with evil and suffering? In light of great differences among religions, can only one religion be true? The most comprehensive work of its kind, Reason and Religious Belief, now in its fifth edition, explores these and other perennial questions in the philosophy of religion. Drawing from the best in both classical and contemporary discussions, the authors examine religious experience, faith and reason, the divine attributes, arguments for and against the existence of God, divine action (in various forms of theism), Reformed epistemology, religious language, religious diversity, religion and science, and much more. Retaining the engaging style and thorough coverage of previous editions, the fifth edition features revised treatments of omnipotence, miracles, and providence and updated suggestions for further reading. A sophisticated yet accessible introduction, Reason and Religious Belief, Fifth Edition, is ideally suited for use with the authors' companion anthology, Philosophy of Religion: Selected Readings, Fifth Edition (OUP, 2015).
Based on a modern reading of Aristotle’s theory of friendship, we argue that virtual friendship does not qualify as genuine friendship. By ‘virtual friendship’ we mean the type of friendship that exists on the internet and is seldom or never combined with real life interaction. A ‘traditional friendship’ is, in contrast, the type of friendship that involves substantial real life interaction, and we claim that only this type can merit the label ‘genuine friendship’ and thus qualify as morally valuable. The upshot of our discussion is that virtual friendship is what Aristotle might have described as a lower and less valuable form of social exchange.
In this article, we defend two claims about the precautionary principle. The first is that there is no ‘core’ precautionary principle that unifies all its different versions. It is more plausible to think of the different versions as being related to each other by way of family resemblances. So although precautionary principle x may have much in common with precautionary principle y, and y with z, there is no set of necessary and sufficient conditions that unify all versions of the principle. Our second claim is that it is sometimes appropriate to think of the precautionary principle as a midlevel principle in the sense proposed by Beauchamp and Childress in their Principles of Biomedical Ethics, i.e. as a non-rigid moral principle. We argue that if the precautionary principle is conceived as a non-rigid principle that needs to be balanced against other principles before a moral verdict can be reached, then this enables us to address some standard objections to the principle.
Pure time preference is a preference for something to come at one point in time rather than another merely because of when it occurs in time. In opposition to Sidgwick, Ramsey, Rawls, and Parfit we argue that it is not always irrational to be guided by pure time preferences. We argue that even if the mere difference of location in time is not a rational ground for a preference, time may nevertheless be a normatively neutral ground for a preference, and this makes it plausible to claim that the preference is rationally permitted.
We argue that some algorithms are value-laden, and that two or more persons who accept different value-judgments may have a rational reason to design such algorithms differently. We exemplify our claim by discussing a set of algorithms used in medical image analysis: In these algorithms it is often necessary to set certain thresholds for whether e.g. a cell should count as diseased or not, and the chosen threshold will partly depend on the software designer’s preference between avoiding false positives and false negatives. This preference ultimately depends on a number of value-judgments. In the last section of the paper we discuss some general principles for dealing with ethical issues in algorithm-design.
In this paper we discuss the hypothesis that, ‘moral agency is distributed over both humans and technological artefacts’, recently proposed by Peter-Paul Verbeek. We present some arguments for thinking that Verbeek is mistaken. We argue that artefacts such as bridges, word processors, or bombs can never be (part of) moral agents. After having discussed some possible responses, as well as a moderate view proposed by Illies and Meijers, we conclude that technological artefacts are neutral tools that are at most bearers of instrumental value.
This article questions the traditional view that moral rightness and wrongness are discrete predicates with sharp boundaries. I contend that moral rightness and wrongness come in degrees: Some acts are somewhat right and somewhat wrong. My argument is based on the assumption that meaning tracks use. If an overwhelming majority of competent language users frequently say that some acts are a bit right and a bit wrong, this indicates that rightness and wrongness are gradable concepts. To support the empirical part of the argument I use the tools of experimental philosophy. Results from three surveys indicate that respondents use right and wrong as gradable terms to approximately the same extent as color terms, meaning that rightness and wrongness come in degrees roughly as much as colors do. In the largest study, only 4 percent persistently used right and wrong as non-gradable terms.
Some philosophers believe that two objects of value can be ‘roughly equal’, or ‘on a par’, or belong to the same ‘clump’ of value in a sense that is fundamentally different from that in which some objects are ‘better than’, ‘worse than’, or ‘equally as good as’ others. This article shows that if two objects are on a par, or belong to the same clump, then an agent accepting a few plausible premises can be exploited in a money-pump. The central premise of the argument is that value is choice-guiding. If one object is more valuable than another, then it is not permitted to choose the less valuable object; and if two objects are equally valuable it is permitted to choose either of them; and if two objects are on a par or belong to the same clump it is also permitted to choose either of them.
We argue that non-epistemic values, including moral ones, play an important role in the construction and choice of models in science and engineering. Our main claim is that non-epistemic values are not only “secondary values” that become important just in case epistemic values leave some issues open. Our point is, on the contrary, that non-epistemic values are as important as epistemic ones when engineers seek to develop the best model of a process or problem. The upshot is that models are neither value-free nor exclusively dependent on epistemic values; nor do they merely use non-epistemic values as tie-breakers.
In this paper we discuss what we believe to be one of the most important features of near-future AIs, namely their capacity to behave in a friendly manner to humans. Our analysis of what it means for an AI to behave in a friendly manner does not presuppose that proper friendships between humans and AI systems could exist. That would require reciprocity, which is beyond the reach of near-future AI systems. Rather, we defend the claim that social AIs should be programmed to behave in a manner that mimics a sufficient number of aspects of proper friendship. We call this “as-if friendship”. The main reason why we believe that “as-if friendship” is an improvement on the current, highly submissive behavior displayed by AIs is the negative effects the latter can have on humans. We defend this view partly on virtue ethical grounds and we argue that the virtue-based approach to AI ethics outlined in this paper, which we call “virtue alignment”, is an improvement on the traditional “value alignment” approach.
In this paper we present two distinctly epistemological puzzles that arise for one who aspires to defend some plausible version of the precautionary principle. The first puzzle involves an application of contextualism in epistemology; and the second puzzle concerns the task of defending a plausible version of the precautionary principle that would not be invalidated by de minimis.
This article discusses some aspects of animal ethics from an Aristotelian virtue ethics point of view. Because the notion of friendship is central to Aristotle’s ethical theory, the focus of the article is whether humans and animals can be friends. It is argued that new empirical findings in cognitive ethology indicate that animals actually do fulfill the Aristotelian condition for friendship based on mutual advantage. The practical ethical implications of these findings are discussed, and it is argued that eating meat from free-living animals is morally more acceptable than eating meat from cattle, because hunters do not befriend their prey.
To consequentialise a moral theory means to account for moral phenomena usually described in nonconsequentialist terms, such as rights, duties, and virtues, in a consequentialist framework. This paper seeks to show that all moral theories can be consequentialised. The paper distinguishes between different interpretations of the consequentialiser’s thesis, and emphasises the need for a cardinal ranking of acts. The paper also offers a new answer as to why consequentialising moral theories is important: This yields crucial methodological insights about how to pursue ethical inquiries.
Cost-benefit analysis is commonly understood to be intimately connected with utilitarianism and incompatible with other moral theories, particularly those that focus on deontological concepts such as rights. We reject this claim and argue that cost-benefit analysis can take moral rights as well as other non-utilitarian moral considerations into account in a systematic manner. We discuss three ways of doing this, and claim that two of them (output filters and input filters) can account for a wide range of rights-based moral theories, including the absolute notions of moral rights proposed by Hayek, Mayo, Nozick, and Shue. We also discuss whether the use of output filters and input filters can be generalized to cover other non-utilitarian theories, such as Kantian duty ethics and virtue ethics.
Two interpretations of the precautionary principle are considered. According to the normative interpretation, the precautionary principle should be characterised in terms of what it urges doctors and other decision makers to do. According to the epistemic interpretation, the precautionary principle should be characterised in terms of what it urges us to believe. This paper recommends against the use of the precautionary principle as a decision rule in medical decision making, based on an impossibility theorem presented in Peterson. However, the main point of the paper is an argument to the effect that decision theoretical problems associated with the precautionary principle can be overcome by paying greater attention to its epistemic dimension. Three epistemic principles inherent in a precautionary approach to medical risk analysis are characterised and defended.
Hare proposes a view he calls prospectism for making choices in situations in which preferences have a common, but problematic structure. I show that prospectism permits the decision-maker to make a series of choices she knows in advance will lead to a sure loss. I also argue that a theory that permits the decision-maker to make choices she knows in advance will lead to a sure loss should be rejected.
We show that in infinite worlds the following three conditions are incompatible: The spatiotemporal ordering of individuals is morally irrelevant. All else being equal, the act of bringing about a good outcome with a high probability is better than the act of bringing about the same outcome with a low probability. One act is better than another only if there is a nonzero probability that it brings about a better outcome. The impossibility of combining these conditions shows that each of them is more costly to endorse than has previously been acknowledged.
It is a natural assumption in mainstream epistemological theory that ascriptions of knowledge of a proposition p track strength of epistemic position vis-à-vis p. It is equally natural to assume that the strength of one’s epistemic position is maximally high in cases where p concerns a simple analytic truth. For instance, it seems reasonable to suppose that one’s epistemic position vis-à-vis “a cat is a cat” is harder to improve than one’s position vis-à-vis “a cat is on the mat”, and consequently, that the former is at least as unambiguous a case of knowledge as the latter. The current paper, however, presents empirical evidence which challenges this intuitive line of reasoning. Our study on the epistemic intuitions of hundreds of academic philosophers supports the idea that simple and uncontroversial analytic propositions are less likely to qualify as knowledge than empirical ones. We show that our results, though at odds with orthodox theories of knowledge in mainstream epistemology, can be explained in a way consistent with Wittgenstein’s remarks on ‘hinge propositions’ or with Stalnaker’s pragmatics of assertion. We then present and evaluate a number of lines of response mainstream theories of knowledge could appeal to in accommodating our results. Finally, we show how each line of response runs into some prima facie difficulties. Thus, our observed asymmetry between knowing “a cat is a cat” and knowing “a cat is on the mat” presents a puzzle which mainstream epistemology needs to resolve.
In a recent paper in this journal, we proposed two novel puzzles associated with the precautionary principle. Both are puzzles that materialise, we argue, once we investigate the principle through an epistemological lens, and each constitutes a philosophical hurdle for any proponent of a plausible version of the precautionary principle. Steglich-Petersen claims, also in this journal, that he has resolved our puzzles. In this short note, we explain why we remain skeptical.
You must either save a group of m people or a group of n people. If there are no morally relevant differences among the people, which group should you save? This problem is known as the number problem. The recent discussion has focussed on three proposals: (i) Save the greatest number of people, (ii) Toss a fair coin, or (iii) Set up a weighted lottery, in which the probability of saving m people is m/(m+n), and the probability of saving n people is n/(m+n). This contribution examines a fourth alternative, the mixed solution, according to which both fairness and the total number of people saved count. It is shown that the mixed solution can be defended without assuming the possibility of interpersonal comparisons of value.
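The weighted lottery in proposal (iii) is easy to operationalise. The sketch below simulates it for invented group sizes; the function name and parameters are ours, not the paper's, and this is only an illustration of the chance mechanism, not of the paper's mixed solution.

```python
import random

def weighted_lottery(m: int, n: int, rng: random.Random) -> str:
    """Save group A (m people) with probability m/(m+n),
    otherwise save group B (n people)."""
    return "A" if rng.random() < m / (m + n) else "B"

# Sanity check: with m=3 and n=1, group A should be saved
# in roughly 75% of trials.
rng = random.Random(42)
trials = 100_000
share_a = sum(weighted_lottery(3, 1, rng) == "A" for _ in range(trials)) / trials
```

With a fixed seed the empirical frequency lands very close to the theoretical m/(m+n); the key contrast with proposal (i) is that the smaller group retains a nonzero chance of being saved.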
The Ethics of Technology: A Geometric Analysis of Five Moral Principles proposes five moral principles for analyzing ethical issues related to engineering and technology. The objections raised by several authors to the multidimensional scaling technique used in the book reveal a lack of familiarity with this widely used technique.
This article addresses Taurek’s much discussed Number Problem from a non-consequentialist point of view. I argue that some versions of the Number Problem have no solution, meaning that no alternative is at least as choice-worthy as the others, and that the best way to behave in light of such moral indeterminacy is to let chance make the decision. I contrast my proposal with F. M. Kamm’s nonconsequentialist argument for saving the greatest number, the Argument for Best Outcomes, which I argue does not follow from the premises it is based on.
Stuart Russell defines the value alignment problem as follows: How can we build autonomous systems with values that “are aligned with those of the human race”? In this article I outline some distinctions that are useful for understanding the value alignment problem and then propose a solution: I argue that the methods currently applied by computer scientists for embedding moral values in autonomous systems can be improved by representing moral principles as conceptual spaces, i.e. as Voronoi tessellations of morally similar choice situations located in a multidimensional geometric space. The advantage of my preferred geometric approach is that it can be implemented without specifying any utility function ex ante.
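The geometric idea can be illustrated with a toy nearest-prototype classifier: assigning each choice situation to its closest prototype implicitly carves the conceptual space into Voronoi cells, one per moral category, with no utility function specified in advance. The dimensions, coordinates, and labels below are invented for illustration and are not taken from the article.

```python
import math

# Hypothetical prototype points ("morally similar choice situations")
# in a 2-D conceptual space; the axes might be read as, e.g.,
# (expected benefit, degree of consent). All values are made up.
PROTOTYPES = {
    "permissible": (0.8, 0.9),
    "impermissible": (0.1, 0.2),
}

def classify(situation: tuple) -> str:
    """Assign a situation to its nearest prototype. The resulting
    decision regions form a Voronoi tessellation of the space."""
    return min(PROTOTYPES, key=lambda k: math.dist(situation, PROTOTYPES[k]))
```

Note that only distances to prototypes are compared; no numerical utilities are attached to outcomes, which is the feature the abstract highlights.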
This concise, well-structured survey examines the problem of evil in the context of the philosophy of religion. One of the core topics in that field, the problem of evil is an enduring challenge that Western philosophers have pondered for almost two thousand years. The main problem of evil consists in reconciling belief in a just and loving God with the evil and suffering in the world. Michael Peterson frames this issue by working through questions such as the following: What is the relation of rational belief to religious faith? What different conceptual moves are possible on either side of the issue? What responses have important thinkers advanced and which seem most promising? Is it possible to maintain religious commitment in light of evil? Peterson relies on the helpful distinction between moral and natural evil to clarify our understanding of the different aspects of the problem as well as avenues for response. The overall format of the text rests on classifying various types of argument from evil: the logical, the probabilistic, the evidential, and the existential arguments. Each type of argument has its own strategy which both theists and nontheists must recognize and develop. Giving both theistic and nontheistic perspectives fair representation, the text works through the issues of whether evil shows theistic belief to be inconsistent, improbable, discredited by the evidence, or threatened by personal crisis. Peterson explains how defensive strategies are particularly geared for responding to the logical and probabilistic arguments from evil while theodicy is an appropriate response to the evidential argument. Theodicy has traditionally been understood as the attempt to justify belief in a God who is all-powerful and all-good in light of evil. The text discusses the theodicies of Augustine, Leibniz, Hick, and Whitehead as enlightening examples of theodicy.
This discussion allows Peterson to identify and evaluate a rather dominant theme in most theodicies: that evil can be justified by designating a greater good. In the end, Peterson even explores how certain types of theodicy, based on specifically Christian renditions of theism, might provide a basis for addressing the existential problem of evil. The reader of this book gains not only an intellectual grasp of the debate over God and evil in professional philosophy but also the personal benefit of thinking through one of the most important issues in human life.
Is it really necessary to add something like the Health Impact Fund to the existing global patent system? We can divide this question into two parts. First, is there something seriously wrong with the status quo and, if so, what exactly is it? Second, how do we best go about solving the problem; that is, how does the design of the reform proposal address the flaws in the status quo? Jorn Sonderholm, in his critique of the Health Impact Fund, or HIF, raises both of these issues. These criticisms afford us the opportunity to reaffirm our commitment to ameliorating glaring problems with the current system for incentivizing R&D for essential medicines, and to clarify why we believe the HIF is exactly what the world needs.
The debate over the civilian use of nuclear power is highly polarised. We argue that a reasonable response to this deep disagreement is to maintain that advocates of both camps should modify their positions. According to the analysis we propose, nuclear power is neither entirely right nor entirely wrong, but rather right and wrong to some degree. We are aware that this non-binary analysis of nuclear power is controversial from a theoretical point of view. Utilitarians, Kantians, and other moral theorists make sharp, binary distinctions between right and wrong acts. However, an important argument speaking in favour of our non-binary analysis is that it better reflects our considered intuitions about the ethical trade-offs we face in discussions of nuclear power. The aim of this article is to make this argument sharp by explaining how it can be rendered compatible with, and supported by, the Capability Approach, which is quickly becoming one of the most influential frameworks for thinking about human development.
This article discusses some ethical principles for distributing pandemic influenza vaccine and other indivisible goods. I argue that a number of principles for distributing pandemic influenza vaccine recently adopted by several national governments are morally unacceptable because they put too much emphasis on utilitarian considerations, such as the ability of the individual to contribute to society. Instead, it would be better to distribute vaccine by setting up a lottery. The argument for this view is based on a purely consequentialist account of morality; i.e. an action is right if and only if its outcome is optimal. However, unlike utilitarians I do not believe that alternatives should be ranked strictly according to the amount of happiness or preference satisfaction they bring about. Even a mere chance to get some vaccine matters morally, even if it is never realized.
In Australia and other countries, certain groups of women have traditionally been denied access to assisted reproductive technologies (ARTs). These typically are single heterosexual women, lesbians, poor women, and those whose ability to rear children is questioned, particularly women with certain disabilities or who are older. The arguments used to justify selection of women for ARTs are most often based on issues such as scarcity of resources, and absence of infertility, or on social concerns: that it “goes against nature”; particular women might not make good mothers; unconventional families are not socially acceptable; or that children of older mothers might be orphaned at an early age. The social, medical, legal, and ethical reasoning that has traditionally promoted this lack of equity in access to ARTs, and whether the criteria used for client deselection are ethically appropriate in any particular case, are explored by this review. In addition, the issues of distribution and just “gatekeeping” practices associated with these sensitive medical services are examined.
This article seeks to contribute to the discussion on the nature of choice in virtue theory. If several different actions are available to the virtuous agent, they are also likely to vary in their degree of virtue, at least in some situations. Yet, it is widely agreed that once an action is recognised as virtuous there is no higher level of virtue. In this paper we discuss how the virtue theorist could accommodate both these seemingly conflicting ideas. We discuss this issue from a modern Aristotelian perspective, as opposed to a purely exegetic one. We propose a way of resolving what seems to be a major clash between two central features of virtue ethics. Our proposal is based on the notion of parity, a concept which recently has received considerable attention in the literature on axiology. Briefly put, two alternatives are on a par (or are ‘roughly equal’) if they are comparable, although it is not the case that one is better than the other, nor that they are equally good. The advantages of applying the concept of parity to our problem are twofold. Firstly, it sheds new light on the account of choice in virtue theory. Secondly, some of the criticisms that have been mounted against the possibility of parity can be countered by considering the notion of choice from a virtue theory perspective.
In this paper we shed new light on the Argument from Disagreement by putting it to the test in a computer simulation. According to this argument, widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by any moral facts, either because no such facts exist or because they are epistemically inaccessible or inefficacious for some other reason. Our simulation shows that if our moral opinions were influenced at least a little bit by moral facts, we would quickly have reached consensus, even if our moral opinions were also affected by factors such as false authorities, external political shifts, and random processes. Therefore, since no such consensus has been reached, the simulation gives us increased reason to take the Argument from Disagreement seriously. Our conclusion is, however, not conclusive; the simulation also indicates what assumptions one has to make in order to reject the Argument from Disagreement. The simulation algorithm we use builds on the work of Hegselmann and Krause (J Artif Soc Social Simul 5(3), 2002; J Artif Soc Social Simul 9(3), 2006).
Purpose: The purpose of this paper is to argue that playing computer games for lengthy periods of time, even in a manner that will force the player to forgo certain other activities normally seen as more important, can be an integral part of human flourishing. Design/methodology/approach: The authors' claim is based on a modern reading of Aristotle's Nicomachean Ethics. It should be emphasized that the authors do not argue that computer gaming and other similar online activities are central to all people under all circumstances, but only seek to show that the claim holds true for some people under some circumstances, and the authors try to spell out the relevant circumstances in detail. Findings: The authors provide a list of situations in which playing computer games for lengthy periods of time, in a manner that will force the player to forgo certain other activities normally seen as more important, is an integral part of human flourishing. Originality/value: The paper puts some novel pressure on the widely-held belief that playing computer games for lengthy periods of time, in a manner that will force the player to forgo certain other activities normally seen as more important, cannot be part of human flourishing. The paper claims that playing some computer games and partaking in some forms of online activities could be highly conducive to what it actually means in practice to take care of oneself and, to paraphrase Aristotle, to be eager for fine actions.
The contention of this paper is that the current ethical debate over embryonic stem cell research is polarised to an extent that is not warranted by the underlying ethical conflict. It is argued that the ethical debate can be rendered more nuanced, and less polarised, by introducing non-binary notions of moral rightness and wrongness. According to the view proposed, embryonic stem cell research—and possibly other controversial activities too—can be considered 'a little bit right and a little bit wrong'. If this idea were to become widely accepted, the ethical debate would, for conceptual reasons, become less polarised.
This excellent anthology in the philosophy of religion examines basic classical issues and a host of contemporary ones in thirteen thematic sections. Assuming little or no familiarity with the religious concepts it addresses, it provides a well-balanced and accessible approach to the field. The articles cover the standard topics in the field, including religious experience, theistic arguments, the problem of evil, and miracles, as well as topics that have gained the attention of philosophers of religion in the last fifteen years, such as reformed epistemology, the philosophical analysis of theological doctrine, and the kalam cosmological argument. The collection also includes topics often requested by instructors but seldom covered in competing texts, such as religion and science, religious pluralism, process theism, and religious ethics, offering greater flexibility in choosing exact topics for use in courses. The format of the book makes it an ideal teaching text, as each section begins with a brief introduction to the central topic or issue treated by the readings that follow. Each reading is preceded by a one-paragraph summary, and a bibliography of suggested readings follows each section. Philosophy of Religion functions well as a stand-alone textbook for courses in the philosophy of religion, and also works readily as a primary-source reader used in conjunction with a secondary text. It is an ideal companion to Reason and Religious Belief, 2e (OUP, 1997).