This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; moral and legal responsibility; and decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
The growth of self-tracking and personal surveillance has given rise to the Quantified Self movement. Members of this movement seek to enhance their personal well-being, productivity, and self-actualization through the tracking and gamification of personal data. The technologies that make this possible can also track and gamify aspects of our interpersonal, romantic relationships. Several authors have begun to challenge the ethical and normative implications of this development. In this article, we build upon this work to provide a detailed ethical analysis of the Quantified Relationship. We identify eight core objections to the QR and subject them to critical scrutiny. We argue that although critics raise legitimate concerns, there are ways in which tracking technologies can be used to support and facilitate good relationships. We thus adopt a stance of cautious openness toward this technology and advocate the development of a research agenda for the positive use of QR technologies.
In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, we identify important ethical challenges raised by each of these three possible strategies for achieving optimized human-robot coordination in this domain. Among other things, we argue that we should not just explore ways of making robotic driving more like human driving. Rather, we ought also to take seriously potential ways of making human driving more like robotic driving. Nor should we assume that complete automation is always the ideal to aim for; in some traffic-situations, the best results may be achieved through human-robot collaboration. Ultimately, our main aim in this paper is to argue that the new field of the ethics of automated driving needs to take seriously the ethics of mixed traffic and responsible human-robot coordination.
One of the topics that often comes up in ethical discussions of deep brain stimulation (DBS) is the question of what impact DBS has, or might have, on the patient’s self. This is often understood as a question of whether DBS poses a “threat” to personal identity, which is typically understood as having to do with psychological and/or narrative continuity over time. In this article, we argue that the discussion of whether DBS is a “threat” to continuity over time is too narrow. There are other questions concerning DBS and the self that are overlooked in discussions exclusively focusing on psychological and/or narrative continuity. For example, it is also important to investigate whether DBS might sometimes have a positive (e.g. a rehabilitating) effect on the patient’s self. To widen the discussion of DBS, so as to make it encompass a broader range of considerations that bear on DBS’s impact on the self, we identify six features of the commonly used concept of a person’s “true self”. We apply these six features to the relation between DBS and the self. And we end with a brief discussion of the role DBS might play in treating otherwise treatment-refractory anorexia nervosa. This further highlights the importance of discussing both continuity over time and the notion of the true self.
The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally, we canvass the most influential existing literature on the ethics of sex with robots.
It is widely recognized that lives and activities can be meaningful or meaningless, but few have appreciated that they can also be anti-meaningful. Anti-meaning is the polar opposite of meaning. Our purpose in this essay is to examine the nature and importance of this new and unfamiliar topic. In the first part, we sketch four theories of anti-meaning that correspond to leading theories of meaning. In the second part, we argue that anti-meaning has significance not only for our attempts to theorize about meaning in life, but also for our ability to lead meaningful lives in the modern world.
There has been a long history of arguments over whether happiness is anything more than a particular set of psychological states. On one side, some philosophers have argued that there is not, endorsing a descriptive view of happiness. Affective scientists have also embraced this view and are reaching a near consensus on a definition of happiness as some combination of affect and life-satisfaction. On the other side, some philosophers have maintained an evaluative view of happiness, on which being happy involves living a life that is normatively good. Within the context of this debate we consider how people ordinarily understand happiness, and provide evidence that the ordinary understanding of happiness reflects aspects of both evaluative and descriptive views. Similar to evaluative views, normative judgments have a substantive role in the ordinary understanding of happiness. Yet, similar to descriptive views, the ordinary understanding is focused on the person’s psychological states and not the overall life they actually lived. Combining these two aspects, we argue that the ordinary understanding of happiness suggests a novel view on which happiness consists in experiencing positive psychological states when one ought to. This view, if right, has implications for both philosophical and psychological research on happiness.
The concept of meaningful work has recently received increased attention in philosophy and other disciplines. However, the impact of the increasing robotization of the workplace on meaningful work has received very little attention so far. Doing work that is meaningful leads to higher job satisfaction and increased worker well-being, and some argue for a right to access to meaningful work. In this paper, we therefore address the impact of robotization on meaningful work. We do so by identifying five key aspects of meaningful work: pursuing a purpose, social relationships, exercising skills and self-development, self-esteem and recognition, and autonomy. For each aspect, we analyze how the introduction of robots into the workplace may diminish or enhance the meaningfulness of work. We also identify a few ethical issues that emerge from our analysis. We conclude that robotization of the workplace can have both significant negative and positive effects on meaningful work. Our findings about ways in which robotization of the workplace can be a threat or opportunity for meaningful work can serve as the basis for ethical arguments for how to—and how not to—implement robots into workplaces.
Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about objectifying attitudes. And mutual love between humans and sex-robots, if possible, could also help to make this sex more valuable. But is mutual love between humans and robots possible, or even conceivable? We discuss three clusters of ideas and associations commonly discussed within the philosophy of love, and relate these to the topic of whether mutual love could be achieved between humans and sex-robots: (i) the idea of love as a “good match”; (ii) the idea of valuing each other in our distinctive particularity; and (iii) the idea of a steadfast commitment. We consider relations among these ideas and the sort of agency and free will that we attribute to human romantic partners. Our conclusion is that mutual love between humans and advanced sex-robots is not an altogether impossible proposition. However, it is unlikely that we will be able to create robots sophisticated enough to be able to participate in love-relationships anytime soon.
Rapid advances in AI-based automation have led to a number of existential and economic concerns. In particular, as automating technologies develop enhanced competency they seem to threaten the values associated with meaningful work. In this article, we focus on one such value: the value of achievement. We argue that achievement is a key part of what makes work meaningful and that advances in AI and automation give rise to a number of achievement gaps in the workplace. This could limit people’s ability to participate in meaningful forms of work. Achievement gaps are interesting, in part, because they are the inverse of the (negative) responsibility gaps already widely discussed in the literature on AI ethics. Having described and explained the problem of achievement gaps, the article concludes by identifying four possible policy responses to the problem.
The so-called Disability Paradox arises from the apparent tension between the popular view that disability leads to low well-being and the relatively high life-satisfaction reports of disabled people. Our aim in this essay is to make some progress toward dissolving this alleged paradox by exploring the relationship between disability and various “goods of life”—that is, components of a life that typically make a person’s life go better for her. We focus on four widely recognized goods of life (happiness, rewarding relationships, knowledge, achievement) and four common types of disability (sensory, mobility, intellectual, and social) and systematically examine the extent to which the four disability types are in principle compatible with obtaining the four goods of life. Our findings suggest that there is a high degree of compatibility. This undermines the widespread view that disabilities, by their very nature, substantially limit a person’s ability to access the goods of life, and it provides some guidance on how to dissolve the Disability Paradox.
In this paper, we engage in dialogue with Jonathan Pugh, Hannah Maslen, and Julian Savulescu about how to best interpret the potential impacts of deep brain stimulation on the self. We consider whether ordinary people’s convictions about the true self should be interpreted in essentialist or existentialist ways. Like Pugh et al., we argue that it is useful to understand the notion of the true self as having both essentialist and existentialist components. We also consider two ideas from existentialist philosophy – Jean-Paul Sartre and Simone de Beauvoir’s ideas about “bad faith” and “ambiguity” – to argue that there can be value to patients in regarding themselves as having a certain amount of freedom to choose what aspects of themselves should be considered representative of their true selves. Lastly, we consider the case of an anorexia nervosa patient who shifts between conflicting mind-sets. We argue that mind-sets in which it is easier for the patient and his or her family to share values can plausibly be considered to be more representative of the patient’s true self, if this promotes a well-functioning relationship between the patient and the family. However, we also argue that families are well-advised to give patients room to figure out what such shared values mean to them, since it can be alienating for patients if they feel that others try to impose values on them from the outside.
The absence of meaningfulness in life is meaninglessness. But what is the polar opposite of meaningfulness? In recent and ongoing work together with Stephen Campbell and Marcello di Paola respectively, I have explored what we dub ‘anti-meaning’: the negative counterpart of positive meaning in life. Here, I relate this idea of ‘anti-meaningful’ actions, activities, and projects to the topic of death, and in particular the deaths or suffering of those who will live after our own deaths. Connecting this idea of anti-meaning and what happens after our own deaths to recent work by Samuel Scheffler on what he calls ‘the collective afterlife’ and his four reasons to care about future generations, I argue that if we today make choices or have lifestyles that later lead to unnecessarily early deaths and otherwise avoidable suffering of people who will live after we have died, this robs our current choices and lifestyles of some of their meaning, perhaps even making them the opposite of meaningful in the long run.
Would a “medicalization” of love be a “good” or “bad” form of medicalization? In discussing this question, Earp, Sandberg, and Savulescu primarily focus on the potential positive and negative consequences of turning love into a medical issue. But it can also be asked whether there is something intrinsically regrettable about medicalizing love. It is argued here that the medicalization of love can be seen as an “evaluative category mistake”: it treats a core human value as if it were mainly a means to other ends. It is also argued that Earp et al.’s closing argument can be seen as involving another evaluative category mistake: it treats an object of desire and practical interest as if it mainly were an object of scientific contemplation and theoretical interest. It is concluded that, to relate love to health and well-being in a more satisfying way, we should construe the latter two in broader ways, whereby love is itself a component or element of human flourishing.
This paper discusses the robotization of the workplace, and particularly the question of whether robots can be good colleagues. This might appear to be a strange question at first glance, but it is worth asking for two reasons. Firstly, some people already treat robots they work alongside as if the robots are valuable colleagues. It is worth reflecting on whether such people are making a mistake. Secondly, having good colleagues is widely regarded as a key aspect of what can make work meaningful. In discussing whether robots can be good colleagues, the paper compares that question to the more widely discussed questions of whether robots can be our friends or romantic partners. The paper argues that the ideal of being a good colleague has many different parts, and that on a behavioral level, robots can live up to many of the criteria typically associated with being a good colleague. Moreover, the paper also argues that in comparison with the more demanding ideals of being a good friend or a good romantic partner, it is comparatively easier for a robot to live up to the ideal of being a good colleague. The reason for this is that the “inner lives” of our friends and lovers are more important to us than the inner lives of our colleagues.
This article provides a comprehensive overview of the main ethical issues related to the impact of Artificial Intelligence on human society. AI is the use of machines to do things that would normally require human intelligence. In many areas of human life, AI has rapidly and significantly affected human society.
Drawing on insights from robotics, psychology, and human-computer interaction, developers of sex robots are currently aiming to create emotional bonds of attachment and even love between human users and their products. This is done by creating robots that can exhibit a range of facial expressions, that are made with human-like artificial skin, and that possess a rich vocabulary with many conversational possibilities. In light of the human tendency to anthropomorphize artefacts, we can expect that designers will have some success and that this will lead to the attribution of mental states to the robot that the robot does not actually have, as well as the inducement of significant emotional responses in the user. This raises the question of whether it might be ethically problematic to try to develop robots that appear to love their users. We discuss three possible ethical concerns about this aim: first, that designers may be taking advantage of users’ emotional vulnerability; second, that users may be deceived; and, third, that relationships with robots may block off the possibility of more meaningful relationships with other humans. We argue that developers should attend to the ethical constraints suggested by these concerns in their development of increasingly humanoid sex robots. We discuss two different ways in which they might do so.
Kantians are increasingly deserting the universal law formula in favor of the humanity formula. The former, they argue, is open to various decisive objections; the two are not equivalent; and it is only by appealing to the humanity formula that Kant can reliably generate substantive implications from his theory of an acceptable sort. These assessments of the universal law formula, which clash starkly with Kant's own assessment of it, are based on various widely accepted interpretative assumptions. These assumptions, it is argued in this article, depend on misleading translations of key terms; selective attention to Kant's concrete examples; not taking seriously Kant's theoretical claims about the relations among his various ideas; and a failure to take into account Kant's idiosyncratic definitions of key concepts. The article seeks to right these interpretative wrongs, and finds that the universal law formula is not open to many of the standard objections.
Writers like Christine Korsgaard and Allen Wood understand Kant's idea of rational nature as an end in itself as a commitment to a substantive value. This makes it hard for them to explain the supposed equivalence between the universal law and humanity formulations of the categorical imperative, since the former does not appear to assert any substantive value. Nor is it easy for defenders of value-based readings to explain Kant's claim that the law-giving nature of practical reason makes all beings with practical reason regard the idea of a rational nature as an end in itself. This article seeks to replace these value-based readings with a reading of the idea of rational nature as an end that fits better with the overall argument of the Groundwork.
Stephen Campbell, Connie Ulrich, and Christine Grady argue that we need a broader understanding of moral distress – broader, that is, than the one commonly used within nursing ethics and, more recently, healthcare ethics in general. On their proposed definition, moral distress is any self-directed negative attitude we might have in response to viewing ourselves as participating in a morally undesirable situation. While being in general agreement with much of what Campbell et al. say, I make two suggestions. First, in order to distinguish moral distress that is specifically related to the roles and responsibilities of healthcare-workers from other kinds of moral distress, it would be useful for the broadened definition to contain an explicit reference to the distinctive situation and challenges faced by healthcare-workers. Second, whereas Campbell et al. write in a manner that suggests that there is very little that is positive or redeeming about moral distress, we should also ask if there is anything morally good about such distress. I suggest that the disposition to respond with moral distress to situations that call for it can plausibly be seen as a virtue on the part of healthcare-workers. The moral value of responses of appropriate moral distress is positive (because it is a display of virtue on the part of the healthcare-worker), whereas the state of affairs that makes moral distress called for is bad and regrettable.
The literature on ethics and user attitudes towards AVs discusses user concerns in relation to automation; however, we show that there are additional relevant issues at stake. To assess adolescents’ attitudes regarding the ‘car of the future’ as presented by car manufacturers, we conducted two studies with over 400 participants altogether. We used a mixed methods approach in which we combined qualitative and quantitative methods. In the first study, our respondents appeared to be more concerned about other aspects of AVs than automation. Instead, their most commonly raised concerns were the extensive use of AI, recommender systems, and related issues of autonomy, invasiveness and personal privacy. The second study confirmed that several AV impacts were negatively perceived. The responses were, however, ambivalent. This confirms previous research on AV attitudes. On one hand, the AV features were perceived as useful, while on the other hand, their impacts were negatively assessed. We followed theoretical insights from futures studies and responsible research and innovation, which helped to identify that there are additional user concerns beyond what has been previously discussed in the literature on public attitudes and ethics of AVs, as well as what has been envisioned by car manufacturers.
The concept of a digital twin comes from engineering. It refers to a digital model of an artefact in the real world, which takes data about the artefact itself, data about other such artefacts, among other things, as inputs. The idea is that the maintenance of artefacts—such as jet engines—can be vastly improved if we work with digital twins that simulate actual objects. Similarly, personalised medicine might benefit from the digital modelling of body parts or even whole human bodies. A medical digital twin could use data about the patient, more general population data, and other inputs to generate predictions about the patient. This could lead to highly personalised interventions and nuanced judgments about the patient’s health. Matthias Braun discusses this intriguing prospect, asking how we should think about the way in which a digital twin could represent a patient. I will respond to Braun’s striking suggestion that we can regard a digital twin as an extension of the patient’s body. Notably, Braun does not compare his just-mentioned idea with the extended mind thesis popularised by Andy Clark and David Chalmers. But I am sure many readers will be reminded of the extended mind thesis. Accordingly, I will consider this comparison. I cannot discuss this comparison in detail, nor fully evaluate Braun’s suggestion. But I can say something about how we might approach this comparison, and provide some ….
This book offers new readings of Kant’s “universal law” and “humanity” formulations of the categorical imperative. It shows how, on these readings, the formulas do indeed turn out to be alternative statements of the same basic moral law, and in the process responds to many of the standard objections raised against Kant’s theory. Its first chapter briefly explores the ways in which Kant draws on his philosophical predecessors such as Plato (and especially Plato’s Republic) and Jean-Jacques Rousseau. The second chapter offers a new reading of the relation between the universal law and humanity formulas by relating both of these to a third formula of Kant’s, viz. the “law of nature” formula, and also to Kant’s ideas about laws in general and human nature in particular. The third chapter considers and rejects some influential recent attempts to understand Kant’s argument for the humanity formula, and offers an alternative reconstruction instead. Chapter four considers what it is to flourish as a human being in line with Kant’s basic formulas of morality, and argues that the standard readings of the humanity formula cannot properly account for its relation to Kant’s views about the highest human good.
In fascinating recent work, Julian Savulescu and his various co-authors argue that human love is one of the things we can improve upon using biomedical enhancements. Is that so? This article first notes that Savulescu and his co-authors mainly treat love as a means to various other goods. Love, however, is widely regarded as an intrinsic good. To investigate whether enhancements can produce the distinctive intrinsic good of love, this article does three things. Drawing on Philip Pettit's recent discussion of ‘robustly demanding goods’, it asks what exactly we intrinsically desire in seeking love; it considers four possible outcomes involving attachment-enhancements and attachments; and it considers two different pieces of news we might receive about our lovers' attachment to us (that it is, or that it is not, sustained with the help of enhancement-technologies). Enhancement-sustained attachment, it is concluded, is less desirable than the intrinsic good of love.
In a recent article, Sabine Müller, Merlin Bittlinger, and Henrik Walter launch a sweeping attack against what they call the "personal identity debate" as it relates to patients treated with deep brain stimulation (DBS). In this critique offered by Müller et al., the so-called personal identity debate is said to: (a) be metaphysical in a problematic way, (b) constitute a threat to patients, and (c) use "vague" and "contradictory" statements from patients and their families as direct evidence for metaphysical theories. In this response, I critically evaluate Müller et al.'s argument, with a special focus on these three just-mentioned aspects of their discussion. My conclusion is that Müller et al.'s overall argument is problematic. It overgeneralizes criticisms that may apply to some, but certainly not to all, contributions to what they call the personal identity debate. Moreover, it rests on a problematic conception of what much of this debate is about. Nor is Müller et al.'s overall argument fair in its assessment of the methodology used by most participants in the debate. For these reasons, we should be skeptical of Müller et al.'s claim that the "personal identity debate" is a "threat to neurosurgical patients".
John Harris discusses the problem of other minds, not as it relates to other human minds, but rather as it relates to artificial intelligences. He also discusses what might be called bilateral mind-reading: humans trying to read the minds of artificial intelligences and artificial intelligences trying to read the minds of humans. Lastly, Harris discusses whether super intelligent AI – if it could be created – should be afforded moral consideration, and also how we might convince super intelligent AI that we ourselves should be treated with moral consideration. In this commentary, I discuss these issues brought up by Harris. I focus specifically on robots (rather than AI in general), and I set aside future super intelligent AI to instead focus on more limited forms of AI. I argue that the human tendency to attribute minds even to robots with very limited AI and whether such robots should be given moral consideration are more pressing issues than those that Harris discusses, even though I certainly agree with Harris that the potential for super intelligent AI is a fascinating topic to speculate about.
How might emerging and future technologies—sex robots, love drugs, anti-love drugs, or algorithms to track, quantify, and ‘gamify’ romantic relationships—change how we understand and value love? We canvass some of the main ethical worries posed by such technologies, while also considering whether there are reasons for “cautious optimism” about their implications for our lives. Along the way, we touch on some key ideas from the philosophies of love and technology.
Hübner and White argue that we should not administer DBS to psychopathic prisoners. While we are sympathetic to their conclusion, we argue that the authors’ two central arguments for this conclusion are problematic. Their first argument appeals to an overly restrictive conception of individual medical benefit: namely, that an individual medical benefit must alleviate subjective suffering. We highlight cases that clearly constitute individual medical benefits although there is no relief of subjective suffering. The second argument depends on an overly restrictive conception of the sort of motivation needed to ground consent to a medical procedure. It is also too quick in treating it as unproblematic to regard psychopaths as fully competent. We argue that this view overlooks certain kinds of internal motivation. It also overlooks the possibility that, after successful activation of underactive brain regions, a former psychopath might become a better representative of his or her “true self.”
In Just Freedom, Pettit presents a powerful new statement and defense of the traditional “republican” conception of liberty or freedom. And he claims that freedom can serve as an ecumenical value with broad appeal, which we can put at the basis of a distinctively republican theory of justice. That is, Pettit argues that this “conception of freedom as non-domination allows us to see all issues of justice as issues, ultimately, of what freedom demands.” It is not, however, clear that liberty is the only value that Pettit (a) actually appeals to and (b) should be appealing to. He seems to be as much a defender of relational equality and legal dignity as he is a defender of liberty. And he must either (it seems) make the implausible claim that the basic requirements of justice only apply to able-minded adults, or else admit that justice at bottom consists in something wider than just securing liberty as non-domination for all able-minded adults. For by his own admission, Pettit’s theory of justice as republican freedom “ignores issues of justice in relation to children and the intellectually disabled.” It would be better to say, therefore, that the promotion of freedom as non-domination constitutes one, but not the only, requirement of justice.
Philip Pettit has identified some interesting apparent commonalities among core human values like love, friendship, virtue, and respect. These are all, Pettit argues, ‘robustly demanding’: they require us to provide certain benefits across ranges of alternative scenarios. Pettit also suggests a general ‘rationale’ for valuing such goods, which draws on his work on freedom. In this paper, I zoom in on love in particular. I critically assess whether Pettit’s schematic account of love’s value adequately captures what we typically value in valuing love. And I scrutinize the analogy Pettit suggests between the rationale for valuing freedom and his rationale for valuing love. My conclusion is that whereas Pettit’s account of love and its value does not strictly speaking contain false propositions, it ends up being a somewhat skewed account of love’s value. Finally, I bring up some widely discussed aspects of love’s value not captured by Pettit’s account.
It is commonly thought that on Kant’s view of action, ‘everyone always acts on maxims.’ Call this the ‘descriptive reading.’ This reading faces two important problems: first, the idea that people always act on maxims offends against common sense: it clashes with our ordinary ideas about human agency. Second, there are various passages in which Kant says that it is ‘rare’ and ‘admirable’ to firmly adhere to a set of basic principles that we adopt for ourselves. This article offers an alternative: the ‘normative reading.’ On this reading, it is a normative ideal to adopt and act on maxims: it is one of the things Kant thinks we would do if our reason were fully in control of our decision-making.
When Jennifer Blumenthal-Barby was a bioethics intern at the Cleveland Clinic while she was still a graduate student, she was puzzled by the decision making of some patients at the clinic. For exam...
Whereas the universal law formula says to choose one’s basic guiding principles (or “maxims”) on the basis of their fitness to serve as universal laws, the humanity formula says to always treat the humanity in each person as an end, and never as a means only. Commentators and critics have been puzzled by Kant’s claims that these are two alternative statements of the same basic law, and have raised various objections to Kant’s suggestion that these are the most basic formulas of a fully justified human morality. This dissertation offers new readings of these two formulas, shows how, on these readings, the formulas do indeed turn out to be alternative statements of the same basic moral law, and in the process responds to many of the standard objections raised against Kant’s theory. Its first chapter briefly explores the ways in which Kant draws on his philosophical predecessors such as Plato (and especially Plato’s Republic) and Jean-Jacques Rousseau. The second chapter offers a new reading of the relation between the universal law and humanity formulas by relating both of these to a third formula of Kant’s, the “Law of Nature” formula, and also to Kant’s ideas about laws in general and human nature in particular. The third chapter considers and rejects some influential recent attempts to understand Kant’s argument for the humanity formula, and offers an alternative reconstruction instead. Chapter four considers what it is to flourish as a human being in line with Kant’s basic formulas of morality, and argues that the standard readings of the humanity formula cannot properly account for its relation to Kant’s views about the highest human good.
Persson argues that common sense morality involves various “asymmetries” that don’t stand up to rational scrutiny. (One example is that intentionally harming others is commonly thought to be worse than merely allowing harm to happen, even if the harm involved is equal in both cases.) A wholly rational morality would, Persson argues, be wholly symmetrical. He also argues, however, that when we get down to our most basic attitudes and dispositions, we reach the “end of reason,” at which point we simply must accept our basic attitudes and dispositions as given, or as being beyond rational criticism. Since many of the “asymmetries” in our moral attitudes that Persson argues against depend on our most basic dispositions, his own overall framework implies that these asymmetries in our moral attitudes and dispositions are beyond rational criticism, and that we must simply accept them as given elements of human life. Persson therefore seemingly faces a choice: either he revises his view of the reach of reason, or else he must scale back his claims about the degree to which our most basic moral attitudes are proper subjects of rational criticism.