Autonomous and automatic weapons would be fire-and-forget: you activate them, and they decide who, when and how to kill; or they kill at a later time a target you've selected earlier. Some argue that this sort of killing is always wrong. If killing is to be done, it should be done only under direct human control. (E.g., Mary Ellen O'Connell, Peter Asaro, Christof Heyns.) I argue that there are surprisingly many kinds of situation where this is false and where the use of Automated Weapons Systems would in fact be morally required. These include cases where a) once one has activated a weapon expected then to behave lethally, it would be appropriate to let it continue because this is part of a plan whose goodness one was best positioned to evaluate before activating the weapon; b) one expects better long-term consequences from allowing it to continue; c) allowing it to continue would express a decision you made to be resolute, a decision that could not have advantaged you had it not been true that you would carry through with it; d) the weapon is mechanically not recallable, so that, to not allow it to carry through, you would have had to refrain from activating it in the first place, something you expected would have disastrous consequences; e) you must deputize necessary killings to autonomous machines in order to protect yourself from guilt you shouldn't have to bear; f) it would be morally better for the burden of responsibility for the killing to be shared among several agents, and the agents deputizing killing to machines can do this, especially where it's not predictable which machine will be successful; g) a killing would be morally better done with elements of randomness and lack of deliberation, and a (relatively stupid) machine could do this where a person could not; h) the machine would be acting as a Doomsday Device, so that it could not have had its hoped-for deterrent effect had you not ensured that you would be unable to recall it if enemy action activated it; i) letting it carry through is a necessary part of its own learning process, and you expect that this learning will have salutary effects later on; j) human intervention in the machine's operation would disastrously impair its precision, or its speed and efficiency; k) using non-automated methods would require human resources you just don't have for a task that nevertheless must be done (e.g., using land-mines to protect remote installations); l) the weapon has such horrible and indiscriminate power that it is doubtful whether it could actually be used in ways compatible with International Humanitarian Law and the Laws of War, which require that weapons be used only in ways respecting distinction, necessity and proportionality, but its threat of use could respect these principles by affording deterrence, provided human error cannot lead to its accidental deployment, this requiring that it be controlled by carefully designed autonomous and automatic systems. I then consider objections based on conceptions of human dignity and find that very often dignity too is best served by autonomous machine killing.
Examples include saving your village by activating a robot to kill invading enemies who would inflict great indignity on your village, using a suicide robot to save yourself from a less dignified death at enemy hands, using a robotic drone to kill someone otherwise not accessible in order to restore dignity to someone this person killed and to that person's family, and using a robot to kill someone who needs killing, but whose killing by a human executioner would soil the executioner's dignity. I conclude that what matters in rightful killing isn't necessarily that it be under the direct control of a human, but that it be under the control of morality; and that could sometimes require use of an autonomous or automated device. (This paper was formerly called "Fire and Forget: A Defense of the Use of Autonomous Weapons in War" on Philpapers; the current title is the title of the published version.)
I argue that Gauthier's constrained-maximizer rationality is problematic. But standard Maximizing Rationality means one's preferences are only rational if it would not maximize on them to adopt new ones. In the Prisoner's Dilemma, it maximizes to adopt conditionally cooperative preferences. (These are detailed, with a view to avoiding problems of circularity of definition.) Morality then maximizes. I distinguish the roles played in rational choices and their bases by preferences, dispositions, moral and rational principles, the aim of rational action, and rational decision rules. I argue that Maximizing Rationality necessarily structures conclusive reasons for action. Thus conations of any sort can base rational choices only if the conations are structured like a coherent preference function; rational actions maximize on such functions. Maximization-constraining dispositions cannot integrate into a coherent preference function.
The indexical thesis says that the indexical terms, "I", "here" and "now" necessarily refer to the person, place and time of utterance, respectively, with the result that the sentence, "I am here now" cannot express a false proposition. Gerald Vision offers supposed counter-examples: he says, "I am here now", while pointing to the wrong place on a map; or he says it in a note he puts in the kitchen for his wife so she'll know he's home even though he's gone upstairs for a nap, but then he leaves the house, forgetting to remove the note. The first sentence is false by virtue of "here" not necessarily referring to the place of utterance, the second sentence, by virtue of "now" not necessarily referring to the time of utterance. We argue that these sentences express falsehoods only because the terms are being used demonstratively, not indexically – the distinction pertains not to words simpliciter, but to uses of words. When used indexically, the terms refer in accord with the indexical thesis; but when used demonstratively, their referents depend on how devices of ostension are used with their utterance – pointings, and the like. Thus Vision's first sentence really says, "I am there now", referring to the place on the map the finger is pointing to. As for his second sentence, we distinguish the time of utterance or production of a sentence from the time of its uptake. Due to the pragmatics of interpretation, the sentence really says "I" – the person 'uttering' the note – "am here" – here where the note is, with the note serving as a kind of proxy 'finger' – "now" – where "now" refers to the time of uptake of the note, i.e., when it is read. "I" refers indexically, "here", demonstratively, and "now", indexically, but indexically to the time of uptake. Since the sentence is not purely indexical, its falsehood doesn't threaten the indexical thesis. A similar treatment is given of teletyped messages about the typer's location.
While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law; so there is not much that is philosophically challenging in having robots be some of these agents — excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules inherently uncontroversial in each application; and I consider the prospects for robotizing law on each conception. Likewise for the prospects of robotizing moral theorizing and moral decision-making. Finally I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only upon being able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.
To the normal reasons that we think can justify one in preferring something, x (namely, that x has objectively preferable properties, or has properties that one prefers things to have, or that x's obtaining would advance one's preferences), I argue that it can be a justifying reason to prefer x that one's very preferring of x would advance one's preferences. Here, one prefers x not because of the properties of x, but because of the properties of one's having the preference for x. Revising one's preferences in this way is rational in paradoxical choice situations like Kavka's Deterrence Paradox. I then try to meet the following objections: that this is stoicist, incoherent, or in bad faith; that it conflates instrumental and intrinsic value, gives wrong solutions to the problems presented by paradoxical choice situations, entails vicious regresses of value justification, falsifies value realism, makes valuing x unresponsive to x's properties, causes value conflict, conflicts with other standards of rationality, violates decision theory, counsels immorality, makes moral paradox, treats value change as voluntary, conflates first- and second-order values, is psychologically unrealistic, and wrongly presumes that paradoxical choice situations can even occur.
David Gauthier claims that it can be rational to co-operate in a prisoner's dilemma if one has adopted a disposition constraining oneself from maximizing one's individual expected utility, i.e., a constrained maximizer disposition. But I claim co-operation cannot be both voluntary and constrained. In resolving this tension I ask what constrained maximizer dispositions might be. One possibility is that they are rationally acquired, irrevocable psychological mechanisms which determine but do not rationalize co-operation. Another possibility is that they are rationally acquired preference-functions rationalizing co-operation as maximizing. I argue that if they are the first thing, then their adoption fails to make co-operation rational even if, as Gauthier also claims, actions are rational if they express rational dispositions. I then suggest that taking constrained maximizer dispositions to be things of the second sort would result in their being able to make co-operation rational, and that so taking them therefore serves the bulk and spirit of Gauthier's larger claims, which I reconstruct accordingly.
Theories of practical rationality say when it is rational to form and fulfill intentions to do actions. David Gauthier says the correct theory would be the one whose being obeyed would best advance the aim of rationality, something Humeans take to be the satisfaction of one's desires. I use this test to evaluate the received theory and Gauthier's 1984 and 1994 theories. I find problems with the theories and then offer a theory superior by Gauthier's test and immune to the problems. On this theory, it is rational to treat something different as the aim when doing so would advance the original aim. I argue that the idea that this would be irrational bad faith entails contradictions and so is false, as must be theories saying that rationally we must always treat as the aim the bringing about of objectively good states of affairs or the obeying of a universalizable moral code. (Note: the published version differs somewhat from the version on the website of the Center for Ethics and the Rule of Law; please quote from the published version.)
David Gauthier thinks agents facing a prisoner's dilemma ('pd') should find it rational to dispose themselves to co-operate with those inclined to reciprocate (i.e., to acquire a constrained maximizer--'cm'--disposition), and to co-operate with other 'cmers'. Richmond Campbell argues that since dominance reasoning shows it remains to the agent's advantage to defect, his co-operation is only rational if cm "determines" him to co-operate, forcing him not to cheat. I argue that if cm "forces" the agent to co-operate, he is not acting at all, never mind rationally. Thus, neither author has shown that co-operation is rational action in a pd.
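For readers who want the dominance point Campbell invokes made concrete, here is a minimal sketch in Python; the payoff numbers are invented purely for illustration and are not taken from either author.

```python
# Hypothetical one-shot Prisoner's Dilemma payoffs (first element: my move, second: other's move).
# Any numbers with temptation > reward > punishment > sucker would do.
payoff = {
    ("defect",    "cooperate"): 3,  # temptation: I exploit a co-operator
    ("cooperate", "cooperate"): 2,  # reward: mutual co-operation
    ("defect",    "defect"):    1,  # punishment: mutual defection
    ("cooperate", "defect"):    0,  # sucker: I am exploited
}

# Dominance reasoning: whatever the other agent does, defecting pays me more.
for others_move in ("cooperate", "defect"):
    assert payoff[("defect", others_move)] > payoff[("cooperate", others_move)]

# Yet both agents do better under mutual co-operation than under mutual defection,
# which is the gap Gauthier's constrained maximization is meant to close.
assert payoff[("cooperate", "cooperate")] > payoff[("defect", "defect")]
print("defection dominates, but mutual co-operation is jointly better")
```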
If one can get the targets of one's current wants only by acquiring new wants (as in the Prisoner's Dilemma), is it rational to do so? Arguably not. For this could justify adopting unsatisfiable wants, violating the rational duty to maximize one's utility. Further, why cause a want's target if one will not then want it? And people "are" their wants. So if these change, people will not survive to enjoy their wants' targets. I reply that one rationally need not advance one's future wants, only current ones. Furthermore, rational choice seeks not utility (the co-obtaining of a want and its target), but satisfaction (the eventual obtaining of what is now wanted) -- otherwise, it would be irrational to care now about what happens after one dies. Finally, persons survive "rational" changes of values. Thus reflection on the rational revision of values illuminates the conditions on personal identity and the bases and aims of rational choice.
Chrisoula Andreou says procrastination qua imprudent delay is modeled by Warren Quinn's self-torturer, who supposedly has intransitive preferences that rank each indulgence in something that delays his global goals over working toward those goals, and who finds it vague where best to stop indulging. His pair-wise choices to indulge result in his failing the goals, which he then regrets. This chapter argues, contra the money-pump argument, that it is not irrational to have or choose from intransitive preferences; so the agent's delays are not imprudent, not instances of procrastination. Moreover, the self-torturer case is intelligible only if there is no vagueness and if the agent's preferences are transitive. But then he would delay only from ordinary weakness of will. And when it is vague where best to stop indulging, rational agents would use symmetry-breaking techniques; so, again, any procrastination would be explained by standard weakness of will, not vagueness.
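As background for the money-pump argument the chapter resists, here is a small sketch; the goods, the fee, and the trading rule are my own illustrative assumptions, not Andreou's or Quinn's.

```python
# Hypothetical money pump: an agent with cyclic (intransitive) preferences
# prefers A to B, B to C, and C to A, and will pay a small fee for any trade up.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (preferred, dispreferred) pairs
fee = 1

holding, money = "C", 10
for offered in ("B", "A", "C"):           # a cycle of offers that ends where it began
    if (offered, holding) in prefers:     # the agent trades whenever she prefers the offer
        holding, money = offered, money - fee

# She ends up holding "C" again, 3 units poorer -- the alleged mark of irrationality
# that the money-pump argument pins on intransitive preferences.
print(holding, money)
```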
This chapter criticizes several methods of responding to the techniques foreign powers are widely acknowledged to be using to subvert U.S. elections. It suggests that countries do this when they have a legitimate stake in each other's political deliberations, but no formal voice in them. It also suggests that if they accord each other such a voice, they will engage as co-deliberators with arguments, rather than trying to undermine each other's deliberative processes; and that this will be salutary for all parties. It moots several methods for giving nations such a voice, ranging from inviting representatives of foreign powers to participate in debates in each other's high-level elections, to having representatives of all nations vote in each other's key elections or legislative bodies, or in international bodies constituted in recognition of the need for binding global deliberation about shared issues.
For the tradition, an action is rational if maximizing; for Gauthier, if expressive of a disposition it maximized to adopt; for me, if maximizing on rational preferences, ones whose possession maximizes given one's prior preferences. Decision and Game Theory and their recommendations for choice need revamping to reflect this new standard for the rationality of preferences and choices. It would not be rational when facing a Prisoner's Dilemma to adopt or co-operate from Amartya Sen's "Assurance Game" or "Other Regarding" preferences. But there are preferences which it maximizes to adopt and co-operate from.
Scientific Realists argue that it would be a miracle if scientific theories were getting more predictive without getting closer to the truth; so they must be getting closer to the truth. Van Fraassen, Laudan et al. argue that, owing to the underdetermination of theory by data (UDT), for all we know it is a miracle, a fluke. So we should not believe in even the approximate truth of theories. I argue that there is a test for who is right: suppose we are at the limit of inquiry. Suppose that we then have all the logically possible theories that are adequate to all the actual data. If they all resembled one another in their theoretical claims, then, since one of them must be true, all of them would resemble it, whichever it is. We would thus be justified in saying they all approximated the truth in the degree to which they co-resembled. If they don't all co-resemble, the Scientific Realists are wrong; more predictive theories are not necessarily closer to the theoretical truth. Prior to the limit, if, in spite of our best efforts to the contrary, all the theories we can make adequate to current data tend to co-resemble, we have inductive warrant for thinking more predictive theories are closer to the truth. If they don't co-resemble, we have inductive warrant for thinking that more predictive theories are not necessarily closer to the truth.
Gauthier claims: (1) a non-maximizing action is rational if it maximized to intend it. If one intended to retaliate in order to deter an attack, (2) retaliation is rational, for it maximized to intend it. I argue that even on sympathetic theories of intentions, actions and choices, (1) is incoherent. But I defend (2) by arguing that an action is rational if it maximizes on preferences it maximized to adopt given one's antecedent preferences. (2) is true because it maximized to adopt preferences on which it maximizes to retaliate. I thus save the theory that rational actions must maximize, and extend it into the rational criticism of preferences.
James Sterba describes the egoist as thinking only egoistic reasons decide the rationality of choices of action, and the altruist as thinking only altruistic reasons do. He holds that each in effect begs the question of what reasons there are against the other, and that the only non-question-begging and therefore rationally defensible position in this controversy is the middle-ground position that high-ranking egoistic reasons should trump low-ranking altruistic considerations and vice versa, this position being co-extensive with morality. Therefore it is rationally obligatory to choose morally. I object that the mere fact that a position is intermediate between two extremes does not mean it isn't question-begging; that Sterba's style of argument could be used to prove anything and therefore proves nothing; that it can be used to prove obvious falsehoods and therefore doesn't necessarily track the truth; that it can be used to prove the truth of contingent, empirically obvious falsehoods when, since it is necessary a priori that one ought to be moral, something can be a good argument for the rationality of morality only if the argument's style would entail only truths necessary a priori; that Sterba's argument cannot inherit plausibility from what Sterba describes as the decision-theoretic idea that when choosing among options where we have no evidence that one is more appropriate than the other, we must treat them as equally choice-worthy, since there is no such idea in decision theory, and shouldn't be (for when, for example, there is no evidence that x exists and no evidence that x does not exist, one should believe that x does not exist; one should not choose as if x's existence and non-existence were equally likely); that Sterba's argument style is not analogous to the compromise strategies recommended in bargaining theory, nor in negotiating situations (although it would profit Sterba to consider David Gauthier's approach in seeking to demonstrate that morality is both a middle ground between egoism and altruism, and is rationally obligatory); that it is problematic to see egoistic and altruistic reasons as commensurable and therefore admitting of a middle ground, especially a unique middle ground; that in any case, egoistic and altruistic reasons are not exhaustive of the reasons there could be; that the only sense in which moving to middle ground results in the parties not begging the question against each other is that it means they would be agreeing with each other and therefore not holding positions against each other, whether question-beggingly or otherwise, a fact which offers neither party any rationally compelling reason to move her position closer to that of the other (for how can the mere fact that if we agreed we wouldn't be begging the question against each other be a reason to agree?); and that even if morality is both rationally obligatory and a middle ground between egoism and altruism, it won't be in any interesting sense true that this holds because the alternative would be question-begging, which means that analyzing the basis of the rationality of morality as being found in this principle of argumentation theory misconceives the nature of morality.
David Gauthier suggested that all genuine moral problems are Prisoner's Dilemmas (PDs), and that the morally and rationally required solution to a PD is to co-operate. I say there are four other forms of moral problem, each a different way of agents failing to be in PDs because of the agents' preferences. This occurs when agents have preferences that are malevolent, self-enslaving, stingy, or bullying. I then analyze preferences as reasons for action, claiming that this means they must not target the impossible, they must be able to be acted on in the circumstances, their targets must be attainable, and having the preferences must make their targets more likely. For groups of agents to have a distribution of preferences, their preferences must jointly have those four features, this imposing a kind of universalizability requirement on possible preferences. I then claim that, if all agents began with preferences satisfying these requirements, their preferences would not be of the morally problematic sort (on pain, variously, of circularity or contradiction in the specification of their targets). Instead, they would be either morally innocent preferences, or ones which put the agents in PDs. And it would then be instrumentally rational for the agents to prefer mutual co-operation. Thus if all agents initially had rationally permissible preferences and made rational choices of actions and preferences thereafter, they would never acquire immoral preferences, and so never be rationally moved to immoral actions. Further, the states of affairs such agents would be moved to bring about would be compatible with what Rawls' agents would choose behind a veil of ignorance. Morality therefore reduces to rationality; necessarily, the actions categorically required by morality are also categorically required by rationality.
Susan Okin read Robert Nozick as taking it to be fundamental to his Libertarianism that people own themselves, and that they can acquire entitlement to other things by making them. But she thinks that, since mothers make people, all people must then be owned by their mothers, a consequence Okin finds absurd. She sees no way for Nozick to make a principled exception to the idea that people own what they make when what they make is people, concluding that Nozick's theory of entitlement must be false, and that entitlement must instead be rooted in people's needs. I say Okin misreads Nozick's Libertarianism. In fact, its fundamental principle is that, simply by being persons, people are entitled to the maximum negative liberty compatible with a like liberty for all persons. Further, Nozick, and Jan Narveson, who has taken on the advocacy of Libertarian ideas, analyze liberty as freedom to interact with things, and analyze being entitled to or having property in something as freedom to interact with it, to determine what may be done with it. People therefore have such freedom to do what they want with themselves, and such freedom to do what they want with other things, as is compatible with all persons having similar freedom. The former is what self-ownership amounts to, the latter, ownership of other things. Libertarianism's fundamental principle therefore both grounds and delimits entitlements in ways entailing that mothers don't own persons by dint of making them. Otherwise, since it would then be the prerogative of mothers to determine what shall be done with the persons they made, the persons made would lack equal liberty, this violating the fundamental principle.
David Braybrooke argues that meeting people's needs ought to be the primary goal of social policy. But he then faces the problem of how to deal with the fact that our most pressing needs, needs to be kept alive with resource-draining medical technology, threaten to exhaust our resources for meeting all other needs. I consider several solutions to this problem, eventually suggesting that the need to be kept alive is no different in kind from needs to fulfill various projects, and that needs may have a structure similar to rights, with people's legitimate needs serving as constraints on each other's entitlements to resources. This affords a set of axioms constraining possible needs. Further, if, as Braybrooke thinks, needs are created by communities approving projects, so that the means to prosecute the projects then come to count as needs, then communities are obliged to approve only projects that are co-feasible given the world's finite resources. The result is that it can be legitimate not to funnel resources towards endless life-prolongation projects.
Wittgenstein taught us that there could not be a logically private language – a language on the proper speaking of which it was logically impossible for there to be more than one expert. For then there would be no difference between this person thinking she was using the language correctly and her actually using it correctly. The distinction requires the logical possibility of someone other than her being expert enough to criticize or corroborate her usage, someone able to constitute or hold her to a standard of proper use. I shall explore the possibility of something opposite-sounding about laws, namely, that there could in principle be laws whose existence, legitimacy, goodness, and efficacy depend upon their being private, in this sense: their existence is kept secret from those who legitimately benefit from the laws and yet who would misguidedly destroy them were they to come to know of them; and it is kept secret from those who would illegitimately benefit from being able to circumvent the laws, and who could circumvent them if they knew of them. The secrecy of the laws increases their efficacy against bad behavior; and since, were the public to come to know of these laws, the public would lose its nerve and demand that the laws be rescinded, secrecy prevents the public from destroying laws that are in fact in the public interest. These laws are therefore in a way logically private: they cannot at the same time exist, have the foregoing virtues, and be public. After proposing conditions under which such laws ought to be enacted, I moot logical objections to the very idea that there could be such laws, practical objections to their workability, and moral objections to their permissibility. I conclude by suggesting that, while we normally think of secret laws as creatures of the executive branch, things functionally equivalent to secret laws could also be created by other branches of government and societal institutions, and that all of this would be compatible with the form of sovereignty that is democratically grounded in the will and interests of the people.
Gauthier and Hobbes reduce Prisoner's Dilemmas to co-ordination problems (CPs). Many think rational, face-to-face agents can solve any CP by agreed fiat. But though an agent can rationally use a symmetry-breaking technique (ST) to decide between equal options, groups cannot unless their members' STs luckily converge. Failing this, the CP is escapable only by one agent's non-rational stubbornness, or by the group's "conquest" by an outside force. Implications: one's strategic rationality is group-relative; there are some optima groups in principle cannot rationally choose; thus justice cannot always be a rationally contracted optimum. Howard Sobel provides the point of departure.
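To make the lucky-convergence point vivid, here is a small simulation; the two-option setup, the coin-flip tie-breaking rule, and the trial counts are my own illustrative assumptions rather than anything in the paper.

```python
import random

# Illustrative sketch: two equally good options, several agents.
# Each agent privately breaks the tie with a fair coin; the group co-ordinates
# only if everyone's coin happens to land the same way.
def group_converges(n_agents, options=("A", "B")):
    picks = {random.choice(options) for _ in range(n_agents)}
    return len(picks) == 1

trials = 100_000
for n in (1, 2, 5):
    rate = sum(group_converges(n) for _ in range(trials)) / trials
    print(f"{n} agent(s): convergence rate ~ {rate:.2f}")

# A lone agent always 'converges' with herself; for n agents and 2 options the
# chance is 2 * (1/2)**n, which is why lucky convergence, stubbornness, or an
# outside 'conqueror' is needed once the group gets large.
```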
Quentin Smith argues that if God exists, He had a duty to ensure life's existence; and He couldn't rationally have done so and made a big bang unless a counter-factual like "If God had made a big bang, there would have been life," was true pre-creation. But such counter-factuals are not true pre-creation. I argue that God could have made a big bang without irrationality; and that He could have ensured life without making big bangs non-random. Further, a proper understanding of the truth-conditions of counter-factuals like the one above lets them have determinate truth-values pre-creation. But the explanation of how the above counter-factual can be true pre-creation is more complicated than that offered by William Lane Craig.
This paper argues that culture itself can be a weapon against the disentitled within cultures, and against members of other cultures; and that when cultures are unjust and hegemonic, the theft and destruction of elements of their culture can be a justifiable weapon of self-defense by the oppressed. This means that in at least some conflicts, those that are really insurgencies against oppression, such theft and destruction should not be seen as war crimes, but as legitimate military maneuvers. The paper also argues that in general it is better for wars to be prosecuted by the theft and destruction of cultural property rather than by the killing and debasing of lives, so that, again, these things should not be disincentivized by being classed as war crimes, but in fact should be the preferred methods of war. This makes it all the more problematic to have these things counted as war crimes when killing and rape are not. In the course of these arguments, the distinction is made between people and their culture; and the question is mooted whether the destruction of cultural artifacts is an evil, and if so, how great an evil. Finally, an argument is given against the view that it is wrong for art and culture experts to give assessments of the value of artifacts because doing so will enable the theft and destruction of artifacts and their cultures. If we do not place value on things, we cannot know what is most good and so most worth preserving in cultures and their artifacts. So we must carry on with judging, and then make sure we act to prevent the exploitation of the things we have rightly come to value.
Discusses nuances required to balance the debate surrounding the moral and legal permissibility of using autonomous weapon systems in war-fighting.
Ideal rule utilitarianism says that a moral code C is correct if its acceptance maximizes utility; and that right action is compliance with C. But what if we cannot accept C? Rawls and L. Whitt suggest that C is correct if accepting C maximizes among codes we can accept; and that right action is compliance with C. But what if merely reinforcing a code we can't accept would maximize? G. Trianosky suggests that C is correct if reinforcing it maximizes; and that right action is action that has the effect of reinforcing compliance with C. I object to this and argue that C is correct if both accepting and reinforcing C would maximize and if C is reinforcible; and that right action consists in coming as close as possible to perfect acceptance of and compliance with C.
Ken Warmbrod thinks Quine agrees that translation is determinate if it is determinate what speakers would say in all possible circumstances; that what things would do in merely possible circumstances is determined by what their subvisible constituent mechanisms would dispose them to do, on the evidence of what alike actual mechanisms make alike actual things actually do; and that what speakers say is determined by their neural mechanisms. Warmbrod infers that people's neural mechanisms make translation of what people say determinate. I argue that the evidence of what alike actual mechanisms make alike actual things actually do underdetermines what our neural mechanisms would make us say in merely possible circumstances. So translation is indeterminate. And so too are the dispositions of physical mechanisms.
The question of whether new rules or regulations are required to govern, restrict, or even prohibit the use of autonomous weapon systems has been the subject of debate for the better part of a decade. Despite the claims of advocacy groups, the way ahead remains unclear since the international community has yet to agree on a specific definition of Lethal Autonomous Weapon Systems and the great powers have largely refused to support an effective ban. In this vacuum, the public has been presented with a heavily one-sided view of Killer Robots. This interdisciplinary volume presents a more nuanced approach to autonomous weapon systems that recognizes the need to progress beyond a discourse framed by the Terminator and HAL 9000. Re-shaping the discussion around this emerging military innovation requires a new line of thought and a willingness to challenge the orthodoxy. Lethal Autonomous Weapons focuses on exploring the moral and legal issues associated with the design, development and deployment of lethal autonomous weapons. The volume brings together some of the most prominent academics and academic-practitioners in the lethal autonomous weapons space and seeks to return some balance to the debate. As part of this effort, it recognizes that society needs to invest in hard conversations that tackle the ethics, morality, and law of these new digital technologies and understand the human role in their creation and operation.
Hume said that the reasons that determine the rationality of one's actions are the desires one has when acting: one's actions are rational iff they advance these desires. Thomas Nagel says this entails calling rational actions that absurdly conflict in aims over time. For one might have reason, in one's current desires, to begin trying to cause states one foresees having reason, in one's foreseen desires, to prevent. Instead, then, real reasons must be timeless, so that current and foreseen reasons cannot conflict. I say the desire theory does not have absurd consequences. A rational agent's desires would rationally evolve, never requiring actions conflicting in aims over time, except where it was instrumentally rational for her to change her desires, in which case such conflicts are rationally appropriate. Further, whatever sorts of things count as real reasons, since reasons can rationally require their own revision, they cannot be necessarily timeless.
I reject three theories of practical reason according to which a rational agent's ultimate reasons for acting must be unchanging: that one is rationally obliged in each choice (1) to be prudent--to advance all the desires one foresees ever having (the self-interest theory), rather than just those one has at the time of choice, or (2) to cause states of affairs that are good by some timeless, impersonal measure (Thomas Nagel), or (3) to obey permanent, universalizable deontic principles (Kant). Whether a rational agent's reasons consist in her desires, in the goodness of certain states, or in deontic principles, her reasons now can ask her to take different, conflicting things as reasons later; and contradiction results from rationally obliging her not to take the new things as reasons.