Sharon Street’s 2006 article “A Darwinian Dilemma for Realist Theories of Value” challenges the epistemological pretensions of the moral realist, of the nonnaturalist in particular. Given that “evolutionary forces have played a tremendous role in shaping the content of human evaluative attitudes,” why should one suppose such attitudes and concomitant beliefs would track an independent moral reality – especially since, on a nonnaturalist view, moral truth is causally inert? I abstract a logical skeleton of Street’s argument and, with its aid, focus on problematic assumptions regarding the (a)causality of moral truth. It emerges that there are acquired causal powers that compensate for the intrinsic impotence of moral truth, as well as two distinct levels at which truth-tracking might occur. I argue that while evolution’s selective forces do not track moral truth, that does not imply individual organisms could not have evolved that capability.
The question “Why should I be moral?” has long haunted normative ethics. How one answers it depends critically upon one’s understanding of morality, self-interest, and the relation between them. Stephen Finlay, in “Too Much Morality”, challenges the conventional interpretation of morality in terms of mutual fellowship, offering instead the “radical” view that it demands complete altruistic self-abnegation: the abandonment of one’s own interests in favor of those of any “anonymous” other. He ameliorates this with the proviso that there is no rational basis for morality’s presumption of precedence, leaving it up to each person to decide when and whether they prefer self-interested concerns to more stringent moral requirements. I counter Finlay’s radical altruism with fair egalitarianism, a more congenial interpretation of moral normativity that repudiates self-abnegation and holds instead that ceteris paribus everybody’s interests are equal. As a result, supererogation and moral sainthood become more intelligible, and the choice between self-interest and morality becomes one between different decision procedures, the particular advantage of morality being its others-compatible results.
There are Humeans and unHumeans, disagreeing as to the validity of the Treatise’s ideas regarding practical reason, but not as to their importance. The basic argument here is that the enduring irresolution of their Hume-centric debates has been fostered by what can be called the fallacy of normative monism, i.e. a failure to distinguish between two different kinds of normativity: empirical vs. rational. Humeans take the empirical normativity of personal desire to constitute the only real kind, while unHumeans insist that only the objective rationality associated with categorical morality can provide reliable normative guidance. In turn, the failure to recognize the dual nature of normativity has helped engender motivational obscurantism: as essentially causal notions, motive and motivation obscure the rational processes that lie at the heart of deliberation and choice. Once it is realized that normativity takes two different forms, each with its own distinctive role, it becomes possible to mediate if not mitigate the differences between Humeans and unHumeans. Choice will be the key to understanding practical reasoning, and its analysis will provide the basis for a belief/desire model that upends conventional wisdom regarding motivation and desire.
It is generally supposed that borderline cases account for the tolerance of vague terms, yet cannot themselves be sharply bounded, leading to infinite levels of higher-order vagueness. This higher-order vagueness subverts any formal effort to make language precise. However, it is possible to show that tolerance must diminish at higher orders. The attempt to derive it from indiscriminability founders on a simple empirical test, and we learn instead that there is no limit to how small higher-order tolerance may become. That means there is no limit to how precisely we may draw the boundaries of borderline cases, thus forestalling any requirement for higher-order vagueness.
It is natural to oppose morality and self-interest; it is customary also to oppose morality to interests as such, an inclination encouraged by Kantian tradition. However, if “interest” is understood simply as what moves a person to do this rather than that, then – if persons ever actually are good and do what is right – there must be moral interests. Bradley, in posing the “Why should I be moral?” question, raises Kant-inspired objections to the possibility of moral interests qua particular, conditional causes. The paper argues that these objections can be met if (a) one distinguishes between what makes something right and what makes something right happen, and (b) doing what is right is intrinsic to a person’s interests and not merely a means to ulterior ends. The requisite completeness of rational morality is shown to exclude pluralistic approaches. Given rational monism, people can find intrinsic advantage in morality’s justifiability, cooperativeness and communality.
The argument that follows has a certain air of prestidigitation about it. I attempt to show that, given a couple of innocent-seeming suppositions, it is possible to derive a positive and complete theory of normative ethics from the Humean maxim "You can't get ought from is." This seems, of course, absurd. If the reasoning isn't completely unhinged, you may be sure, the trick has to lie in those "innocent-seeming" props. And, in fact, you are right. But every argument has to begin somewhere, and, however questionable, those suppositions just don't seem to harbor serious normative import.
These reflections are an attempt to get to the heart of the "reason is the slave of the passions" debate. The whole point of deliberation is to arrive at a choice. What factors persons find to be choice-relevant is a purely empirical matter. This has significant consequences for the views of Hume, Williams, Nagel, Parfit and Korsgaard regarding practical reason.
The realist belief in robustly attitude-independent evaluative truths – more specifically, moral truths – is challenged by Sharon Street’s essay “A Darwinian Dilemma for Realist Theories of Value”. We know the content of human normative beliefs and attitudes has been profoundly influenced by a Darwinian natural selection process that favors adaptivity. But if simple adaptivity can explain the content of our evaluative beliefs, any connection they might have with abstract moral truth would seem to be purely coincidental. She continues the skeptical attack in “Objectivity and Truth: You’d Better Rethink It”, concentrating on the intuitionist realism of Ronald Dworkin. The latter sees the issue fundamentally as a holistic choice between moral objectivity and the genocide-countenancing consequences of abandoning objective standards. Street counters that, because of realism’s skeptical difficulties, Dworkin’s Choice (as I call it) actually works in favor of her Euthyphronic antirealism. I will argue that she misrepresents the realist’s skeptical challenge, and that clarifying the character of that challenge renders the case for normative realism much more appealing. Indeed, I claim that Street fails to exclude the genuine possibility of a rational basis for moral truth.
Contemporary discussions do not always clearly distinguish two different forms of vagueness. Sometimes focus is on the imprecision of predicates, and sometimes the indefiniteness of statements. The two are intimately related, of course. A predicate is imprecise if there are instances to which it neither definitely applies nor definitely does not apply, instances of which it is neither definitely true nor definitely false. However, indefinite statements will occur in everyday discourse only if speakers in fact apply imprecise predicates to such indefinite instances. (What makes an instance indefinite is, it should be clear, predicate-relative.) The basic issue in the present inquiry is whether this indefiniteness ever really occurs; the basic question is, why should it ever occur?
Is 'vague' vague? Is the meaning of 'true' vague? Is higher-order vagueness unavoidable? Is it possible to say precisely what it is to say something precisely? These questions, deeply interrelated and of fundamental importance to logic and semantics, have been addressed recently by Achille Varzi in articles focused on an ingenious attempt by Roy Sorensen ("An Argument for the Vagueness of 'Vague'") to demonstrate that 'vague' is vague.
Varzi has recently joined a thread of arguments originating in an attempt by Sorensen (1985) to demonstrate that the predicate ‘vague’ is itself vague. Sorensen's conclusion is significant in that it has provided the basis for a subsequent effort by Hyde (1994) to defend the legitimacy of supposing higher-order vagueness. Varzi's contribution to this debate is twofold. First, contra earlier criticism by Deas (1989), he claims that Sorensen's result is sound so far as it goes. Second, he argues that despite this it cannot be used as Hyde wishes on pain of circularity. I am not interested in the latter argument—it is examined in Hyde (2003)—but rather wish to defend and elaborate Deas's criticism of Sorensen against Varzi's repudiation.
According to Horgan's transvaluationist approach, the robustness that characterizes vague terms is inherently incoherent. He analyzes that robustness into two conceptual poles, individualistic and collectivistic, and ascribes the incoherence to the former. However, he claims vague terms remain useful nonetheless, because the collectivistic pole can be realized with a suitable nonclassical logic and can quarantine the incoherence arising out of the individualistic pole. I argue, on the contrary, that the nonclassical logic fails to resolve the difficulty and that the incoherence afflicts Horgan's collectivistic pole as well, consequently invalidating the entire transvaluationist approach. An alternative, coherent conception of robustness is suggested.