The authors of Austere Realism describe and defend a provocative ontological-cum-semantic position, asserting that the right ontology is minimal or austere, in that it excludes numerous common-sense and scientific posits, and that statements employing such posits are nonetheless true, when truth is understood to be semantic correctness under contextually operative semantic standards. Terence Horgan and Matjaz Potrc argue that austere realism emerges naturally from consideration of the deep problems within the naive common-sense approach to truth and ontology. They offer an account of truth that confronts these deep internal problems and is independently plausible: contextual semantics, which asserts that truth is semantically correct affirmability. Under contextual semantics, much ordinary and scientific thought and discourse is true because it stands in a relation of indirect correspondence to the world. After offering further arguments for austere realism and addressing objections to it, Horgan and Potrc consider various alternative austere ontologies. They advance a specific version they call "blobjectivism"--the view that the right ontology includes only one concrete particular, the entire cosmos, which, although it has enormous local spatiotemporal variability, does not have any proper parts. The arguments in Austere Realism are powerfully made and concisely and lucidly set out. The authors' contentions and their methodological approach--products of a decade-long collaboration--will generate lively debate among scholars in metaphysics, ontology, and philosophy.
I raise skeptical doubts about the prospects of Bayesian formal epistemology for providing an adequate general normative model of epistemic rationality. The notion of credence, I argue, embodies a very dubious psychological myth, viz., that for virtually any proposition p that one can entertain and understand, one has some quantitatively precise, 0-to-1 ratio-scale, doxastic attitude toward p. The concept of credence faces further serious problems as well—different ones depending on whether credence 1 is construed as full belief or instead is construed as absolute certainty. I argue that the notion of an "ideal Bayesian reasoner" cannot serve as a normative ideal that actual human agents should seek to emulate as closely as they can, because different such reasoners who all have the same evidence as oneself—no single one of them being uniquely psychologically most similar to oneself—will differ from one another in their credences. I argue that epistemic probability, properly understood, is quantitative degree of evidential support relative to one's evidence, and that principled epistemic probabilities arise only under quite special evidential circumstances—which means that epistemic probability is ill suited to figure centrally within general norms of human epistemic rationality.
It has often been thought that our knowledge of ourselves is _different_ from, perhaps in some sense _better_ than, our knowledge of things other than ourselves. Indeed, there is a thriving research area in epistemology dedicated to seeking an account of self-knowledge that would articulate and explain its difference from, and superiority over, other knowledge. Such an account would thus illuminate the descriptive and normative difference between self-knowledge and other knowledge. At the same time, self-knowledge has also encountered its share of skeptics – philosophers who refuse to accord it any descriptive, let alone normative, distinction. In this paper, we argue that there is at least one _species_ of self-knowledge that is different from, and better than, other knowledge. It is a specific kind of knowledge of one's concurrent phenomenal experiences. Call knowledge of one's own phenomenal experiences _phenomenal knowledge_. Our claim is that some (though not all) phenomenal knowledge is different from, and better than, non-phenomenal knowledge.
Metaethics, understood as a distinct branch of ethics, is often traced to G. E. Moore's 1903 classic, Principia Ethica. Whereas normative ethics is concerned to answer first order moral questions about what is good and bad, right and wrong, metaethics is concerned to answer second order non-moral questions about the semantics, metaphysics, and epistemology of moral thought and discourse. Moore has continued to exert a powerful influence, and the sixteen essays here represent the most up-to-date work in metaethics after, and in some cases directly inspired by, the work of Moore.
In his 1958 seminal paper "Saints and Heroes", J. O. Urmson argued that the then dominant tripartite deontic scheme of classifying actions as being exclusively either obligatory, or optional in the sense of being morally indifferent, or wrong, ought to be expanded to include the category of the supererogatory. Colloquially, this category includes actions that are "beyond the call of duty" and hence actions that one has no duty or obligation to perform. But it is a controversial category. Some have argued that the concept of supererogation is paradoxical because on one hand, supererogatory actions are supposed to be morally good, indeed morally best, actions. But then if they are morally best, why aren't they morally required, contrary to the assumption that they are morally optional? In short: how can an action that is morally best to perform fail to be what one is morally required to do? The source of this alleged paradox has been dubbed the 'good-ought tie-up'. In our article, we address this alleged paradox by first making a phenomenological case for the reality of instances of genuine supererogatory actions, and then, by reflecting on the relevant phenomenology, explaining why there is no genuine paradox. Our explanation appeals to the idea that moral reasons can play what we call a merit conferring role. The basic idea is that moral reasons that favor supererogatory actions function to confer merit on the actions they favor—they play a merit conferring role—and can do so without also requiring the actions in question. Hence, supererogatory actions can be both good and morally meritorious to perform yet still be morally optional. Recognition of a merit conferring role unties the good-ought tie-up, and there are good reasons, independent of helping to resolve the alleged paradox, for recognizing this sort of role that moral reasons may play.
According to rationalism regarding the psychology of moral judgment, people's moral judgments are generally the result of a process of reasoning that relies on moral principles or rules. By contrast, intuitionist models of moral judgment hold that people generally come to have moral judgments about particular cases on the basis of gut-level, emotion-driven intuition, and do so without reliance on reasoning and hence without reliance on moral principles. In recent years the intuitionist model has been forcefully defended by Jonathan Haidt. One important implication of Haidt's model is that in giving reasons for their moral judgments people tend to confabulate – the reasons they give in attempting to explain their moral judgments are not really operative in producing those judgments. Moral reason-giving on Haidt's view is generally a matter of post hoc confabulation. Against Haidt, we argue for a version of rationalism that we call 'morphological rationalism.' We label our version 'morphological' because according to it, the information contained in moral principles is embodied in the standing structure of a typical individual's cognitive system, and this morphologically embodied information plays a causal role in the generation of particular moral judgments. The manner in which the principles play this role is via 'proceduralization' – such principles operate automatically. In contrast to Haidt's intuitionism, then, our view does not imply that people's moral reason-giving practices are matters of confabulation. In defense of our view, we appeal to what we call the 'nonjarring' character of the phenomenology of making moral judgments and of giving reasons for those judgments.
Existence monism is defended against priority monism. Schaffer's arguments for priority monism and against pluralism are reviewed, such as the argument from gunk. The whole does not require parts. Ontological vagueness is impossible. If ordinary objects are in the right ontology then they are vague. So ordinary objects are not included in the right ontology; and hence thought and talk about them cannot be accommodated via fully ontological vindication. Partially ontological vindication is not viable. Semantical theorizing outside the ontology room and semantical theorizing in the doorway. Existence monism is theoretically preferable to priority monism.
Moral phenomenology is concerned with the elements of one's moral experiences that are generally available to introspection. Some philosophers argue that one's moral experiences, such as experiencing oneself as being morally obligated to perform some action on some occasion, contain elements that (1) are available to introspection and (2) carry ontological objectivist purport (the argument from phenomenological introspection). The neutrality thesis, by contrast, holds that the phenomenological data regarding one's moral experiences that are available to introspection are neutral with respect to the issue of whether such experiences carry ontological objectivist purport.
In "Milk, Honey, and the Good Life on Moral Twin Earth", David Copp explores some ways in which a defender of synthetic moral naturalism might attempt to get around our Moral Twin Earth argument. Copp nicely brings out the force of our argument, not only through his exposition of it, but through his attempt to defeat it, since his efforts, we think, only help to make manifest the deep difficulties the Moral Twin Earth argument poses for the synthetic moral naturalist.
I advocate a two-part view concerning vagueness. On one hand I claim that vagueness is logically incoherent; but on the other hand I claim that vagueness is also a benign, beneficial, and indeed essential feature of human language and thought. I will call this view transvaluationism, a name which seems to me appropriate for several reasons. First, the term suggests that we should move beyond the idea that the successive statements in a sorites sequence can be assigned differing truth values in some logically coherent way that fully respects the nature of vagueness - a way that fully eschews any arbitrarily precise semantic transitions. We should transcend this impossible goal by accepting that vagueness harbors logical incoherence. Second, just as Nietzsche held that one can overcome nihilism by embracing what he called the transvaluation of all values, my position affirms vagueness, rather than despairing in the face of the logical absurdity residing at its very core. This affirmation amounts to a transvaluation of truth values, as far as sorites sequences are concerned. Third, the term 'transvaluationism' has a nice ring to it, especially since one of the principal philosophical approaches to vagueness is called supervaluationism. I will call the first claim of transvaluationism, that vagueness is logically incoherent, the incoherence thesis. I will call the second claim, that vagueness is benign, beneficial, and essential, the legitimacy thesis. The legitimacy thesis, taken by itself, seems overwhelmingly plausible; anyone who denies it assumes a heavy burden of proof. But prima facie, it seems dubious that the legitimacy thesis can be maintained in conjunction with the incoherence thesis.
For, there is reason to doubt whether there is any cogent way to embrace the incoherence thesis without thereby becoming mired in what Williamson (1994) calls global nihilism about vagueness - the view that vague terms are empty (i.e., they do not, and cannot, apply to anything). Global nihilism, Williamson argues, has such destructively negative consequences that it does not deserve to be taken seriously - for instance, the consequence that vastly many of our common sense beliefs are false, and the consequence that these beliefs are not even useful (since the constituent terms in 'Common sense beliefs are useful' are vague and hence this statement turns out, given the incoherence thesis, to be false itself). In short, the idea that one can adopt the incoherence thesis and then somehow transcend nihilism might initially seem hopelessly optimistic; transvaluationism would then be an unattainable, chimerical goal rather than an intelligible and conceptually stable position concerning vagueness. Given certain widely held philosophical views about how language and thought must map onto the world in order for statements and the beliefs they express to be true - views that fall appropriately under the label 'referential semantics' - transvaluationism probably is a chimerical goal.
We propose a metaethical view that combines the cognitivist idea that moral judgments are genuine beliefs and moral utterances express genuine assertions with the idea that such beliefs and utterances are nondescriptive in their overall content. This sort of view has not been recognized among the standard metaethical options because it is generally assumed that all genuine beliefs and assertions must have descriptive content. We challenge this assumption and thereby open up conceptual space for a new kind of metaethical view. In developing our brand of nondescriptivist cognitivism we do the following: (1) articulate a conception of belief (and assertion) that does not require the overall declarative content of beliefs (and assertions) to be descriptive content; (2) make a case for the independent plausibility of this conception of belief and assertion; and (3) argue that our view, formulated in a way that draws upon the proposed conception of belief, has significant comparative advantages over descriptivist forms of cognitivism.
In Chapters 4 and 5 of his 1998 book From Metaphysics to Ethics: A Defence of Conceptual Analysis, Frank Jackson propounds and defends a form of moral realism that he calls both 'moral functionalism' and 'analytical descriptivism'. Here we argue that this metaethical position, which we will henceforth call 'analytical moral functionalism', is untenable. We do so by applying a generic thought-experimental deconstructive recipe that we have used before against other views that posit moral properties and identify them with certain natural properties, a recipe that we believe is applicable to virtually any metaphysically naturalist version of moral realism. The recipe deploys a scenario we call Moral Twin Earth.
Moral phenomenology is (roughly) the study of those features of occurrent mental states with moral significance which are accessible through direct introspection, whether or not such states possess phenomenal character – a what-it-is-likeness. In this paper, as the title indicates, we introduce and make prefatory remarks about moral phenomenology and its significance for ethics. After providing a brief taxonomy of types of moral experience, we proceed to consider questions about the commonality within and distinctiveness of such experiences, with an eye on some of the main philosophical issues in ethics and how moral phenomenology might be brought to bear on them. In discussing such matters, we consider some of the doubts about moral phenomenology and its value to ethics that are brought up by Walter Sinnott-Armstrong and Michael Gill in their contributions to this issue.
Alvin Goldman’s contributions to contemporary epistemology are impressive—few epistemologists have provided others so many occasions for reflecting on the fundamental character of their discipline and its concepts. His work has informed the way epistemological questions have changed (and remained consistent) over the last two decades. We (the authors of this paper) can perhaps best suggest our indebtedness by noting that there is probably no paper on epistemology that either of us individually or jointly have produced that does not in its notes and references bear clear testimony to the influence of Professor Goldman’s arguments. The present paper is no exception (and this would be a particularly inapt place to break with our tradition of indebtedness). Professor Goldman has produced a series of discussions that we find particularly important for coming to terms with the venerable idea that there may be truths that can be known a priori (Goldman 1992a, 1992b, 1999). We do not altogether follow his lead: while he draws on the idea that a priori justification has something to do with innateness or processes, we prefer to accentuate the idea that a priori justification turns on conceptually grounded truths and access via acquired conceptual competence (at least in many significant philosophical cases). Still, in developing our understanding we have been aided by much that Professor Goldman says regarding concepts, conceptual competence, and related psychological processes. The influences should become progressively clear, particularly in the later sections of this paper. What would it take for there to be a priori knowledge or justification? We can begin by reflecting on a widely agreed on answer to this question—one that purports to identify something that would at least be adequate for a priori justification.
The answer will then serve as one anchor for the present investigation, a bit of shared ground on which empiricists and rationalists can, and typically do, agree.
Morphological content is information that is implicitly embodied in the standing structure of a cognitive system and is automatically accommodated during cognitive processing without first becoming explicit in consciousness. We maintain that much belief-formation in human cognition is essentially morphological: i.e., it draws heavily on large amounts of morphological content, and must do so in order to tractably accommodate the holistic evidential relevance of background information possessed by the cognitive agent. We also advocate a form of experiential evidentialism concerning epistemic justification—roughly, the view that the justification-status of an agent's beliefs is fully determined by the character of the agent's conscious experience. We have previously defended both the thesis that much belief-formation is essentially morphological, and also a version of evidentialism. Here we explain how experiential evidentialism can be smoothly and plausibly combined with the thesis that much of the cognitive processing that generates justified beliefs is essentially morphological. The leading idea is this: even though epistemically relevant morphological content does not become explicit in consciousness during the process of belief-generation, nevertheless such content does affect the overall character of conscious experience in an epistemically significant way: it is implicit in conscious experience, and is implicitly appreciated by the experiencing agent.
Causal compatibilism claims that even though physics is causally closed, and even though mental properties are multiply realizable and are not identical to physical causal properties, mental properties are causal properties nonetheless. This position asserts that there is genuine causation at multiple descriptive/ontological levels; physics-level causal claims are not really incompatible with mentalistic causal claims. I articulate and defend a version of causal compatibilism that incorporates three key contentions. First, causation crucially involves robust patterns of counterfactual dependence among properties. Second, often several distinct such patterns, all subsuming a single phenomenon, exist at different descriptive/ontological levels. Third, the concept of causation is governed by an implicit contextual parameter that normally determines a specific descriptive/ontological level as the contextually relevant level, for the context-sensitive semantic evaluation of causal statements.
Is conceptual relativity a genuine phenomenon? If so, how is it properly understood? And if it does occur, does it undermine metaphysical realism? These are the questions we propose to address. We will argue that conceptual relativity is indeed a genuine phenomenon, albeit an extremely puzzling one. We will offer an account of it. And we will argue that it is entirely compatible with metaphysical realism. Metaphysical realism is the view that there is a world of objects and properties that is independent of our thought and discourse (including our schemes of concepts) about such a world. Hilary Putnam, a former proponent of metaphysical realism, later gave it up largely because of the alleged phenomenon that he himself has given the label ‘conceptual relativity’. One of the key ideas of conceptual relativity is that certain concepts—including such fundamental concepts as object, entity, and existence—have a multiplicity of different and incompatible uses (Putnam 1987, p. 19; 1988, pp. 110–14). According to Putnam, once we recognize the phenomenon of conceptual relativity we must reject metaphysical realism: “The suggestion . . . is that what is (by commonsense standards) the same situation can be described in many different ways, depending on how we use the words. The situation does not itself legislate how words like “object,” “entity,” and “exist” must be used. What is wrong with the notion of objects existing “independently” of conceptual schemes is that there are no standards for the use of even the logical notions apart from conceptual choices.” (Putnam 1988, p. 114) Putnam’s intriguing reasoning in this passage is difficult to evaluate directly, because conceptual relativity is philosophically perplexing and in general is not well understood. In this paper we propose a construal of conceptual relativity that clarifies it considerably and explains how it is possible despite its initial air of paradox.
We then draw upon this construal to explain why, contrary to Putnam and others, conceptual relativity does not conflict with metaphysical realism, but in fact comports well with it.
I maintain, in defending “thirdism,” that Sleeping Beauty should do Bayesian updating after assigning the “preliminary probability” 1/4 to the statement S: “Today is Tuesday and the coin flip is heads.” (This preliminary probability obtains relative to a specific proper subset I of her available information.) Pust objects that her preliminary probability for S is really zero, because she could not be in an epistemic situation in which S is true. I reply that the impossibility of being in such an epistemic situation is irrelevant, because relative to I, statement S nonetheless has degree of evidential support 1/4.
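The updating step described in this abstract can be illustrated numerically. The sketch below is our reconstruction from the abstract alone, not the paper's own formalism: four equiprobable day/coin cells receive preliminary probability 1/4 each, and conditionalizing on the rest of Beauty's information eliminates the "Tuesday and heads" cell (on heads she is not awakened on Tuesday), leaving probability 1/3 for heads.

```python
from fractions import Fraction

# Preliminary probabilities over day/coin cells, each 1/4, relative to the
# restricted information subset I mentioned in the abstract (our labels).
prelim = {
    ("Monday", "heads"): Fraction(1, 4),
    ("Monday", "tails"): Fraction(1, 4),
    ("Tuesday", "heads"): Fraction(1, 4),  # this cell is the statement S
    ("Tuesday", "tails"): Fraction(1, 4),
}

# Bayesian updating on the remainder of Beauty's information: if the coin
# lands heads she is not awakened on Tuesday, so (Tuesday, heads) is ruled out.
compatible = {cell: p for cell, p in prelim.items() if cell != ("Tuesday", "heads")}
total = sum(compatible.values())
posterior = {cell: p / total for cell, p in compatible.items()}

print(posterior[("Monday", "heads")])  # probability of heads: 1/3
```

Since heads is now possible only in the (Monday, heads) cell, the posterior probability of heads is 1/3, the thirder verdict.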
The hypothesis of the mental state-causation of behavior (the MSC hypothesis) asserts that the behaviors we classify as actions are caused by certain mental states. A principal reason often given for trying to secure the truth of the MSC hypothesis is that doing so is allegedly required to vindicate our belief in our own agency. I argue that the project of vindicating agency needs to be seriously reconceived, as does the relation between this project and the MSC hypothesis. Vindication requires addressing what I call the agent-exclusion problem: the prima facie incompatibility between the intentional content of agentive experience and certain metaphysical hypotheses often espoused in philosophy.
Within cognitive science, mental processing is often construed as computation over mental representations—i.e., as the manipulation and transformation of mental representations in accordance with rules of the kind expressible in the form of a computer program. This foundational approach has encountered a long-standing, persistently recalcitrant, problem often called the frame problem; it is sometimes called the relevance problem. In this paper we describe the frame problem and certain of its apparent morals concerning human cognition, and we argue that these morals have significant import regarding both the nature of moral normativity and the human capacity for mastering moral normativity. The morals of the frame problem bode well, we argue, for the claim that moral normativity is not fully systematizable by exceptionless general principles, and for the correlative claim that such systematizability is not required in order for humans to master moral normativity.
How should the metaphysical hypothesis of materialism be formulated? What strategies look promising for defending this hypothesis? How good are the prospects for its successful defense, especially in light of the infamous "hard problem" of phenomenal consciousness? I will say something about each of these questions.
In his 2013 Theoria article, “Unreliable Intuitions: A New Reply to the Moral Twin-Earth Argument,” Jorn Sonderholm attempts to undermine our moral twin earth (MTE) argument against Richard Boyd's moral semantics by debunking the semantic intuitions that are prompted by reflection on the thought experiment featured in the MTE argument. We divide our reply into three main sections. In section 1, we briefly review Boyd's moral semantics and our MTE argument against this view. In section 2, we set forth what we take to be Sonderholm's master debunking argument, along with his proposed Boydian explanation of the semantic intuitions he seeks to debunk. Then in section 3, we mount our defence of the semantic intuitions under scrutiny, arguing on abductive grounds that, contrary to Sonderholm, the semantic intuitions generated by reflection on MTE scenarios are to be trusted in evaluating the plausibility of Boydian moral semantics. Section 4 is our summary and conclusion.
Inspired and informed by the work of Russ Hurlburt and Eric Schwitzgebel in their 'Describing Inner Experience', we do two things in this commentary. First, we discuss the degree of reliability that introspective methods might be expected to deliver across a range of types of experience. Second, we explore the phenomenology of agency as it bears on the topic of free will. We pose a number of potential problems for attempts to use introspective methods to answer various questions about the phenomenology of free-will experience -- questions such as this: does such experience have metaphysical-libertarian satisfaction conditions? We then discuss the prospects for overcoming some of these problems via approaches such as Hurlburt's DES methodology, the so-called 'talk aloud' protocol, and forms of abduction that combine introspection with non-introspection-based forms of evidence.
The authors argue in favor of the “nonconciliation” (or “steadfast”) position concerning the problem of peer disagreement. Throughout the paper they place heavy emphasis on matters of phenomenology—on how things seem epistemically with respect to the net import of one’s available evidence vis-à-vis the disputed claim p, and on how such phenomenology is affected by the awareness that an interlocutor whom one initially regards as an epistemic peer disagrees with oneself about p. Central to the argument is a nested goal/sub-goal hierarchy that the authors claim is inherent to the structure of epistemically responsible belief-formation: pursuing true beliefs by pursuing beliefs that are objectively likely given one’s total available evidence; pursuing this sub-goal by pursuing beliefs that are likely true (given that evidence) relative to one’s own deep epistemic sensibility; and pursuing this sub-sub-goal by forming beliefs in accordance with one’s own all-in, ultima facie, epistemic seemings.
We propose an approach to epistemic justification that incorporates elements of both reliabilism and evidentialism, while also transforming these elements in significant ways. After briefly describing and motivating the non-standard version of reliabilism that Henderson and Horgan call “transglobal” reliabilism, we harness some of Henderson and Horgan’s conceptual machinery to provide a non-reliabilist account of propositional justification (i.e., evidential support). We then invoke this account, together with the notion of a transglobally reliable belief-forming process, to give an account of doxastic justification.
The philosophical account of vagueness I call "transvaluationism" makes three fundamental claims. First, vagueness is logically incoherent in a certain way: it essentially involves mutually unsatisfiable requirements that govern vague language, vague thought-content, and putative vague objects and properties. Second, vagueness in language and thought (i.e., semantic vagueness) is a genuine phenomenon despite possessing this form of incoherence—and is viable, legitimate, and indeed indispensable. Third, vagueness as a feature of objects, properties, or relations (i.e., ontological vagueness) is impossible, because of the mutually unsatisfiable conditions that such putative items would have to meet. In this paper I set forth the core claims of transvaluationism in a way that acknowledges and explicitly addresses a challenging critique by Timothy Williamson of my prior attempts to articulate and defend this approach to vagueness. I sketch my favored approach to truth and ontological commitment, and I explain how it accommodates the impossibility of ontological vagueness. I argue that any approach to the logic and semantics of vagueness that both (i) eschews epistemicism and (ii) thoroughly avoids positing any arbitrary sharp boundaries (either first-order or higher-order) will have to be not an alternative to transvaluationism but an implementation of it. I sketch my reasons for repudiating epistemicism. I briefly describe my current thinking about how to accommodate intentional mental properties with vague content within an ontology that eschews ontological vagueness. And I revisit the idea, which played a key role in my earlier articulations of transvaluationism, that moral conflicts provide an illuminating model for understanding vagueness.
We present a new argument for the claim that in the Sleeping Beauty problem, the probability that the coin comes up heads is 1/3. Our argument depends on a principle for the updating of probabilities that we call ‘generalized conditionalization’, and on a species of generalized conditionalization we call ‘synchronic conditionalization on old information’. We set forth a rationale for the legitimacy of generalized conditionalization, and we explain why our new argument for thirdism is immune to two attacks that Pust (Synthese 160:97–101, 2008) has leveled at other arguments for thirdism.
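The thirder verdict can be checked against the long-run frequencies generated by the Sleeping Beauty protocol. The Monte Carlo sketch below is our illustration, not the authors' generalized-conditionalization argument: heads yields one awakening (Monday), tails yields two (Monday and Tuesday), and the frequency of heads among awakenings converges to 1/3.

```python
import random

def heads_frequency_among_awakenings(trials: int = 200_000, seed: int = 1) -> float:
    """Simulate the Sleeping Beauty protocol and return the fraction of
    awakenings at which the coin shows heads.

    heads -> one awakening (Monday only); tails -> two awakenings
    (Monday and Tuesday)."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        n_awakenings = 1 if heads else 2
        total_awakenings += n_awakenings
        if heads:
            heads_awakenings += 1  # heads contributes exactly one awakening
    return heads_awakenings / total_awakenings

print(round(heads_frequency_among_awakenings(), 3))  # close to 1/3
```

This frequency fact is the standard motivation for thirdism; the paper's own route to 1/3 proceeds instead via synchronic conditionalization on old information.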
Moral judgments are typically experienced as being categorically authoritative – i.e. as having a prescriptive force that is motivationally gripping independently of both conventional norms and one's pre-existing desires, and justificationally trumps both conventional norms and one's pre-existing desires. We argue that this key feature is best accommodated by the meta-ethical position we call ‘cognitivist expressivism’, which construes moral judgments as sui generis psychological states whose distinctive phenomenological character includes categorical authoritativeness. Traditional versions of expressivism cannot easily accommodate the justificationally trumping aspect of categorical authoritativeness, because they construe moral judgments as fundamentally desire-like. Moral realism cannot easily accommodate the aspect of inherent motivational grip, because realism construes moral judgments as a species of factual belief.