In this paper I propose to examine the cognitive status of mystical experience. There are, I think, three distinct but overlapping sorts of religious experience. In the first place, there are two kinds of mystical experience. The extrovertive or nature mystic identifies himself with a world which is both transfigured and one. The introvertive mystic withdraws from the world and, after stripping the mind of concepts and images, experiences union with something which can be described as an undifferentiated unity. Introvertive mysticism is a more important phenomenon than extrovertive mysticism. Numinous experiences are complex experiences involving dread, awe, wonder, and fascination. One finds oneself confronted with something which is radically unlike ordinary objects. Before its overwhelming majesty and power, one is nothing but dust and ashes. In contrasting oneself with its uncanny beauty and goodness, one experiences one's own uncleanness and ugliness. The experiences bound up with the devotional life of the ordinary believer are also religious in character. Nevertheless these more ordinary experiences should, I think, be distinguished both from numinous experiences and from mystical experiences, for they do not appear to involve the sense of immediate presence which characterises the latter.
In the movie Regarding Henry, the main character, Henry Turner, is a lawyer who suffers brain damage as a result of being shot during a robbery. Before being wounded, the Old Henry Turner had been a successful lawyer, admired as a fierce competitor and well-known for his killer instinct. As a result of the injury to his brain, the New Henry Turner loses the personality traits that had made the Old Henry such a formidable adversary.
In a series of important and influential books, Wilfred Cantwell Smith has convincingly argued that religious traditions are misunderstood if one does not grasp the faith which they express, that these traditions are not static but fluid, and that as a result of greater knowledge and increased contact between members of different traditions, we have entered a period in which it is no longer possible for the traditions to develop in relative isolation. This paper is devoted to an important aspect of Smith's thought – his distinction between faith and propositional belief.
The falsifiability of theistic assertions no longer appears to be the burning issue it once was, and perhaps this is all to the good. For one thing, it was never entirely clear just what demand was being made of the theist. In this paper I shall not discuss the nature or legitimacy of the falsification requirement as applied to theistic assertions. Instead I shall argue that some of the reasons which have been offered to show that these assertions are not falsifiable are by no means conclusive. Since the most plausible bit of anti-theistic evidence is the existence of evil, it would seem to be legitimate for us to devote our attention to arguments which are designed to show that the theist does not allow the presence of evil to count against his claims.
"This book, one of the most frequently cited works on Martin Heidegger in any language, belongs on any short list of classic studies of Continental philosophy. William J. Richardson explores the famous turn in Heidegger's thought after Being and Time and demonstrates how this transformation was radical without amounting to a simple contradiction of his earlier views." "In a full account of the evolution of Heidegger's work as a whole, Richardson provides a detailed, systematic, and illuminating account of both divergences and fundamental continuities in Heidegger's philosophy, especially in light of recently published works. He demonstrates that the "thinking" of Being for the later Heidegger has exactly the same configuration as the radical phenomenology of the early Heidegger, once he has passed through the "turning" of his way." Including as a preface the letter that Heidegger wrote to Richardson, along with a new preface and epilogue by the author, the new edition of this valuable guide will be an essential resource for students and scholars for many years to come.
Inscrutability arguments threaten to reduce interpretationist metasemantic theories to absurdity. Can we find some way to block the arguments? A highly influential proposal in this regard is David Lewis' 'eligibility' response: some theories are better than others, not because they fit the data better, but because they are framed in terms of more natural properties. The purposes of this paper are to outline the nature of the eligibility proposal, making the case that it is not ad hoc, but instead flows naturally from three independently motivated elements; and to show that severe limitations afflict the proposal. In conclusion, I pick out the element of the eligibility response that is responsible for the limitations: future work in this area should therefore concentrate on amending this aspect of the overall theory.
Might it be that the world itself, independently of what we know about it or how we represent it, is metaphysically indeterminate? This article tackles in turn a series of questions: In what sorts of cases might we posit metaphysical indeterminacy? What is it for a given case of indefiniteness to be 'metaphysical'? How does the phenomenon relate to 'ontic vagueness', the existence of 'vague objects', 'de re indeterminacy' and the like? How might the logic work? Are there reasons for postulating this distinctive sort of indefiniteness? Conversely, are there reasons for denying that there is indefiniteness of this sort?
Lewis (1973) gave a short argument against conditional excluded middle, based on his treatment of ‘might’ counterfactuals. Bennett (2003), with much of the recent literature, gives an alternative take on ‘might’ counterfactuals. But Bennett claims the might-argument against CEM still goes through. This turns on a specific claim I call Bennett’s Hypothesis. I argue that independently of issues to do with the proper analysis of might-counterfactuals, Bennett’s Hypothesis is inconsistent with CEM. But Bennett’s Hypothesis is independently objectionable, so we should resolve this tension by dropping the Hypothesis, not by dropping CEM.
What implications, if any, does evolutionary biology have for metaethics? Many believe that our evolutionary background supports a deflationary metaethics, providing a basis at least for debunking ethical realism. Some arguments for this conclusion appeal to claims about the etiology of the mental capacities we employ in ethical judgment, while others appeal to the etiology of the content of our moral beliefs. In both cases the debunkers’ claim is that the causal roles played by evolutionary factors raise deep epistemic problems for realism: if ethical truths are objective or independent of our evaluative attitudes, as realists maintain, then we lose our justification for our ethical beliefs once we become aware of the evolutionary shaping of our ethical capacities or beliefs, which would not have disposed us reliably to track independent ethical truths; realism, they claim, thus saddles us with ethical skepticism. I distinguish and spell out various evolutionary debunking arguments along these lines and argue that they all fail: the capacity etiology argument fails to raise any special or serious problem for realism, and the content etiology arguments all rely on strong explanatory claims about our moral beliefs that are simply not supported by the science unless it is supplemented by philosophical claims that just beg the question against realism from the start. While the various debunking arguments do bring out some interesting commitments of ethical realism, and even raise some good challenges as realists develop positive moral epistemologies, they fall far short of their debunking ambitions.
"We hold these truths to be self-evident..." So begins the U.S. Declaration of Independence. What follows those words is a ringing endorsement of universal rights, but it is far from self-evident. Why did the authors claim that it was? William Talbott suggests that they were trapped by a presupposition of Enlightenment philosophy: that there was only one way to rationally justify universal truths, by proving them from self-evident premises. With the benefit of hindsight, it is clear that the authors of the U.S. Declaration had no infallible source of moral truth. For example, many of the authors of the Declaration of Independence endorsed slavery. The wrongness of slavery was not self-evident; it was a moral discovery. In this book, William Talbott builds on the work of John Rawls, Jurgen Habermas, J.S. Mill, Amartya Sen, and Henry Shue to explain how, over the course of history, human beings have learned how to adopt a distinctively moral point of view from which it is possible to make universal, though not infallible, judgments of right and wrong. He explains how this distinctively moral point of view has led to the discovery of the moral importance of nine basic rights. Undoubtedly, the most controversial issue raised by the claim of universal rights is the issue of moral relativism. How can the advocate of universal rights avoid being a moral imperialist? In this book, Talbott shows how to defend basic individual rights from a universal moral point of view that is neither imperialistic nor relativistic. Talbott avoids moral imperialism by insisting that all of us, himself included, have moral blindspots and that we usually depend on others to help us to identify those blindspots. Talbott's book speaks not only to debates on human rights but also to broader issues of moral and cultural relativism, and will interest a broad range of readers.
Worlds where things divide forever ("gunk" worlds) are apparently conceivable. The conceivability of such scenarios has been used as an argument against "nihilist" or "near-nihilist" answers to the special composition question. I argue that the mereological nihilist has the resources to explain away the illusion that gunk is possible.
A critical survey of some attempts to define ‘computer’, beginning with some informal ones, then critically evaluating those of three philosophers, and concluding with an examination of whether the brain and the universe are computers.
Revisionary theories of logic or truth require revisionary theories of mind. This essay outlines nonclassically based theories of rational belief, desire, and decision making, singling out the supervaluational family for special attention. To see these nonclassical theories of mind in action, this essay examines a debate between David Lewis and Derek Parfit over what matters in survival. Lewis argued that indeterminacy in personal identity allows caring about psychological connectedness and caring about personal identity to amount to the same thing. The essay argues that Lewis's treatment of two of Parfit's puzzle cases—degreed survival and fission—presupposes different nonclassical treatments of belief and desire.
I begin by distinguishing two general approaches to metaethics and ontology. One in effect puts our experience as engaged ethical agents on hold while independent metaphysical and epistemological inquiries, operating by their own lights, deliver metaethical verdicts on acceptable interpretations of our ethical lives; the other instead keeps engaged ethical experience in focus and allows our reflective interpretation of it to shape our metaphysical and epistemological views, including our ontology. While the former approach often leads to deflationary views, the latter may lead us to enrich our metaethical picture as needed to capture robust objectivity and categorical normative authority for ethics. Assuming, as I have argued elsewhere, that this requires positing irreducibly evaluative or normative properties and facts, the question I take up here is what ontological implications this has. I argue against quietist non-naturalist views, which maintain that positing such properties and facts either has no ontological implications or has only domain-specific ontological implications that likewise imply nothing about what the world contains. Against these views, I advocate a worldly, dual-aspect view, locating irreducibly evaluative or normative properties as features of relevant worldly things. But while I have previously defended this view as a form of non-naturalism, I here explore the possibility of instead seeing it as a new, more expansive form of naturalism—what might be called “Non-Scientistic Naturalism”—inspired by parallel attempts in the philosophy of mind to accommodate irreducibly phenomenal properties within a more expansive physicalism.
This essay continues my investigation of `syntactic semantics': the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: The meaning of any expression is the role that it plays in the complete system of expressions. Such a `narrow', conceptual-role semantics is the appropriate sort of semantics to account (from an `internal', or first-person perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or `wide', semantics, while others have argued for a two-factor analysis. But, although two factors can be specified (one internal and first-person, the other only specifiable in an external, third-person way), only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.
The proper treatment of computationalism, as the thesis that cognition is computable, is presented and defended. Some arguments of James H. Fetzer against computationalism are examined and found wanting, and his positive theory of minds as semiotic systems is shown to be consistent with computationalism. An objection is raised to an argument of Selmer Bringsjord against one strand of computationalism, namely, that Turing-Test-passing artifacts are persons; it is argued that, whether or not this objection holds, such artifacts will inevitably be persons.
The author identifies the structure of Sharon Street's skeptical challenge to non-naturalist, normative epistemic realism as an argument that NNER is liable to reliability defeat and then argues that Street's argument fails, because it itself is subject to reliability defeat. As the author reconstructs Street's argument, it is an argument that the normative epistemic judgments of the realist could only be probabilistically sensitive to normative epistemic truths by sheer chance. The author then recaps Street's own naturalist translation of normative epistemic judgments into purely descriptive, contingent probability statements, and argues that, on her own terms, the reasoning that leads her to rationally believe in evolutionary theory could only be probabilistically sensitive to the relevant purely descriptive, contingent probabilities by sheer chance. The author's argument is addressed to Street, but it applies to all evolutionary naturalist accounts of epistemic rationality. The author explains how his argument differs from Plantinga's Evolutionary Argument Against Naturalism and shows how it avoids the objections to Plantinga's EAAN. The author closes with the outline of an explanation of how evolution could have made human reasoning probabilistically sensitive to metaphysically necessary normative epistemic standards, even though those standards did not exert and, indeed, could not have exerted any kind of causal influence on the evolutionary process.
This paper examines two puzzles of indeterminacy. The first puzzle concerns the hypothesis that there is a unified phenomenon of indeterminacy. How are we to reconcile this with the apparent diversity of reactions that indeterminacy prompts? The second puzzle focuses narrowly on borderline cases of vague predicates. How are we to account for the lack of theoretical consensus about what the proper reaction to borderline cases is? I suggest (building on work by Maudlin) that the characteristic feature of indeterminacy is alethic normative silence, and use this to explain both plurality and lack of consensus.
Ethical realists hold that our ethical concepts, thoughts, and claims are in the business of representing ethical reality, by representing evaluative or normative properties and facts as aspects of reality, and that such representations are at least sometimes accurate. Non-naturalist realists add the further claim that ethical properties and facts are ultimately non-natural, though they are nonetheless worldly. My aim is threefold: to elucidate the sort of representation involved in ethical evaluation on realist views; to clarify what exactly is represented and how non-naturalism comes into the picture for non-naturalists; and to defend worldly non-naturalism against some objections. The first question addressed is how we should model evaluation on any realist view, which should in turn guide the identification of which properties and facts are credibly regarded as ‘evaluative’ ones. Then the question is: what role might non-natural properties and facts play, and how are they related to what is represented in ethical evaluation? Once that is clear, we will be in a position to answer certain objections to non-naturalist realism from Jackson, Gibbard, Bedke, and Dreier. I argue that the objections all mischaracterize the role played by non-natural properties and facts on plausible versions of non-naturalist realism.
Moral exemplar studies of computer and engineering professionals have led ethics teachers to expand their pedagogical aims beyond moral reasoning to include the skills of moral expertise. This paper frames this expanded moral curriculum in a psychologically informed virtue ethics. Moral psychology provides a description of character distributed across personality traits, integration of moral value into the self system, and moral skill sets. All of these elements play out on the stage of a social surround called a moral ecology. Expanding the practical and professional curriculum to cover the skills and competencies of moral expertise converts the classroom into a laboratory where students practice moral expertise under the guidance of their teachers. The good news is that this expanded pedagogical approach can be realized without revolutionizing existing methods of teaching ethics. What is required, instead, is a redeployment of existing pedagogical tools such as cases, professional codes, decision-making frameworks, and ethics tests. This essay begins with a summary of virtue ethics and informs this with recent research in moral psychology. After identifying pedagogical means for teaching ethics, it shows how these can be redeployed to meet a broader, skills-based agenda. Finally, short module profiles offer concrete examples of the shape this redeployed pedagogical agenda would take in the practical and professional ethics classroom.
Syntactic semantics is a holistic, conceptual-role-semantic theory of how computers can think. But Fodor and Lepore have mounted a sustained attack on holistic semantic theories. However, their major problem with holism (that, if holism is true, then no two people can understand each other) can be fixed by means of negotiating meanings. Syntactic semantics and Fodor and Lepore's objections to holism are outlined; the nature of communication, miscommunication, and negotiation is discussed; Bruner's ideas about the negotiation of meaning are explored; and some observations on a problem for knowledge representation in AI raised by Winston are presented.
“Contextual” vocabulary acquisition is the active, deliberate acquisition of a meaning for a word in a text by reasoning from textual clues and prior knowledge, including language knowledge and hypotheses developed from prior encounters with the word, but without external sources of help such as dictionaries or people. But what is “context”? Is it just the surrounding text? Does it include the reader’s background knowledge? I argue that the appropriate context for contextual vocabulary acquisition is the reader’s “internalization” of the text “integrated” into the reader’s “prior” knowledge via belief revision.
A computer can come to understand natural language the same way Helen Keller did: by using “syntactic semantics”—a theory of how syntax can suffice for semantics, i.e., how semantics for natural language can be provided by means of computational symbol manipulation. This essay considers real-life approximations of Chinese Rooms, focusing on Helen Keller’s experiences growing up deaf and blind, locked in a sort of Chinese Room yet learning how to communicate with the outside world. Using the SNePS computational knowledge-representation system, the essay analyzes Keller’s belief that learning that “everything has a name” was the key to her success, enabling her to “partition” her mental concepts into mental representations of: words, objects, and the naming relations between them. It next looks at Herbert Terrace’s theory of naming, which is akin to Keller’s, and which only humans are supposed to be capable of. The essay suggests that computers at least, and perhaps non-human primates, are also capable of this kind of naming.
We present a computational analysis of de re, de dicto, and de se belief and knowledge reports. Our analysis solves a problem first observed by Hector-Neri Castañeda, namely, that the simple rule `(A knows that P) implies P' apparently does not hold if P contains a quasi-indexical. We present a single rule, in the context of a knowledge-representation and reasoning system, that holds for all P, including those containing quasi-indexicals. In so doing, we explore the difference between reasoning in a public communication language and in a knowledge-representation language, we demonstrate the importance of representing proper names explicitly, and we provide support for the necessity of considering sentences in the context of extended discourse (for example, written narrative) in order to fully capture certain features of their semantics.
This project continues our interdisciplinary research into computational and cognitive aspects of narrative comprehension. Our ultimate goal is the development of a computational theory of how humans understand narrative texts. The theory will be informed by joint research from the viewpoints of linguistics, cognitive psychology, the study of language acquisition, literary theory, geography, philosophy, and artificial intelligence. The linguists, literary theorists, and geographers in our group are developing theories of narrative language and spatial understanding that are being tested by the cognitive psychologists and language researchers in our group, and a computational model of a reader of narrative text is being developed by the AI researchers, based in part on these theories and results and in part on research on knowledge representation and reasoning. This proposal describes the knowledge-representation and natural-language-processing issues involved in the computational implementation of the theory; discusses a contrast between communicative and narrative uses of language and of the relation of the narrative text to the story world it describes; investigates linguistic, literary, and hermeneutic dimensions of our research; presents a computational investigation of subjective sentences and reference in narrative; studies children’s acquisition of the ability to take third-person perspective in their own storytelling; describes the psychological validation of various linguistic devices; and examines how readers develop an understanding of the geographical space of a story. This report is a longer version of a project description submitted to NSF. This document, produced in May 2007, is a LaTeX version of Technical Report 89-07 (Buffalo: SUNY Buffalo Department of Computer Science, August 1989), with slightly...
This essay re-examines Meinong's "Über Gegenstandstheorie" and undertakes a clarification and revision of it that is faithful to Meinong, overcomes the various objections to his theory, and is capable of offering solutions to various problems in philosophy of mind and philosophy of language. I then turn to a discussion of a historically and technically interesting Russell-style paradox (now known as "Clark's Paradox") that arises in the modified theory. I also examine the alternative Meinong-inspired theories of Hector-Neri Castañeda and Terence Parsons.
My argument will proceed as follows. I will first sketch out the broad internalist case for pitching its normative account of sport in the abstract manner that, following Dworkin's lead in the philosophy of law, its adherents insist upon. I will next show that the normative deficiencies in social conventions broad internalists uncover are indeed telling but misplaced, since they hold only for what David Lewis famously called 'coordinating' conventions. I will then distinguish coordinating conventions from deep ones and make my case not only for the normative salience of deep conventions but for their normative superiority over the abstract normative principles broad internalists champion.
In a recent article in this journal, Del Mar offered two main criticisms of Marmor’s account of social conventions. The first took issue with Marmor’s claim that the constitutive rules of games and kindred social practices determine in an objective way their central aims and values; the second charged Marmor with scanting the historical context in which conventions do their important normative work in shaping the goals of games. I argue that Del Mar’s criticism of Marmor’s account of the normative centrality and force of constitutive rules in games and the like fails, but that his criticism faulting Marmor for giving short shrift to the normative work conventions do in these social practices is on the mark. So while I reject Del Mar’s claim that a closer look at the social and historical contexts in which the conventions of games and the like carry out their normative tasks undermines Marmor’s account of constitutive rules, I think his argument that conventions play a far more important, even if supplementary, role in shaping our understanding of and participation in these social practices than Marmor allows is persuasive.
This book examines the effects of the market mechanism on economies and societies. It argues that perfect competition has a tendency to promote adulteration of products and a general deterioration in quality. It also contends that it is very difficult for competitive firms to behave in socially desirable ways - being kind to the environment, contributing to worthy social programmes, handling redundancy humanely. The book goes on to propose ways in which these flaws might be remedied without subverting the market mechanism.
This essay considers what it means to understand natural language and whether a computer running an artificial-intelligence program designed to understand natural language does in fact do so. It is argued that a certain kind of semantics is needed to understand natural language, that this kind of semantics is mere symbol manipulation (i.e., syntax), and that, hence, it is available to AI systems. Recent arguments by Searle and Dretske to the effect that computers cannot understand natural language are discussed, and a prototype natural-language-understanding system is presented as an illustration.
The article begins with a review of the structural differences between act consequentialist theories and human rights theories, as illustrated by Amartya Sen's paradox of the Paretian liberal and Robert Nozick's utilitarianism of rights. It discusses attempts to resolve those structural differences by moving to a second-order or indirect consequentialism, illustrated by J.S. Mill and Derek Parfit. It presents consequentialist (though not utilitarian) interpretations of the contractualist theories of Jürgen Habermas and the early John Rawls (Theory of Justice) and of the capability theories of Sen and Martha Nussbaum. It also discusses two roles that well-being or a surrogate for well-being typically plays in theories of human rights: (a) well-being plays a role in the justification of at least some exceptions to human rights principles; and (b) some human rights seem to be best understood as rights to some level of well-being or expected well-being (or of a surrogate for one of them). It reviews two consequentialist challenges to the moral adequacy of non-consequentialist accounts of human rights, one based on a duty to relieve suffering and the other generated by Parfit's Non-Identity Problem, and concludes with a contrast between two ways of looking at the history of the development and implementation of human rights conventions and laws.
In this reply to James H. Fetzer’s “Minds and Machines: Limits to Simulations of Thought and Action”, I argue that computationalism should not be the view that (human) cognition is computation, but that it should be the view that cognition (simpliciter) is computable. It follows that computationalism can be true even if (human) cognition is not the result of computations in the brain. I also argue that, if semiotic systems are systems that interpret signs, then both humans and computers are semiotic systems. Finally, I suggest that minds can be considered as virtual machines implemented in certain semiotic systems, primarily the brain, but also AI computers. In doing so, I take issue with Fetzer’s arguments to the contrary.
Between the opposing claims of reason and religious subjectivity may be a middle ground, William J. Wainwright argues. His book is a philosophical reflection on the role of emotion in guiding reason. There is evidence, he contends, that reason functions properly only when informed by a rightly disposed heart. The idea of passional reason, so rarely discussed today, once dominated religious reflection, and Wainwright pursues it through the writings of three of its past proponents: Jonathan Edwards, John Henry Newman, and William James. He focuses on Edwards, whose work typifies the Christian perspective on religious reasoning and the heart. Then, in his discussion of Newman and James, Wainwright shows how the emotions participate in non-religious reasoning. Finally he takes up the challenges most often posed to notions of passional reason: that such views justify irrationality and wishful thinking, that they can't be defended without circularity, and that they lead to relativism. His response to these charges culminates in an eloquent and persuasive defense of the claim that reason functions best when influenced by the appropriate emotions, feelings, and intuitions.