Many philosophical naturalists eschew analysis in favor of discovering metaphysical truths a posteriori, contending that analysis does not lead to philosophical insight. A countercurrent to this approach seeks to reconcile a certain account of conceptual analysis with philosophical naturalism; prominent and influential proponents of this methodology include the late David Lewis, Frank Jackson, Michael Smith, Philip Pettit, and David Armstrong. Naturalistic analysis is a tool for locating in the scientifically given world the objects and properties we quantify over in everyday discourse. This collection gathers work from a range of prominent philosophers working within this tradition, offering important new work as well as critical evaluations of the methodology. Its centerpiece is an important posthumous paper by David Lewis, "Ramseyan Humility," published here for the first time. The contributors first address issues of philosophy of mind, semantics, and the new methodology's a priori character, then turn to matters of metaphysics, and finally consider problems regarding normativity. Conceptual Analysis and Philosophical Naturalism is one of the first efforts to apply this approach to such a wide range of philosophical issues. _Contributors:_ David Braddon-Mitchell, Mark Colyvan, Frank Jackson, Justine Kingsbury, Fred Kroon, David Lewis, Dustin Locke, Kelby Mason, Jonathan McKeown-Green, Peter Menzies, Robert Nola, Daniel Nolan, Philip Pettit, Huw Price, Denis Robinson, Steve Stich, Daniel Stoljar.
Part 1: Metaphysics and Conceptual Analysis
1. Analysis, description and the a priori?, Simon Blackburn
2. Physicalism, conceptual analysis and acts of faith, Jennifer Hornsby
3. Serious metaphysics: Frank Jackson’s defense of conceptual analysis, William G. Lycan
4. Jackson’s classical model of meaning, Laura Schroeter & John Bigelow
5. The semantic foundations of metaphysics, Huw Price
6. The folk theory of colours and the causes of colour experience, Peter Menzies
Part 2: The Knowledge Argument
7. Consciousness and the frustrations of physicalism, Philip Pettit
8. Jackson’s change of mind: representationalism, a priorism and the knowledge argument, Robert Van Gulick
Part 3: Ethics
9. Analytic moral functionalism meets moral twin earth, Terrence Horgan & Mark Timmons
10. Consequentialism and the nearest and dearest objection, Michael Smith
11. The ‘actual’ in actualism, Julia Driver
Part 4: Conditionals and the Purposes of Arguing
12. Conditionals, truth and assertion, Dorothy Edgington
13. Conditionals: A debate with Jackson, Graham Priest
14. Two purposes of arguing and two epistemic projects, Martin Davies
Replies to my critics, Frank Jackson.
[Robert Stalnaker] Saul Kripke made a convincing case that there are necessary truths that are knowable only a posteriori as well as contingent truths that are knowable a priori. A number of philosophers have used a two-dimensional modal semantic apparatus to represent and clarify the phenomena that Kripke pointed to. According to this analysis, statements have truth-conditions in two different ways depending on whether one considers a possible world 'as actual' or 'as counterfactual' in determining the truth-value of the statement relative to that possible world. There are no necessary a posteriori or contingent a priori propositions: rather, contingent a priori and necessary a posteriori statements are statements that are necessary when evaluated one way, and contingent when evaluated the other way. This paper distinguishes two ways that the two-dimensional framework can be interpreted, and argues that one of them gives the better account of what it means to 'consider a world as actual', but that it provides no support for any notion of purely conceptual a priori truth. /// [Thomas Baldwin] Two-dimensional possible world semantic theory suggests that Kripke's examples of the necessary a posteriori and contingent a priori should be handled by interpreting names as implicitly indexical. Like Stalnaker, I reject this account of names and accept that Kripke's examples have to be accommodated within a metasemantic theory. But whereas Stalnaker maintains that a metasemantic approach undermines the conception of a priori truth, I argue that it offers the opportunity to develop a conception of the a priori aspect of stipulations, conceived as linguistic performances. The resulting position accommodates Kripke's examples in a way which is both intrinsically plausible and fits with Kripke's actual discussion of them.
The so-called Canberra Plan is a grandchild of the Ramsey-Carnap treatment of theoretical terms. In its original form, the Ramsey-Carnap approach provided a method for analysing the meaning of scientific terms, such as “electron”, “gene” and “quark”—terms whose meanings could plausibly be delineated by their roles within scientific theories. But in the hands of David Lewis (1970, 1972), the original approach begat a more ambitious descendant, generalised and extended in two distinct ways: first, Lewis applied the technique to analyse the meaning of terms introduced not just by explicit scientific theories, but also by implicit folk theories such as folk psychology; second, he supplemented the theory to provide an account of the way in which the referents of the analysed terms might be identified on the basis of empirical investigation.
"The availability of a paperback version of Boyle's philosophical writings selected by M. A. Stewart will be a real service to teachers, students, and scholars with seventeenth-century interests. The editor has shown excellent judgment in bringing together many of the most important works and printing them, for the most part, in unabridged form. The texts have been edited responsibly with emphasis on readability.... Of special interest in connection with Locke and with the reception of Descartes's corpuscularianism, to students of the Scientific Revolution and of the history of mechanical philosophy, and to those interested in the relations among science, philosophy, and religion. In fact, given the imperfections in and unavailability of the eighteenth-century editions of Boyle’s works, this collection will benefit a wide variety of seventeenth-century scholars." --Gary Hatfield, University of Pennsylvania.
Hegel’s True Infinite is “well known” but there is little consensus concerning its meaning. The true infinite is introduced in Hegel’s deconstruction of traditional conceptions of quality, determinacy and reality as wholly positive and from which negation, limitation and determinacy are excluded. Everything is other than and unrelated to everything else. These assumptions yield the stubborn category of finitude as an absolute limit, and of God as an abstract unknowable Beyond. But Hegel claims that every attempt to separate the infinite from the finite makes the infinite itself finite—the spurious infinite, the “ought.” The true infinite is the negation/correction of the spurious infinite; it reinstates the relations suppressed by the understanding. The true infinite is an ontotheological conception of a social infinite: it is both absolute—in and for itself—and related—being for an other—to wit, an articulated, inclusive whole. It is not an acosmic pantheism like Spinoza’s that defrauds difference and finitude of their due. The true infinite presupposes as its corollary the ideality of the finite. The latter articulates the ontological status of the finite as sublated in the true infinite, i.e. as a member both distinct from and related to the true infinite. The true infinite is the whole present in its members. The true infinite is neither traditional theism, nor atheism nor pantheism, nor a projection of finitude. It is best understood as panentheism.
Drawing on Aristotle’s notion of “ultimate responsibility,” Robert Kane argues that to be exercising a free will an agent must have taken some character-forming decisions for which there were no sufficient conditions or decisive reasons. That is, an agent whose will is free not only had the ability to develop other dispositions, but could have exercised that ability without being irrational. To say it again, a person has a free will just in case her character is the product of decisions that she could have rationally avoided making. That one’s character is the product of such decisions entails ultimate responsibility for its manifestations, engendering a free will.
Winner of the 1975 National Book Award, this brilliant and widely acclaimed book is a powerful philosophical challenge to the most widely held political and social positions of our age--liberal, socialist, and conservative.
Since the publication of Edmund Gettier's challenge to the traditional epistemological doctrine of knowledge as justified true belief, Roberts and Wood claim that epistemologists lapsed into despondency and are currently open to novel approaches. One such approach is virtue epistemology, which can be divided into accounts of virtues as proper functions or as epistemic character traits. The authors propose a notion of regulative epistemology, as opposed to a strict analytic epistemology, based on intellectual virtues that function not as rules or even as skills but as habits of the heart. To that end, they divide the task of clarifying and expounding their notion in the book's two parts. In the first part, Roberts and Wood examine various components that constitute their notion of regulative epistemology. The first are the epistemic goods or goals that drive the epistemic process. What is needed, claim Roberts and Wood, is an enriched notion of these goods rather than the restricted notion of justified true belief. Epistemic agents are more than calculating devices in that ….
Kelly Aguirre, Phil Henderson, Cressida J. Heyes, Alana Lentin, and Corey Snelgrove engage with different aspects of Robert Nichols’ Theft is Property! Dispossession and Critical Theory. Henderson focuses on possible spaces for maneuver, agency, contradiction, or failure in subject formation available to individuals and communities interpellated through diremptive processes. Heyes homes in on the ritual of antiwill called “consent” that systematically conceals the operation of power. Aguirre foregrounds tensions in projects of critical theory scholarship that aim for dialogue and solidarity with Indigenous decolonial struggles. Lentin draws attention to the role of race in undergirding the logic of Anglo-settler colonial domination that operates through dispossession, while Snelgrove emphasizes the link between alienation, capital, and colonialism. In his reply to his interlocutors, Nichols clarifies aspects of his “recursive logics” of dispossession, a dispossession or theft through which the right to property is generated.
Delusional beliefs have sometimes been considered as rational inferences from abnormal experiences. We explore this idea in more detail, making the following points. Firstly, the abnormalities of cognition which initially prompt the entertaining of a delusional belief are not always conscious, and since we prefer to restrict the term “experience” to consciousness we refer to “abnormal data” rather than “abnormal experience”. Secondly, we argue that in relation to many delusions (we consider eight) one can clearly identify what the abnormal cognitive data are which prompted the delusion and what the neuropsychological impairment is which is responsible for the occurrence of these data; but one can equally clearly point to cases where this impairment is present but delusion is not. So the impairment is not sufficient for delusion to occur. A second cognitive impairment, one which impairs the ability to evaluate beliefs, must also be present. Thirdly (and this is the main thrust of our chapter) we consider in detail what the nature of the inference is that leads from the abnormal data to the belief. This is not deductive inference and it is not inference by enumerative induction; it is abductive inference. We offer a Bayesian account of abductive inference and apply it to the explanation of delusional belief.
Contemporary commentators on Hume's essay ‘Of miracles’ have increasingly tended to argue that Hume never intended to suggest that testimonial evidence must always be insufficient to justify belief in a miracle. This is in marked contrast to earlier commentators who interpreted Hume as intending to demonstrate that testimonial evidence is incapable in principle of ever establishing rational belief in a miracle. In this article I argue that this traditional interpretation is the correct one.
Robert Brandom's latest book, the product of his John Locke lectures in Oxford in 2006, is a return to the philosophy of language and is easily read as a continuation and development of the views defended in Making It Explicit. The lectures are presented much as they were delivered, but the book contains an ‘Afterword’ of more than 30 pages which responds to questions raised when he gave the lectures, and also when they were subsequently delivered in Prague the following year. The published text also contains relatively technical appendices to two of the lectures. The individual lectures engage with some important and difficult issues, often ones that were explored in detail in the earlier book. However, these discussions are located within a broader meta-philosophical context, and it says something about the abstract and difficult character of these views that they provide the main subject matter of the Afterword. This framework affects how we should understand the relations between this book and Making It Explicit too. Although most of the detailed discussions happily belong within the general project of the earlier book, they are offered as illustrations of a framework that is independent of this project. Indeed, Brandom suggests that defenders of the semantic views of David Lewis, for example, could embrace his main message as well as those who favour Brandom's own form of pragmatism. Neo-pragmatist philosophers such as Brandom's teacher, Richard Rorty, often present themselves as rejecting the analytic tradition in philosophy. When Brandom describes the ‘pragmatist challenge’ to the ‘classical project of analysis’, he appeals to the criticisms found in the work of Wittgenstein and Sellars that are often appealed to by the critics of the analytic tradition. The message of the new book is that the views he has built on this ….
The traditional problem of evil is set forth, by no means for the first time, in Part X of Hume's Dialogues Concerning Natural Religion in these familiar words: ‘Is [God] willing to prevent evil, but not able? then he is impotent. Is he able, but not willing? then he is malevolent. Is he both able and willing? whence then is evil?’ This formulation of the problem of evil obviously suggests an argument to the effect that the existence of evil in the world demonstrates that God does not exist. The purpose of this paper is to examine this argument, with a view to showing that while it is not a conclusive argument, it is much stronger than some apologists for traditional theism allow.
In this interview, Lani Roberts provides a philosophical justification for the study of diversity issues and highlights the pedagogical methods needed to prepare students to live and thrive in a diverse society. This article is a partial transcript of a recorded interview.
What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It’s also highly likely to impact on the organisational and business practices of healthcare systems in ways that are perhaps under-appreciated. Enthusiasts for AI have held out the prospect that it will free physicians up to spend more time attending to what really matters to them and their patients. We will argue that this claim depends upon implausible assumptions about the institutional and economic imperatives operating in contemporary healthcare settings. We will also highlight important concerns about privacy, surveillance, and bias in big data, as well as the risks of over-trust in machines, the challenges of transparency, the deskilling of healthcare practitioners, the way AI reframes healthcare, and the implications of AI for the distribution of power in healthcare institutions. We will suggest that two questions, in particular, are deserving of further attention from philosophers and bioethicists. What does care look like when one is dealing with data as much as people? And, what weight should we give to the advice of machines in our own deliberations about medical decisions?
Drawing on Aristotle’s notion of “ultimate responsibility,” Robert Kane argues that to be exercising a free will an agent must have taken some character-forming decisions for which there were no sufficient conditions or decisive reasons. That is, an agent whose will is free not only had the ability to develop values and beliefs besides those that presently make up her motives, but could have exercised that ability without being irrational. An agent wills freely, on this view, by being ultimately responsible for how she is currently disposed to act. Kane needs, then, to show how an agent could be responsible for decisions that her deliberations did not guarantee. He must also explain how a decision for which there is no decisive reason could yet be rational, assuming that the responsibility-engendering decisions forming the basis of a free will would be rational. I shall argue here that Kane has achieved neither of these goals.
In this paper we defend the view that the ordinary notions of cause and effect have a direct and essential connection with our ability to intervene in the world as agents. This is a well-known but rather unpopular philosophical approach to causation, often called the manipulability theory. In the interests of brevity and accuracy, we prefer to call it the agency theory. Thus the central thesis of an agency account of causation is something like this: an event A is a cause of a distinct event B just in case bringing about the occurrence of A would be an effective means by which a free agent could bring about the occurrence of B. In our view the unpopularity of the agency approach to causation may be traced to two factors. The first is a failure to appreciate certain distinctive advantages that this approach has over its various rivals. We have drawn attention to some of these advantages elsewhere, and we summarize them below. However, the second and more important factor is the influence of a number of stock objections, objections that seem to have persuaded many philosophers that agency accounts face insuperable obstacles. In this paper we want to show that these objections have been vastly overrated. There are four main objections.
In his paper ‘Miracles: metaphysics, physics, and physicalism’, Kirk McDermid appears to have two primary goals. The first is to demonstrate that my account of how God might produce a miracle without violating any laws of nature is radically flawed. The second is to suggest two alternative accounts, one suitable for a deterministic world, one suitable for an indeterministic world, which allow for the occurrence of a miracle without violation of the laws of nature, yet do not suffer from the defects of what McDermid terms the ‘Larmerian’ model. I briefly describe my model, reply to McDermid's criticism of it, and evaluate his alternative accounts.
Among moral attributes true virtue alone is sublime. … [I]t is only by means of this idea [of virtue] that any judgment as to moral worth or its opposite is possible. … Everything good that is not based on a morally good disposition … is nothing but pretence and glittering misery.
The deflationary aim of this book, which occupies Part I, is to show that a widely held view has little to be said for it. The constructive aim, pursued in Part II, is to make plausible a measure-theoretic account of propositional attitudes. The discussion is instructive throughout, illuminating and sensitive to the many intricacies surrounding attitude ascriptions and how they can carry information about a subject's psychology. There is close engagement with cognitive science. The book should be read by anyone seriously engaged with issues about propositional attitudes. According to the widely held view, which Matthews calls the Received View, the attitude of Φing that p is a matter of standing in a computational/functional relation to an explicit Representation that expresses the proposition that p, and thinking is ‘an inferential computational process defined over one or more of these Representations that eventuates in the production of either another Representation or a behavior’. The representations are understood to be sentences in a language of thought and thus to have a compositional syntax and semantics. The theory that Matthews aims to make plausible has it that ascriptions of propositional attitudes in the form ‘X Φs that p’ ascribe a state to a person by relating that person to an abstract object that is the representative of the state in roughly the way that numbers on a scale are the measure-theoretic representations of certain physical magnitudes. We are to think of the role of ‘Jones believes that interest rates will fall’ by analogy with that of ‘Jones weighs 150 lbs’. The latter depends on there being arithmetical relations defined over numbers that enable its particular assignment of a number to Jones's weight to represent physical properties that Jones has in virtue ….