Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.
A chance-credence norm states how an agent's credences in propositions concerning objective chances ought to relate to her credences in other propositions. The most famous such norm is the Principal Principle (PP), due to David Lewis. However, Lewis noticed that PP is too strong when combined with many accounts of chance that attempt to reduce chance facts to non-modal facts. Those who defend such accounts of chance have offered two alternative chance-credence norms: the first is Hall's and Thau's New Principle (NP); the second is Ismael's General Recipe (IP). Thus, the question arises: Should we adopt NP or IP or both? In this paper, I argue that IP has unacceptable consequences when coupled with reductionism, so we must accept NP alone.
There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential probabilities. But exactly what constraint does a deference requirement place on an agent's credences? In this paper we consider three answers, inspired by three principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Of the three deference principles we consider, we argue that two of the options face insuperable difficulties. The third, on the other hand, fares well, at least when it is applied in a particular way.
Gottlob Frege's Grundgesetze der Arithmetik, or Basic Laws of Arithmetic, was intended to be his magnum opus, the book in which he would finally establish his logicist philosophy of arithmetic. But because of the disaster of Russell's Paradox, which undermined Frege's proofs, the more mathematical parts of the book have rarely been read. Richard G.
So-called 'Frege cases' pose a challenge for anyone who would hope to treat the contents of beliefs (and similar mental states) as Russellian propositions: It is then impossible to explain people's behavior in Frege cases without invoking non-intentional features of their mental states, and doing that seems to undermine the intentionality of psychological explanation. In the present paper, I develop this sort of objection in what seems to me to be its strongest form, but then offer a response to it. I grant that psychological explanation must invoke non-intentional features of mental states, but it is of crucial importance which such features must be referenced. It emerges from a careful reading of Frege's own view that we need only invoke what I call 'formal' relations between mental states. I then claim that referencing such 'formal' relations within psychological explanation does not undermine its intentionality in the way that invoking, say, neurological features would. The central worry about this view is that either (a) 'formal' relations bring narrow content in through the back door or (b) 'formal' relations end up doing all the explanatory work. Various forms of each worry are discussed. The crucial point, ultimately, is that the present strategy for responding to Frege cases is not available to the 'psycho-Fregean', who would identify the content of a belief with its truth-value, nor even to someone who would identify the content of a belief with a set of possible worlds. It requires the sort of rich semantic structure that is distinctive of Russellian propositions. There is therefore no reason to suppose that the invocation of 'formal' relations threatens to deprive content of any work to do.
In Mind and World, John McDowell argues against the view that perceptual representation is non-conceptual. The central worry is that this view cannot offer any reasonable account of how perception bears rationally upon belief. I argue that this worry, though sensible, can be met, if we are clear that perceptual representation is, though non-conceptual, still in some sense 'assertoric': Perception, like belief, represents things as being thus and so.
In this exciting new collection, a distinguished international group of philosophers contribute new essays on central issues in philosophy of language and logic, in honor of Michael Dummett, one of the most influential philosophers of the late twentieth century. The essays are focused on areas particularly associated with Professor Dummett. Five are contributions to the philosophy of language, addressing in particular the nature of truth and meaning and the relation between language and thought. Two contributors discuss time, in particular the reality of the past. The last four essays focus on Frege and the philosophy of mathematics. The volume represents some of the best work in contemporary analytical philosophy.
A brief, non-technical introduction to technical and philosophical aspects of Frege's philosophy of arithmetic. The exposition focuses on Frege's Theorem, which states that the axioms of arithmetic are provable, in second-order logic, from a single non-logical axiom, "Hume's Principle", which itself is: The number of Fs is the same as the number of Gs if, and only if, the Fs and Gs are in one-one correspondence.
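Stated symbolically, Hume's Principle and the one-one correspondence (equinumerosity) condition it invokes can be rendered as follows; this is a standard second-order formalization, and the particular notation is supplied here rather than taken from the text above:

```latex
% Hume's Principle: the number of Fs equals the number of Gs
% iff the Fs and the Gs are in one-one correspondence.
\#x\,Fx = \#x\,Gx \;\leftrightarrow\; F \approx G

% where equinumerosity (F \approx G) is definable in second-order logic:
% some relation R pairs each F with exactly one G, and vice versa.
F \approx G \;\equiv\; \exists R\,\bigl[\forall x\,(Fx \to \exists! y\,(Gy \land Rxy))
    \;\land\; \forall y\,(Gy \to \exists! x\,(Fx \land Rxy))\bigr]
```

Frege's Theorem, as described above, is then the claim that the second-order Peano axioms are derivable from this single principle together with Frege's definitions.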
John Etchemendy has argued that it is but "a fortuitous accident" that Tarski's work on truth has any significance at all for semantics. I argue, in response, that Etchemendy and others, such as Scott Soames and Hilary Putnam, have been misled by Tarski's emphasis on definitions of truth rather than theories of truth and that, once we appreciate how Tarski understood the relation between these, we can answer Etchemendy's implicit and explicit criticisms of neo-Davidsonian semantics.
In The Varieties of Reference, Gareth Evans argues that the content of perceptual experience is nonconceptual, in a sense I shall explain momentarily. More recently, in his book Mind and World, John McDowell has argued that the reasons Evans gives for this claim are not compelling and, moreover, that Evans’s view is a version of “the Myth of the Given”: More precisely, Evans’s view is alleged to suffer from the same sorts of problems that plague sense-datum theories of perception. In particular, McDowell argues that perceptual experience must be within “the space of reasons,” that perception must be able to give us reasons for, that is, to justify, our beliefs about the world: And, according to him, no state that does not have conceptual content can be a reason for a belief. Now, there are many ways in which Evans’s basic idea, that perceptual content is nonconceptual, might be developed; some of these, I shall argue, would be vulnerable to the objections McDowell brings against him. But I shall also argue that there is a way of developing it that is not vulnerable to these objections.
Hartry Field has suggested that we should adopt at least a methodological deflationism: [W]e should assume full-fledged deflationism as a working hypothesis. That way, if full-fledged deflationism should turn out to be inadequate, we will at least have a clearer sense than we now have of just where it is that inflationist assumptions ... are needed. I argue here that we do not need to be methodological deflationists. More precisely, I argue that we have no need for a disquotational truth-predicate; that the word 'true', in ordinary language, is not a disquotational truth-predicate; and that it is not at all clear that it is even possible to introduce a disquotational truth-predicate into ordinary language. If so, then we have no clear sense of how it is even possible to be a methodological deflationist. My goal here is not to convince a committed deflationist to abandon his or her position. My goal, rather, is to argue, contrary to what many seem to think, that reflection on the apparently trivial character of T-sentences should not incline us to deflationism.
My purpose is to account for some oddities in what Kant did and did not say about "moral worth," and for another in what commentators tell us about his intent. The stone with which I hope to dispatch these several birds is, as one would expect a philosopher's stone to be, a distinction. I distinguish between two things Kant might have had in mind under the heading of moral worth. They come readily to mind when one both takes account of what he actually said about it and notices a fact which he did not seem to notice: namely, that dutiful action (action which, whatever its motive, fulfills a duty) can be overdetermined, and determined in particular by both respect for duty and some consortium of inclinations and prudence.
In recent work on Frege, one of the most salient issues has been whether he was prepared to make serious use of semantical notions such as reference and truth. I argue here that Frege did make very serious use of semantical concepts. I argue, first, that Frege had reason to be interested in the question of how the axioms and rules of his formal theory might be justified and, second, that he explicitly commits himself to offering a justification that appeals to the notion of reference. I then discuss the justifications Frege offered, focusing on his discussion of inferences involving free variables, in section 17 of Grundgesetze, and his argument, in sections 29-32, that every well-formed expression of his formal language has a unique reference.
The paper formulates and proves a strengthening of ‘Frege’s Theorem’, which states that axioms for second-order arithmetic are derivable in second-order logic from Hume’s Principle, which itself says that the number of Fs is the same as the number of Gs just in case the Fs and Gs are equinumerous. The improvement consists in restricting this claim to finite concepts, so that nothing is claimed about the circumstances under which infinite concepts have the same number. ‘Finite Hume’s Principle’ also suffices for the derivation of axioms for arithmetic and, indeed, is equivalent to a version of them, in the presence of Frege’s definitions of the primitive expressions of the language of arithmetic. The philosophical significance of this result is also discussed.
The 'substitution argument' purports to demonstrate the falsity of Russellian accounts of belief-ascription by observing that, e.g., these two sentences: (LC) Lois believes that Clark can fly. (LS) Lois believes that Superman can fly. could have different truth-values. But what is the basis for that claim? It seems widely to be supposed, especially by Russellians, that it is simply an 'intuition', one that could then be 'explained away'. And this supposition plays an especially important role in Jennifer Saul's defense of Russellianism, based upon the existence of an allegedly similar contrast between these two sentences: (PC) Superman is more popular than Clark. (PS) Superman is more popular than Superman. The latter contrast looks pragmatic. But then, Saul asks, why shouldn't we say the same about the former? The answer to this question is that the two cases simply are not similar. In the case of (PC) and (PS), we have only the facts that these strike us differently, and that people will sometimes say things like (PC), whereas they will never say things like (PS). By contrast, there is an argument to be given that (LS) can be true even if (LC) is false, and this argument does not appeal to anyone's 'intuitions'. The main goal of the paper is to present such a version of the substitution argument, building upon the treatment of the Fregean argument against Russellian accounts of belief itself in "Solving Frege's Puzzle". A subsidiary goal is to contribute to the growing literature arguing that 'intuitions' simply do not play the sort of role in philosophical inquiry that so-called 'experimental philosophers' have supposed they do.
As fundamental researchers in the neuroethology of efference copy, we were stimulated by Grush's bold and original synthesis. In the following critique, we draw attention to ways in which it might be tested in the future, we point out an avoidable conceptual error concerning emulation that Grush seems to share with other workers in the field, and we raise questions about the neural correlates of Grush's schemata that might be probed by neurophysiologists.
The purpose of this note is to present a strong form of the liar paradox. It is strong because the logical resources needed to generate the paradox are weak, in each of two senses. First, few expressive resources are required: conjunction, negation, and identity. In particular, this form of the liar does not need to make any use of the conditional. Second, few inferential resources are required. These are: (i) conjunction introduction; (ii) substitution of identicals; and (iii) the inference: From ¬(p ∧ p), infer ¬p. It is, interestingly enough, also essential to the argument that the ‘strong’ form of the diagonal lemma be used: the one that delivers a term λ such that we can prove: λ = ⌈¬T(λ)⌉; rather than just a sentence Λ for which we can prove: Λ ≡ ¬T(⌈Λ⌉). The truth-theoretic principles used to generate the paradox are these: ¬(S ∧ T(⌈¬S⌉)); and ¬(¬S ∧ ¬T(⌈¬S⌉)). These are classically equivalent to the two directions of the T-scheme, but they are intuitively weaker. The lesson I would like to draw is: There can be no consistent solution to the Liar paradox that does not involve abandoning truth-theoretic principles that should be every bit as dear to our hearts as the T-scheme. So we shall have to learn to live with the Liar, one way or another.
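For concreteness, here is one natural reconstruction of how a derivation from these resources might run; the particular instantiation chosen is an illustration supplied here, not quoted from the text:

```latex
% Strong diagonal lemma: a term \lambda with
%   \lambda = \ulcorner \neg T(\lambda) \urcorner.
% Instantiate both principles with S := T(\lambda), so that
% \ulcorner \neg S \urcorner = \lambda by substitution of identicals.
\neg(T(\lambda) \land T(\lambda))
    % first principle, after substitution
\neg T(\lambda)
    % by rule (iii), with p := T(\lambda)
\neg(\neg T(\lambda) \land \neg T(\lambda))
    % second principle, after substitution
\neg\neg T(\lambda)
    % by rule (iii), with p := \neg T(\lambda)
\neg T(\lambda) \land \neg\neg T(\lambda)
    % conjunction introduction: contradiction
```

Note that only conjunction introduction, substitution of identicals, and rule (iii) are used, and that the substitution step is exactly where the strong form of the diagonal lemma, which delivers a term rather than a biconditional, does its work.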
Some years ago, Machery, Mallon, Nichols, and Stich reported the results of experiments that reveal, they claim, cross-cultural differences in speaker’s ‘intuitions’ about Kripke’s famous Gödel–Schmidt case. Several authors have suggested, however, that the question they asked their subjects is ambiguous between speaker’s reference and semantic reference. Machery and colleagues have since made a number of replies. It is argued here that these are ineffective. The larger lesson, however, concerns the role that first-order philosophy should, and more importantly should not, play in the design of such experiments and in the evaluation of their results.
Depression and the subjective experience of suffering are distinct forms of distress, but they are sometimes commingled with one another. Using a cross-sectional sample of flight attendants, we tested for further empirical evidence distinguishing depression and suffering. Correlations with 15 indices covering several dimensions of well-being indicated that associations with worse well-being were mostly stronger for depression than suffering. There was a large positive correlation between depression and suffering, but we also found evidence of notable non-concurrent depression and suffering in the sample. After dividing participants into four groups that varied based on severity of depression and suffering, regression analyses showed higher levels of well-being among those with both none-mild depression and none-mild suffering compared to those with moderate-severe depression, moderate-severe suffering, or both. All indices of well-being were lowest among the group of participants with moderate-severe depression and moderate-severe suffering. In addition to providing further evidence supporting a distinction between depression and suffering, our findings suggest that concurrent depression and suffering may be more disruptive to well-being than when either is present alone.
Contrary to a widespread belief, Darwinism and its twentieth-century extension, neo-Darwinism, do not rest on a view of evolution founded on the simple notion of 'survival of the fittest'. While the theory of natural selection is an integral part of neo-Darwinism, several of its founders sought a much more generous, full, and comprehensive conception of evolution. In reality, the so-called Darwinian revolution lies at the heart of a far more important intellectual revolution: the transformist revolution. Before being Darwinians, worthy representatives of this movement presented themselves as transformists. This means that, in addition to the mechanisms of biological evolution, other equally crucial elements were taken into consideration in the elaboration of a genuine evolutionary synthesis: the relations between biological evolution and cosmic evolution; questions concerning a possible evolutionary direction; and the lessons humanity should draw from its place and role in nature. At the crossroads of history, philosophy, and science, this book seeks to show, through an analysis of the work of several leading neo-Darwinians, that the Darwinian revolution will remain incomplete as long as the transformist revolution does.
In his paper “Flaws of Formal Relationism”, Mahrad Almotahari argues against the sort of response to Frege's Puzzle I have defended elsewhere, which he dubs ‘Formal Relationism’. Almotahari argues that, because of its specifically formal character, this view is vulnerable to objections that cannot be raised against the otherwise similar Semantic Relationism due to Kit Fine. I argue in response that Formal Relationism has neither of the flaws Almotahari claims to identify.
Why do people in more unequal societies have worse health and shorter lives than those in less unequal ones? Why do more unequal societies tend to have more violence and weaker community life? This paper discusses the research evidence on the psychosocial pathways which suggest how and why we are affected by inequality. How big income differences are in any society seems to serve as an indicator of the scale of social differentiation and social distances within it. The evidence shows that more hierarchical societies incur a wide range of social costs reflecting the corrosive effects of inequality. But why are we so sensitive to inequality? Epidemiological research on health inequalities and the social determinants of health has demonstrated that the quality of the social environment has powerful effects on health. Particularly important are social status, friendship, and early childhood experience. The indications are that poor health may share causal pathways with many other social problems associated with relative deprivation, including violence. Summarizing my recent book, The Impact of Inequality, this paper provides an account of how inequality gets under the skin to affect both health and wellbeing. Rather than making comparisons with some impractical state of complete equality, all the evidence presented shows the importance of the differences in inequality between different states of the USA or between different developed market democracies: it shows that even small increases in equality matter.
The 1920-1960 period saw the creation of the conditions for a unification of disciplines in the area of evolutionary biology under a limited number of theoretical prescriptions: the evolutionary synthesis. Whereas the sociological dimension of this synthesis was fairly successful, it was surprisingly loose when it came to the interpretation of the evolutionary mechanisms per se, and completely lacking at the level of the foundational epistemological and metaphysical commitments. Key figures such as Huxley, Simpson, Dobzhansky, and Rensch only paid lip service to the conceptual dimension of the evolutionary synthesis, as they eventually realized that a number of evolutionary phenomena could not be explained by its narrow theoretical corpus. Apparently, the evolutionary synthesis constituted a premature event in the development of evolutionary biology. Not only are the real achievements of the evolutionary synthesis in need of reevaluation, but this reassessment also has important implications for the historiography of Darwinism and the current debates about the Darwinian movement.
Frege, famously, held that there is a close connection between our concept of cardinal number and the notion of one-one correspondence, a connection enshrined in Hume's Principle. Husserl, and later Parsons, objected that there is no such close connection, that our most primitive conception of cardinality arises from our grasp of the practice of counting. Some empirical work on children's development of a concept of number has sometimes been thought to point in the same direction. I argue, however, that Frege was close to right, that our concept of cardinal number is closely connected with a notion like that of one-one correspondence, a more primitive notion we might call 'just as many'.
Øystein Linnebo has recently shown that the existence of successors cannot be proven in predicative Frege arithmetic, using Frege’s definitions of arithmetical notions. By contrast, it is shown here that the existence of successors can be proven in ramified predicative Frege arithmetic.
This book by Richard G. Stevens is a comprehensive introduction to the nature of political philosophy. It offers definitions of philosophy and politics, showing the tension between the two and the origin of political philosophy as a means of resolution of that tension. Plato and Aristotle are examined in order to see the search for the best political order. Inquiry is then made into political philosophy's new tension brought about by the growth of revealed religion in the Middle Ages. It then examines the changes introduced by modernity and gives an overview of postmodern political thought. The book covers the most influential philosophers and directs readers to the classics of political philosophy, guiding them in studying them. It is an approachable introduction to a complex subject, not just a history of it. It is a point of entry into the subject for students and for others as well.
The main focus of my comments is the role played in Dickie's view by the idea that "the mind has a need to represent things outside itself". But there are also some remarks about her (very interesting) suggestion that descriptive names can sometimes fail to refer to the object that satisfies the associated description.
Written as a comment on Crispin Wright's "Vagueness: A Fifth Column Approach", this paper defends a form of supervaluationism against Wright's criticisms. Along the way, however, it takes up the questions of what is really wrong with Epistemicism, how the appeal of the Sorites ought properly to be understood, and why Contextualist accounts of vagueness won't do.
Psychoanalysis is often equated with Sigmund Freud, but this comparison ignores the wide range of clinical practices, observational methods, general theories, and cross-pollinations with other disciplines that characterise contemporary psychoanalytic work. Central psychoanalytic concepts to do with unconscious motivation, primitive forms of thought, defence mechanisms, and transference form a mainstay of today's richly textured contemporary clinical psychological practice. In this landmark collection on philosophy and psychoanalysis, leading researchers provide an evaluative overview of current thinking. Written at the interface between these two disciplines, the Oxford Handbook of Philosophy and Psychoanalysis contains original contributions that will shape the future of debate. With 34 chapters divided into eight sections covering history, clinical theory, phenomenology, science, aesthetics, religion, ethics, and political and social theory, this Oxford Handbook displays the enduring depth, breadth, and promise of integrating philosophical and psychoanalytic thought. Anyone interested in the philosophical implications of psychoanalysis, as well as philosophical challenges to and re-statements of psychoanalysis, will want to consult this book. It will be a vital resource for academic researchers, psychoanalysts and other mental health professionals, graduates, and trainees.