This article responds to Terry Eagleton's claim that Spivak's latest book, A Critique of Postcolonial Reason, works against the intent of postcolonial criticism. Reading the work as a search for a just representational strategy, we explore the implications of Spivak's engagement with philosophy - Kant, Hegel, and Marx. As a disciplinary machine, philosophy produces Western subjects who are engendered by simultaneously including and excluding the other. Working through this production of the double location of the 'other', we suggest that systematic thought is inhabited by an absence that is present within, a disturbing otherness that ultimately questions authority and stability, and opens up the question of politics and representation. Drawing Spivak into the representational problematic opened up by Lyotard, we suggest that a responsible postcolonial intervention can be performed in the difficult exergue between representability and unrepresentability. In this account, representation is open to invention, to finding new idioms for articulating otherness.
Shaun Nichols offers a naturalistic, psychological account of the origins of the problem of free will. He argues that our belief in indeterminist choice is grounded in faulty inference and therefore unjustified, goes on to suggest that there is no single answer to whether free will exists, and promotes a pragmatic approach to prescriptive issues.
Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that's increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what's happening to our lives as society embraces big data, predictive analytics, and smart environments. They explain how the goal of designing programmable worlds goes hand in hand with engineering predictable and programmable people. Detailing new frameworks, provocative case studies, and mind-blowing thought experiments, Frischmann and Selinger reveal hidden connections between fitness trackers, electronic contracts, social media platforms, robotic companions, fake news, autonomous cars, and more. This powerful analysis should be read by anyone interested in understanding exactly how technology threatens the future of our society, and what we can do now to build something better.
Kelly Aguirre, Phil Henderson, Cressida J. Heyes, Alana Lentin, and Corey Snelgrove engage with different aspects of Robert Nichols’ Theft is Property! Dispossession and Critical Theory. Henderson focuses on possible spaces for maneuver, agency, contradiction, or failure in subject formation available to individuals and communities interpellated through diremptive processes. Heyes homes in on the ritual of antiwill called “consent” that systematically conceals the operation of power. Aguirre foregrounds tensions in projects of critical theory scholarship that aim for dialogue and solidarity with Indigenous decolonial struggles. Lentin draws attention to the role of race in undergirding the logic of Anglo-settler colonial domination that operates through dispossession, while Snelgrove emphasizes the link between alienation, capital, and colonialism. In his reply to his interlocutors, Nichols clarifies aspects of his “recursive logics” of dispossession, a dispossession or theft through which the right to property is generated.
Drawing on recent advances in evolutionary biology, prominent scholars return to the question posed in a pathbreaking book: how evolution itself evolved.
This is a book about the theory of the city or commonwealth, what would come to be called the state, in early modern natural law discourse. Annabel Brett takes a fresh approach by looking at this political entity from the perspective of its boundaries and those who crossed them. She begins with a classic debate from the Spanish sixteenth century over the political treatment of mendicants, showing how cosmopolitan ideals of porous boundaries could simultaneously justify the freedoms of itinerant beggars and the activities of European colonists in the Indies. She goes on to examine the boundaries of the state in multiple senses, including the fundamental barrier between human beings and animals and the limits of the state in the face of the natural lives of its subjects, as well as territorial frontiers. Drawing on a wide range of authors, Brett reveals how early modern political space was constructed from a complex dynamic of inclusion and exclusion. Throughout, she shows that early modern debates about political boundaries displayed unheralded creativity and virtuosity but were nevertheless vulnerable to innumerable paradoxes, contradictions, and loose ends. Changes of State is a major work of intellectual history that resonates with modern debates about globalization and the transformation of the nation-state.
Emotion Review, Volume 14, Issue 3, Page 167-181, July 2022. A growing cadre of influential scholars has converged on a circumscribed definition of empathy as restricted only to feeling the same emotion that one perceives another is feeling. We argue that this restrictive isomorphic matching (RIM) definition is deeply problematic because it deviates dramatically from traditional conceptualizations of empathy and unmoors the construct from generations of scientific research and clinical practice; insistence on an isomorphic form undercuts much of the functional value of empathy from multiple perspectives of analysis; and combining the opposing concepts of isomorphic matching and self-other awareness implicitly requires motivational content, causing the RIM definition to implicitly require the kind of non-matching emotional content that it explicitly seeks to exclude.
Williams (1970) argues that our intuitions about personal identity vary depending on how a given thought experiment is framed. Some frames lead us to think that persistence of self requires persistence of one's psychological characteristics; other frames lead us to think that the self persists even after the loss of one's distinctive psychological characteristics. The current paper takes an empirical approach to these issues. We find that framing does affect whether or not people judge that persistence of psychological characteristics is required for persistence of self. Open-ended, abstract questions about what is required for survival tend to elicit responses that appeal to the importance of psychological characteristics. This emphasis on psychological characteristics is largely preserved even when participants are exposed to a concrete case that yields conflicting intuitions over whether memory must be preserved in order for a person to persist. Insofar as our philosophical theory of personal identity should be based on our intuitions, the results provide some support for the view that psychological characteristics really are critical for persistence of self.
Which rules should guide our reasoning? Human reasoners often use reasoning shortcuts, called heuristics, which function well in some contexts but lack the universality of reasoning rules like deductive implication or inference to the best explanation. Does it follow that human reasoning is hopelessly irrational? I argue: no. Heuristic reasoning often represents human reasoners reaching a local rational maximum, reasoning more accurately than if they try to implement more “ideal” rules of reasoning. I argue this is a genuine rational achievement. Our ideal rational advisors would advise us to reason with heuristic rules, not more complicated ideal rules. I argue we do not need a radical new account of epistemic norms to make sense of the success of heuristic reasoning.
Hilary Greaves and David Wallace argue that conditionalization maximizes expected accuracy and so is a rational requirement, but their argument presupposes a particular picture of the bridge between rationality and accuracy: the Best-Plan-to-Follow picture. And theorists such as Miriam Schoenfield and Robert Steel argue that it's possible to motivate an alternative picture—the Best-Plan-to-Make picture—that does not vindicate conditionalization. I show that these theorists are mistaken: it turns out that, if an update procedure maximizes expected accuracy on the Best-Plan-to-Follow picture, it's guaranteed to maximize expected accuracy on the Best-Plan-to-Make picture as well, in which case moving from the former to the latter can't help us avoid the conclusion that conditionalization is a rational requirement. If there's a problem with Greaves and Wallace’s argument, it must lie elsewhere.
A Benacerraf–Field challenge is an argument intended to show that common realist theories of a given domain are untenable: such theories make it impossible to explain how we’ve arrived at the truth in that domain, and insofar as a theory makes our reliability in a domain inexplicable, we must either reject that theory or give up the relevant beliefs. But there’s no consensus about what would count here as a satisfactory explanation of our reliability. It’s sometimes suggested that giving such an explanation would involve showing that our beliefs meet some modal condition, but realists have claimed that this sort of modal interpretation of the challenge deprives it of any force: since the facts in question are metaphysically necessary and so obtain in all possible worlds, it’s trivially easy, even given realism, to show that our beliefs have the relevant modal features. Here I show that this claim is mistaken—what motivates a modal interpretation of the challenge in the first place also motivates an understanding of the relevant features in terms of epistemic possibilities rather than metaphysical possibilities, and there are indeed epistemically possible worlds where the facts in question don’t obtain.
Sensitivity has sometimes been thought to be a highly epistemologically significant property, serving as a proxy for a kind of responsiveness to the facts that ensures that the truth of our beliefs isn’t just a lucky coincidence. But it's an imperfect proxy: there are various well-known cases in which sensitivity-based anti-luck conditions return the wrong verdicts. And as a result of these failures, contemporary theorists often dismiss such conditions out of hand. I show here, though, that a sensitivity-based understanding of epistemic luck can be developed that respects what was attractive about sensitivity-based approaches in the first place but that's immune to these failures.
When epistemologists talk about knowledge, the discussions traditionally include only a small class of other epistemic notions: belief, justification, probability, truth. In this paper, we propose that epistemologists should include an additional epistemic notion into the mix, namely the notion of assuming or taking for granted.
The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it's claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a reasonable treatment of the evolution of self-locating beliefs over time and show that calibrationism is compatible with any generalized Bayesian approach that respects this constraint. I then use this result to argue that remaining steadfast isn't the response to higher-order evidence that maximizes expected accuracy.
According to recent accounts of the imagination, mental mechanisms that can take input from both imagining and from believing will process imagination-based inputs (pretense representations) and isomorphic beliefs in much the same way. That is, such a mechanism should produce similar outputs whether its input is the belief that p or the pretense representation that p. Unfortunately, there seem to be clear counterexamples to this hypothesis, for in many cases, imagining that p and believing that p have quite different psychological consequences. This paper sets out some central problem cases and argues that the cases might be accommodated by adverting to the role of desires concerning real and imaginary situations.
Ordinarily, people take themselves to know a lot. I know where I was born, I know that I have two hands, I know that two plus two equals four, and I also think I know a lot of other stuff too. However, the project of trying to provide a philosophically satisfying account of knowledge, one that holds up against skeptical challenges, has proven surprisingly difficult. Either one aims for an account of justification (and knowledge) that is epistemologically demanding, in an effort to offer an account that satisfactorily addresses skepticism, or one aims for an account of justification (and knowledge) that makes sense of our ordinary knowledge claims. However, the history of contemporary epistemology tells us that you cannot have both: the former results in skepticism, the latter in an unsatisfying response to skepticism.

What we find, in the array of contemporary attempts to give accounts of knowledge and justification, are numerous views spread across the internalism/externalism spectrum that deal with the dilemma of the previous paragraph in different ways. Most of them are guided by the goal of accommodating our ordinary knowledge claims. This is especially true of externalist accounts, but many internalist accounts are guided by this same goal, though perhaps to a lesser degree. One kind of internalist view stands out for its insistence on providing philosophically satisfying accounts of knowledge and justification, even if doing so has skeptical implications. This is the traditional, old-fashioned, Cartesian-style internalism that was so prominent in the early 20th century and is now a minority position.

Unlike competing versions of epistemic internalism, the guiding principle of traditional, Cartesian-style internalism (what I will henceforth call ‘traditional internalism’) is not to accommodate our commonsense views about the rationality of our ordinary beliefs. Instead, traditional internalism emphasizes rationality’s demand for philosophical assurance, on the basis of evidence that can withstand the strongest skeptical challenges, that our ordinary beliefs (perceptual and otherwise) are true. According to the traditional internalist, the philosopher, qua philosopher, ought to begin the epistemological project from the inside, placing a premium on satisfying our philosophical curiosity.

Despite the relative unpopularity of traditional internalism, the view can be taken to be worthy of attention for a variety of reasons. First, traditional internalism has great historical importance. Significant portions of the history of contemporary epistemology, and to a lesser degree philosophy in the modern era (roughly, Descartes to Kant), are grounded in a number of the intuitions that drive traditional internalism. Second, and perhaps even more importantly, traditional internalism serves as the source of a wide variety of criticisms for other, more prominent, contemporary epistemological views. In a number of debates the views of the traditional internalist are used to play a kind of Devil’s advocate. Finally, some of the ideas that ground traditional internalism have had a bit of a resurgence of late. To put it simply, traditional internalism refuses to go away. In this introduction, I will do three things. First, I will situate traditional internalism among other competing versions of internalism by highlighting the ways in which traditional internalism differs from them (in particular, evidentialism and conservatism).
Second, I will explain more carefully some of the central tenets of traditional internalism and what motivates them. Third, I will highlight some of the difficulties that threaten traditional internalism, and I will explain how the contributions to this volume interact with those difficulties.

The goal of this volume is to test again the staying power of traditional internalism, to see if this once historically prominent view deserves another look, to see if traditional internalism is a legitimate contender providing useful criticisms of more prominent views, or if, instead, it is time for traditional internalism to be left by the wayside.
Truth by convention, once thought to be the foundation of a uniquely promising approach to explaining our access to the truth in nonempirical domains, is nowadays widely considered an absurdity. Its fall from grace has been due largely to the influence of an argument that can be sketched as follows: our linguistic conventions have the power to make it the case that a sentence expresses a particular proposition, but they can’t by themselves generate truth; whether a given proposition is true—and so whether the sentence that expresses it is true—is a matter of what the world is like, which means it isn’t a matter of convention alone. The consensus is that this argument is decisive against truth by convention. Strikingly, though, it has rarely been formulated with much precision. Here I provide a new rendering of the argument, one that reveals its structure and makes transparent just what assumptions it requires, and then I assess conventionalists’ prospects for resisting each of those assumptions. I conclude that the consensus is mistaken: contrary to what is almost universally thought, there remains a promising way forward for the conventionalist project. Along the way, I clarify conventionalists’ commitments by thinking about what truth by convention would need to be like in order for conventionalism to do the epistemological work it’s intended to do.
The field of emotion regulation has developed rapidly, and a number of emotion regulatory strategies have been identified. To date, empirical attention has focused on contrasting specific regulatio...
Experimental philosophy is a new interdisciplinary field that uses methods normally associated with psychology to investigate questions normally associated with philosophy. The present review focuses on research in experimental philosophy on four central questions. First, why is it that people's moral judgments appear to influence their intuitions about seemingly nonmoral questions? Second, do people think that moral questions have objective answers, or do they see morality as fundamentally relative? Third, do people believe in free will, and do they see free will as compatible with determinism? Fourth, how do people determine whether an entity is conscious?
Why can I not appropriately utter ‘It must be raining’ while standing outside in the rain, even though every world consistent with my knowledge is one in which it is raining? The common response to this problem is to hold that epistemic must, in addition to quantifying over epistemic possibilities, carries some additional evidential information concerning the source of one's evidence. I argue that this is a mistake: epistemic modals are mere quantifiers over epistemic possibilities. My central claim is that the seeming anomaly of the data above arises from a mistaken conception of what a possibility is. Instead of conceiving of possibilities as possible worlds, I argue that we should conceive of possibilities as answers to open questions.
We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as well as providing sandboxes or workspaces to help various stakeholders build practical wisdom in systems that are sufficiently realistic to aid transferring skills learned to real-world use. The latter are needed for the design of exercises and methods of evaluation within these workspaces, as well as ways of empirically assessing the transfer of wisdom from workspace to world. Systematic interaction between these three disciplines (and others) is the best approach to engineering wisdom for the machine age.
Martin Heidegger and Michel Foucault are two of the most important and influential thinkers of the twentieth century. Each has spawned volumes of secondary literature and sparked fierce, polarizing debates, particularly about the relationship between philosophy and politics. And yet, to date there exists almost no work that presents a systematic and comprehensive engagement of the two in relation to one another. _The World of Freedom_ addresses this lacuna. Neither apology nor polemic, the book demonstrates that it is not merely interesting but necessary to read Heidegger and Foucault alongside one another if we are to properly understand the shape of twentieth-century Continental thought. Through close, scholarly engagement with primary texts, Robert Nichols develops original and demanding insights into the relationship between fundamental and historical ontology, modes of objectification and subjectification, and an ethopoetic conception of freedom. In the process, his book also reveals the role that Heidegger's reception in France played in Foucault's intellectual development—the first major work to do so while taking full advantage of the recent publication of Foucault's last Collège de France lectures of the 1980s, which mark a return to classical Greek and Roman philosophy, and thus to familiar Heideggerian loci of concern.
In this paper we propose to argue for two claims. The first is that a sizeable group of epistemological projects – a group which includes much of what has been done in epistemology in the analytic tradition – would be seriously undermined if one or more of a cluster of empirical hypotheses about epistemic intuitions turns out to be true. The basis for this claim will be set out in Section 2. The second claim is that, while the jury is still out, there is now a substantial body of evidence suggesting that some of those empirical hypotheses are true. Much of this evidence derives from an ongoing series of experimental studies of epistemic intuitions that we have been conducting. A preliminary report on these studies will be presented in Section 3. In light of these studies, we think it is incumbent on those who pursue the epistemological projects in question to either explain why the truth of the hypotheses does not undermine their projects, or to say why, in light of the evidence we will present, they nonetheless assume that the hypotheses are false. In Section 4, which is devoted to Objections and Replies, we’ll consider some of the ways in which defenders of the projects we are criticizing might reply to our challenge. Our goal, in all of this, is not to offer a conclusive argument demonstrating that the epistemological projects we will be criticizing are untenable. Rather, our aim is to shift the burden of argument.
The present volume provides an introduction to the major themes of work in experimental philosophy, bringing together some of the most influential articles in ...
It is an old philosophical idea that if the future self is literally different from the current self, one should be less concerned with the death of the future self. This paper examines the relation between attitudes about death and the self among Hindus, Westerners, and three Buddhist populations. Compared with other groups, monastic Tibetans gave particularly strong denials of the continuity of self, across several measures. We predicted that the denial of self would be associated with a lower fear of death and greater generosity toward others. To our surprise, we found the opposite. Monastic Tibetan Buddhists showed significantly greater fear of death than any other group. The monastics were also less generous than any other group about the prospect of giving up a slightly longer life in order to extend the life of another.
"In 2003 the Getty Museum, which holds a collection of about 240 Weston prints, hosted a colloquium on the photographer. This volume in the In Focus series records remarks by the author, Brett Abbott, along with those of six other participants: William Clift, Amy Conger, David Featherstone, Weston Naef, David Travis, and Jennifer Watts. Context for their conversation is provided by the author's introduction, plate texts, and chronology. Approximately fifty of Weston's images demonstrate why his work continues to resonate (...) with a contemporary public and serves as a model for a host of photographers active today."--BOOK JACKET. (shrink)
Metasemantics comprises new work on the philosophical foundations of linguistic semantics, by a diverse group of established and emerging experts in the philosophy of language, metaphysics, and the theory of content. The science of semantics aspires to systematically specify the meanings of linguistic expressions in context. The paradigmatic metasemantic question is accordingly: what more basic or fundamental features of the world metaphysically determine these semantic facts? Efforts to answer this question inevitably raise others, including: where are the boundaries of semantics? what is the essence of the meaning relation? which framework should we use for semantic theorizing? and what are the intrinsic natures of semantic values? Metasemantic inquiry has long been recognized as a central part of the philosophy of language, but recent developments in metaphysics and semantics itself now allow us to approach these classic questions with an unprecedented degree of precision. The essays collected here provide promising new perspectives on old problems, pose questions that suggest novel research projects, and taken together, greatly sharpen our understanding of linguistic representation.
In 2003, the concept of precarity emerged as the central organizing platform for a series of social struggles that would spread across the space of Europe. Four years later, almost as suddenly as the precarity movement appeared, so it would enter into crisis. To understand precarity as a political concept it is necessary to go beyond economistic approaches that see social conditions as determined by the mode of production. Such a move requires us to see Fordism as exception and precarity as the norm. The political concept and practice of translation enables us to frame the precarity of creative labour in a broader historical and geographical perspective, shedding light on its contestation and relation to the concept of the common. Our interest is in the potential for novel forms of connection, subjectivization and political organization. Such processes of translation are themselves inherently precarious, transborder undertakings.
It is commonly held that the context with respect to which an indexical is interpreted is determined independently of the interpretation of the indexical. This view, which I call Context Realism, has explanatory significance: it is because the context is what it is that an indexical refers to what it does. In this paper, I provide an argument against Context Realism. I then develop an alternative that I call Context Constructivism, according to which indexicals are defined not in terms of features of utterance situations, but rather in terms of roles that objects could play.
While research has focused on why certain entrepreneurs elect to create innovative solutions to social problems, very little is known about why some social entrepreneurs choose to scale their solutions while others do not. Research on scaling has generally focused on organizational characteristics, often overlooking factors at the individual level that may affect scaling decisions. Drawing on the multidimensional construct of moral intensity, we propose a theoretical model of ethical decision making to explain why a social entrepreneur’s perception of moral intensity of the social problem, coupled with their personal desire for control, can significantly influence scaling decisions. Specifically, we propose that higher levels of perceived moral intensity will positively influence the likelihood of scaling through open as opposed to closed modes in order to achieve greater speed and scope of social impact. However, we also propose this effect will be negatively moderated by a social entrepreneur’s higher levels of desire for control. Our model has implications for research and practice at the interface of ethics and social entrepreneurship.
The literature contains evidence from some studies of asymmetric patterns of choice cycles in the direction consistent with regret theory, and evidence from other studies of asymmetries in the opposite direction. This article reports an experiment showing that both patterns occur within the same sample of respondents operating in the same experimental environment. We discuss the implications for modelling behaviour in such environments.
There are few scientists of the twentieth century whose life's work has created more excitement and controversy than that of physicist David Bohm. For the first time in a single volume, The Essential David Bohm offers a comprehensive overview of Bohm's original works from a non-technical perspective. Including three chapters of previously unpublished material, and a foreword by the Dalai Lama, each reading has been selected to highlight some aspect of the implicate order process, and to provide an introduction to one of the most provocative thinkers of our time.
Theories of reference have been central to analytic philosophy, and two views, the descriptivist view of reference and the causal-historical view of reference, have dominated the field. In this research tradition, theories of reference are assessed by consulting one’s intuitions about the reference of terms in hypothetical situations. However, recent work in cultural psychology (e.g., Nisbett et al. 2001) has shown systematic cognitive differences between East Asians and Westerners, and some work indicates that this extends to intuitions about philosophical cases (Weinberg et al. 2001). In light of these findings on cultural differences, two experiments were conducted which explored intuitions about reference in Westerners and East Asians. Both experiments indicate that, for certain central cases, Westerners are more likely than East Asians to report intuitions that are consistent with the causal-historical view. These results constitute prima facie evidence that semantic intuitions vary from culture to culture, and the paper argues that this fact raises questions about the nature of the philosophical enterprise of developing a theory of reference.
One recent topic of debate in Bayesian epistemology has been the question of whether imprecise credences can be rational. I argue that one account of imprecise credences, the orthodox treatment as defended by James M. Joyce, is untenable. Despite Joyce’s claims to the contrary, a puzzle introduced by Roger White shows that the orthodox account, when paired with Bas C. van Fraassen’s Reflection Principle, can lead to inconsistent beliefs. Proponents of imprecise credences, then, must either provide a compelling reason to reject Reflection or admit that the rational credences in White’s case are precise.
The precautionary principle has been widely discussed in the academic, legal, and policy arenas. This paper argues, however, that there is no single precautionary principle and we should stop referring to 'the' precautionary principle. Instead, we should talk about 'precaution' and 'precautionary approaches' more generally and identify and defend distinct precautionary principles of limited scope. Drawing on the vast literature on 'the' precautionary principle, this paper further argues that the challenges of decision making under conditions of uncertainty necessitate taking a precautionary approach to decision making that will enable us to understand what particular precautionary principles require of us on a case-by-case basis.
Solar radiation management is a form of geoengineering that involves the intentional manipulation of solar radiation with the aim of reducing global average temperature. This paper explores what precaution implies about the status of solar radiation management. It is argued that any form of solar radiation management that poses threats of catastrophe cannot constitute an appropriate precautionary measure against another threat of catastrophe, namely climate change. Research on solar radiation management is appropriate on a precautionary view only insofar as such research aims to identify whether any forms of solar radiation management could be implemented without creating new or exacerbating existing threats of catastrophe.
The everyday capacity to understand the mind, or 'mindreading', plays an enormous role in our ordinary lives. Shaun Nichols and Stephen Stich provide a detailed and integrated account of the intricate web of mental components underlying this fascinating and multifarious skill. The imagination, they argue, is essential to understanding others, and there are special cognitive mechanisms for understanding oneself. The account that emerges has broad implications for longstanding philosophical debates over the status of folk psychology. Mindreading is another trailblazing volume in the prestigious interdisciplinary Oxford Cognitive Science series.