With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore a corresponding opportunity cost: a potential good, lives worth living, is not being realized. Given some plausible assumptions, this cost is extremely large. However, the lesson for standard utilitarians is not that we ought to maximize the pace of technological development, but rather that we ought to maximize its safety, i.e. the probability that colonization will eventually occur. This goal has such high utility that standard utilitarians ought to focus all their efforts on it. Utilitarians of a ‘person-affecting’ stripe should accept a modified version of this conclusion. Some mixed ethical views, which combine utilitarian considerations with other criteria, will also be committed to a similar bottom line.
In this exchange, Peter Coghlan and Nick Trakakis discuss the problem of natural evil in the light of the recent Asian tsunami disaster. The exchange begins with an extract from a newspaper article written by Coghlan on the tsunami, followed by three rounds of replies and counter-replies, and ending with some final comments from Trakakis. While critical of any attempt to show that human life is good overall despite its natural evils, Coghlan argues that instances of natural evil, even horrific ones, can be justified as the unavoidable by-product of a natural system on which human life and culture depends. Trakakis, however, rejects this view, counselling instead a degree of skepticism about our ability to construct a plausible theodicy for horrific evil.
To what extent should we use technological advances to try to make better human beings? Leading philosophers debate the possibility of enhancing human cognition, mood, personality, and physical performance, and controlling aging. Would this take us beyond the bounds of human nature? These are questions that need to be answered now.
In chapters ranging from "The Beautiful, the Dainty, and the Dumpy" to "Skin-deep or In the Eye of the Beholder?" Nick Zangwill investigates the nature of beauty as we conceive it, and as it is in itself. The notion of beauty is currently attracting increased interest, particularly in philosophical aesthetics and in discussions of our experiences and judgments about art. In The Metaphysics of Beauty, Zangwill argues that it is essential to beauty that it depends on the ordinary features of things. He uses this principle to defend the notion of the aesthetic, to call for a version of aesthetic formalism, and to reconsider the reality of beauty. The Metaphysics of Beauty brings beauty to the center of intellectual consciousness in a manner informed by contemporary metaphysics and engages with beauty as an enduring object of human thought and experience.
Darwinian matters : life, force and change -- Biological difference -- The evolution of sex and race -- Nietzsche's Darwin -- History and the untimely -- The eternal return and the overman -- Bergsonian differences -- The philosophy of life -- Intuition and the virtual -- The future.
This book explores both the embodied nature of social life and the social nature of human bodily life. It provides an accessible review of the contemporary social science debates on the body, and develops a coherent new perspective. Nick Crossley critically reviews the literature on mind and body, and also on the body and society. He draws on theoretical insights from the work of Gilbert Ryle, Maurice Merleau-Ponty, George Herbert Mead and Pierre Bourdieu, and shows how the work of these writers overlaps in interesting and important ways which, when combined, provide the basis for a persuasive and robust account of human embodiment. The Social Body provides a timely review of the theoretical approaches to the sociology of the body. It offers new insights, and a coherent new perspective on the body. It will be valuable reading for students and academics in sociology, philosophy, anthropology, psychology, and cultural studies.
Our paradigms of aesthetic value condition the philosophical questions we pose and hope to answer about it. Theories of aesthetic value are typically individualistic, in the sense that the paradigms they are designed to capture, and the questions to which they are offered as answers, center the individual’s engagement with aesthetic value. Here I offer some considerations that suggest that such individualism is a mistake and sketch a communitarian way of posing and answering questions about the nature of aesthetic value.
This article responds to recent debates in critical algorithm studies about the significance of the term “algorithm.” Where some have suggested that critical scholars should align their use of the term with its common definition in professional computer science, I argue that we should instead approach algorithms as “multiples”—unstable objects that are enacted through the varied practices that people use to engage with them, including the practices of “outsider” researchers. This approach builds on the work of Laura Devendorf, Elizabeth Goodman, and Annemarie Mol. Different ways of enacting algorithms foreground certain issues while occluding others: computer scientists enact algorithms as conceptual objects indifferent to implementation details, while calls for accountability enact algorithms as closed boxes to be opened. I propose that critical researchers might seek to enact algorithms ethnographically, seeing them as heterogeneous and diffuse sociotechnical systems, rather than rigidly constrained and procedural formulas. To do so, I suggest thinking of algorithms not “in” culture, as the event occasioning this essay was titled, but “as” culture: part of broad patterns of meaning and practice that can be engaged with empirically. I offer a set of practical tactics for the ethnographic enactment of algorithmic systems, which do not depend on pinning down a singular “algorithm” or achieving “access,” but which rather work from the partial and mobile position of an outsider.
This article argues that there can be epistemic dilemmas: situations in which one faces conflicting epistemic requirements with the result that whatever one does, one is doomed to do wrong from the epistemic point of view. Accepting this view, I argue, may enable us to solve several epistemological puzzles.
Numerous approaches to a quantum theory of gravity posit fundamental ontologies that exclude spacetime, either partially or wholly. This situation raises deep questions about how such theories could relate to the empirical realm, since arguably only entities localized in spacetime can ever be observed. Are such entities even possible in a theory without fundamental spacetime? How might they be derived, formally speaking? Moreover, since by assumption the fundamental entities cannot be smaller than the derived and so cannot ‘compose’ them in any ordinary sense, would a formal derivation actually show the physical reality of localized entities? We address these questions via a survey of a range of theories of quantum gravity, and generally sketch how they may be answered positively.
A dizzying trip through the mind(s) of the provocative and influential thinker Nick Land. During the 1990s British philosopher Nick Land's unique work, variously described as “rabid nihilism,” “mad black deleuzianism,” and “cybergothic,” developed perhaps the only rigorous and culturally-engaged escape route out of the malaise of “continental philosophy” —a route that was implacably blocked by the academy. However, Land's work has continued to exert an influence, both through the British “speculative realist” philosophers who studied with him, and through the many cultural producers—writers, artists, musicians, filmmakers—who have been invigorated by his uncompromising and abrasive philosophical vision. Beginning with Land's early radical rereadings of Heidegger, Nietzsche, Kant and Bataille, the volume collects together the papers, talks and articles of the mid-90s—long the subject of rumour and vague legend (including some work which has never previously appeared in print)—in which Land developed his futuristic theory-fiction of cybercapitalism gone amok; and ends with his enigmatic later writings in which Ballardian fictions, poetics, cryptography, anthropology, grammatology and the occult are smeared into unrecognisable hybrids. Fanged Noumena gives a dizzying perspective on the entire trajectory of this provocative and influential thinker's work, and has introduced his unique voice to a new generation of readers.
Subjects of ectogenesis—human beings that are developing in artificial wombs (AWs)—share the same moral status as newborns. To demonstrate this, I defend two claims. First, subjects of partial ectogenesis—those that develop in utero for a time before being transferred to AWs—are newborns (in the full sense of the word). Second, subjects of complete ectogenesis—those who develop in AWs entirely—share the same moral status as newborns. To defend the first claim, I rely on Elizabeth Chloe Romanis’s distinctions between fetuses, newborns and subjects of ectogenesis. For Romanis, the subject of partial ectogenesis ‘is neither a fetus nor a baby’ but is, instead, a ‘new product of human reproduction’. In this essay, I begin by expanding upon Romanis’s argument that subjects of partial ectogenesis are not fetuses while arguing that those subjects are newborns. Next, I show that the distinction that Romanis draws between subjects of partial ectogenesis and newborns needs to be revised. The former is a kind of the latter. This leads us to an argument that shows why different moral statuses cannot be justifiably assigned to subjects of partial ectogenesis and subjects of complete ectogenesis, respectively. As subjects of partial ectogenesis share the same moral status as newborns, it follows that subjects of complete ectogenesis share the same moral status as newborns as well. I conclude by considering implications that this essay may have for the research and development of AW technology and conceptual links between a subject’s moral status and birth.
Articulate and perceptive, Intersubjectivity is a text that explains the notions of intersubjectivity as a central concern of philosophy, sociology, psychology, and politics. Going beyond this broad-ranging introduction and explication, author Nick Crossley provides a critical discussion of intersubjectivity as an interdisciplinary concept to shed light on our understanding of selfhood, communication, citizenship, power, and community. The volume traces the contributions of key thinkers engaged within the intersubjectivist tradition, including Husserl, Buber, Kojeve, Merleau-Ponty, Mead, Wittgenstein, Schutz, and Habermas. A clear, concise introduction to a range of difficult concepts and thinkers, Intersubjectivity demystifies this very interdisciplinary subject for advanced and graduate-level students of philosophy, sociology, social psychology, and social and political theory.
This is a chapter of the planned monograph "Out of Nowhere: The Emergence of Spacetime in Quantum Theories of Gravity", co-authored by Nick Huggett and Christian Wüthrich and under contract with Oxford University Press. (More information at www.beyondspacetime.net.) This chapter investigates the meaning and significance of string theoretic dualities, arguing they reveal a surprising physical indeterminateness to spacetime.
Apologies can be profoundly meaningful, yet many gestures of contrition - especially those in legal contexts - appear hollow and even deceptive. Discussing numerous examples from ancient and recent history, I Was Wrong argues that we suffer from considerable confusion about the moral meanings and social functions of these complex interactions. Rather than asking whether a speech act 'is or is not' an apology, Smith offers a highly nuanced theory of apologetic meaning. Smith leads us through a series of rich philosophical and interdisciplinary questions, explaining how apologies have evolved from a confluence of diverse cultural and religious practices that do not translate easily into secular discourse or gender stereotypes. After classifying several varieties of apologies between individuals, Smith turns to apologies from collectives. Although apologies from corporations, governments, and other groups can be quite meaningful in certain respects, we should be suspicious of those that supplant apologies from individual wrongdoers.
'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
This paper is about epistemic dilemmas, i.e., cases in which one is doomed to have a doxastic attitude that is rationally impermissible no matter what. My aim is to develop and defend a position according to which there can be genuine rational indeterminacy; that is, it can be indeterminate which principles of rationality one should satisfy and thus indeterminate which doxastic attitudes one is permitted or required to have. I am going to argue that this view can resolve epistemic dilemmas in a systematic way while also enjoying some important advantages over its rivals.
Cognitive enhancement takes many and diverse forms. Various methods of cognitive enhancement have implications for the near future. At the same time, these technologies raise a range of ethical issues. For example, they interact with notions of authenticity, the good life, and the role of medicine in our lives. Present and anticipated methods for cognitive enhancement also create challenges for public policy and regulation.
The terms "imagination'' and "imaginative'' can be readily applied to a profusion of attitudes, experiences, activities, and further phenomena. The heterogeneity of the things to which they're applied prompts the thoughts that the terms are polysemous, and that there is no single, coherent, fruitful conception of imagination to be had. Nonetheless, much recent work on imagination subscribes implicitly to a univocal way of thinking about imaginative phenomena: the imitation theory, according to which imaginative experiences imitate other experiences. This approach is infelicitous. It issues in unhelpful descriptions of imaginative activities, experiences, and attitudes, and frustrates theorizing about imagination's applications and intensional characteristics. A better way of thinking about imagination is the lens theory, according to which the imagination is a set of ways to focus, refine, clarify or concentrate the matter of other experiences. This approach offers better characterizations of imaginative phenomena, and promises brighter theoretical illumination of them.
_Anthropic Bias_ explores how to reason when you suspect that your evidence is biased by "observation selection effects"--that is, evidence that has been filtered by the precondition that there be some suitably positioned observer to "have" the evidence. This conundrum--sometimes alluded to as "the anthropic principle," "self-locating belief," or "indexical information"--turns out to be a surprisingly perplexing and intellectually stimulating challenge, one abounding with important implications for many areas in science and philosophy. There are the philosophical thought experiments and paradoxes: the Doomsday Argument; Sleeping Beauty; the Presumptuous Philosopher; Adam & Eve; the Absent-Minded Driver; the Shooting Room. And there are the applications in contemporary science: cosmology; evolutionary theory; the problem of time's arrow; quantum physics; game-theory problems with imperfect recall; even traffic analysis. _Anthropic Bias_ argues that the same principles are at work across all these domains. And it offers a synthesis: a mathematically explicit theory of observation selection effects that attempts to meet scientific needs while steering clear of philosophical paradox.
This paper investigates the significance of T-duality in string theory: the indistinguishability with respect to all observables, of models attributing radically different radii to space – larger than the observable universe, or far smaller than the Planck length, say. Two interpretational branch points are identified and discussed. First, whether duals are physically equivalent or not: by considering a duality of the familiar simple harmonic oscillator, I argue that they are. Unlike the oscillator, there are no measurements ‘outside’ string theory that could distinguish the duals. Second, whether duals agree or disagree on the radius of ‘target space’, the space in which strings evolve according to string theory. I argue for the latter position, because the alternative leaves it unknown what the radius is. Since duals are physically equivalent yet disagree on the radius of target space, it follows that the radius is indeterminate between them. Using an analysis of Brandenberger and Vafa (1989), I explain why – even so – space is observed to have a determinate, large radius. The conclusion is that observed, ‘phenomenal’ space is not target space, since a space cannot have both a determinate and indeterminate radius: instead phenomenal space must be a higher-level phenomenon, not fundamental.
Contemporary philosophical attitudes toward beauty are hard to reconcile with its importance in the history of philosophy. Philosophers used to allow it a starring role in their theories of autonomy, morality, or the good life. But today, if beauty is discussed at all, it is often explicitly denied any such importance. This is due, in part, to the thought that beauty is the object of “disinterested pleasure”. In this paper I clarify the notion of disinterest and develop two general strategies for resisting the emphasis on it, in the hopes of getting a clearer view of beauty’s significance. I present and discuss several literary depictions of the encounter with beauty that motivate both strategies. These depictions illustrate the ways in which aesthetic experience can be personally transformative. I argue that they present difficulties for disinterest theories and suggest we abandon the concept of disinterest to focus instead on the special kind of interest beauty fuels. I propose a closer look at the Platonic thought that beauty is the object of love.
Daniel Greco (forthcoming) argues that there cannot be epistemic dilemmas. I argue that he is wrong. I then look in detail at a would-be epistemic dilemma and argue that no non-dilemmic approach to it can be made to work. Along the way, there is discussion of octopuses, lobsters, and other ‘inscrutable cognizers’; the relationship between evaluative and prescriptive norms; a failed attempt to steal a Brueghel; epistemic and moral blame and residue; an unbearable guy who thinks he’s God’s gift to women; excuses; stupid games involving hats; radical permissivism; how I’ll never be able to afford to buy a house in Hampstead; and many other exciting topics.
Remarkable progress in the mathematics and computer science of probability has led to a revolution in the scope of probabilistic models. In particular, ‘sophisticated’ probabilistic methods apply to structured relational systems such as graphs and grammars, of immediate relevance to the cognitive sciences. This Special Issue outlines progress in this rapidly developing field, which provides a potentially unifying perspective across a wide range of domains and levels of explanation. Here, we introduce the historical and conceptual foundations of the approach, explore how the approach relates to studies of explicit probabilistic reasoning, and give a brief overview of the field as it stands today.
This paper compares Margaret Archer’s morphogenetic critical realism and Michel Foucault’s implicit discursive realism. It argues that there is a surprisingly high degree of correspondence between the two social ontologies. Specifically, both ontologies suggest that there are three largely autonomous domains in operation: cultural, structural, and agentive. While each of these domains has a level of independence, they are also partially constituted by the content and form of the others. This paper discusses the potential to integrate the two approaches in such a way as to overcome their respective shortcomings, namely the underdevelopment of culture in Archer’s ontology, and the underdevelopment of social agency in Foucault’s ontology.
Views on addiction are often polarised - either addiction is a matter of choice, or addicts simply can't help themselves. But perhaps addiction falls between the two? This book contains views from philosophy, neuroscience, psychiatry, psychology, and the law exploring this middle ground between free choice and no choice.
Conventional sacrificial moral dilemmas propose directly causing some harm to prevent greater harm. Theory suggests that accepting such actions (consistent with utilitarian philosophy) involves more reflective reasoning than rejecting such actions (consistent with deontological philosophy). However, past findings do not always replicate, confound different kinds of reflection, and employ conventional sacrificial dilemmas that treat utilitarian and deontological considerations as opposite. In two studies, we examined whether past findings would replicate when employing process dissociation to assess deontological and utilitarian inclinations independently. Findings suggested two categorically different impacts of reflection: measures of arithmetic reflection, such as the Cognitive Reflection Test, predicted only utilitarian, not deontological, response tendencies. However, measures of logical reflection, such as performance on logical syllogisms, positively predicted both utilitarian and deontological tendencies. These studies replicate some findings, clarify others, and reveal opportunity for additional nuance in dual process theorists’ claims about the link between reflection and dilemma judgments.
According to the additive view of sensory imagination, mental imagery often involves two elements. There is an image-like element, which gives the experiences qualitative phenomenal character akin to that of perception. There is also a non-image element, consisting of something like suppositions about the image's object. This accounts for extra-sensory features of imagined objects and situations: for example, it determines whether an image of a grey horse is an image of Desert Orchid, or of some other grey horse. The view promises to give a simple and intuitive explanation of some puzzling features of imagination, and, further, to illuminate imagination's relation to modal knowledge. I contend that the additive view does not fulfil these two promises. The explanation of how images come to be determinate is redundant: the content constituting the indeterminate mental images on which the view relies is sufficient to deliver determinate images too, so the extra resources offered by the view are not required.
This paper investigates the determinants of regulatory compliance in corporate organizations. Exploiting a unique enforcement and reporting framework for insider trading in Italy, we present three main findings. First, board governance, such as chief executive–chairman duality and the proportion of non-executive directors, does not increase the propensity of firms to comply with regulation. Second, family firms and firms with a high degree of separation of ownership from control are most likely to comply with regulation. Third, corporate ethos is more important in predicting regulatory compliance than explicit corporate governance structures.
Social networking sites (SNSs) have raised ethical issues about users’ information security and privacy. SNS users are concerned about their privacy and need to control the information they share and its use. This paper examines the security of SNSs by looking at the influence of users’ perceived control of information over their information-sharing behaviors. Employing an empirical study, this paper demonstrates the importance of perceived control in SNS users’ information-sharing behaviors. Specifically, perceived control has been found to be negatively related to perceived privacy risk and attitude toward information sharing, which in turn has an impact on their information-sharing behaviors. In addition, gender has been shown to be an important factor that moderates the influences of both perceived control and perceived privacy risk on SNS users’ attitudes toward information sharing. Theoretical and practical implications are discussed.
Positions on the ethics of human enhancement technologies can be (crudely) characterized as ranging from transhumanism to bioconservatism. Transhumanists believe that human enhancement technologies should be made widely available, that individuals should have broad discretion over which of these technologies to apply to themselves, and that parents should normally have the right to choose enhancements for their children-to-be. Bioconservatives (whose ranks include such diverse writers as Leon Kass, Francis Fukuyama, George Annas, Wesley Smith, Jeremy Rifkin, and Bill McKibben) are generally opposed to the use of technology to modify human nature. A central idea in bioconservatism is that human enhancement technologies will undermine our human dignity. To forestall a slide down the slippery slope towards an ultimately debased ‘posthuman’ state, bioconservatives often argue for broad bans on otherwise promising human enhancements. This paper distinguishes two common fears about the posthuman and argues for the importance of a concept of dignity that is inclusive enough to also apply to many possible posthuman beings. Recognizing the possibility of posthuman dignity undercuts an important objection against human enhancement and removes a distortive double standard from our field of moral vision.
What is it to know more? By what metric should the quantity of one's knowledge be measured? I start by examining and arguing against a very natural approach to the measure of knowledge, one on which how much is a matter of how many. I then turn to the quasi-spatial notion of counterfactual distance and show how a model that appeals to distance avoids the problems that plague appeals to cardinality. But such a model faces fatal problems of its own. Reflection on what the distance model gets right and where it goes wrong motivates a third approach, which appeals not to cardinality, nor to counterfactual distance, but to similarity. I close the paper by advocating this model and briefly discussing some of its significance for epistemic normativity. In particular, I argue that the 'trivial truths' objection to the view that truth is the goal of inquiry rests on an unstated, but false, assumption about the measure of knowledge, and suggest that a similarity model preserves truth as the aim of belief in an intuitively satisfying way.
Epistemologists often appeal to the idea that a normative theory must provide useful, usable guidance to argue for one normative epistemology over another. I argue that this is a mistake. Guidance considerations have no role to play in theory choice in epistemology. I show how this has implications for debates about the possibility and scope of epistemic dilemmas, the legitimacy of idealisation in Bayesian epistemology, uniqueness versus permissivism, sharp versus mushy credences, and internalism versus externalism.
The received view of implicit bias holds that it is associative and unreflective. Recently, the received view has been challenged. Some argue that implicit bias is not predicated on “any” associative process, but it is unreflective. These arguments rely, in part, on debiasing experiments. They proceed as follows. If implicit bias is associative and unreflective, then certain experimental manipulations cannot change implicitly biased behavior. However, these manipulations can change such behavior. So, implicit bias is not associative and unreflective. This paper finds philosophical and empirical problems with that argument. When the problems are solved, the conclusion is not quite right: implicit bias is not necessarily unreflective, but it seems to be associative. Further, the paper shows that even if legitimate non-associative interventions on implicit bias exist, then both the received view and its recent contender would be false. In their stead would be interactionism or minimalism about implicit bias.
How should we pursue aesthetic value, or incorporate it into our lives, if we want to? Is there an ideal of aesthetic life? Philosophers have proposed numerous answers to the analogous question in moral philosophy, but the aesthetic question has received relatively little attention. There is, in essence, a single view, which is that one should develop a sensibility that would give one sweeping access to aesthetic value. I challenge this view on two grounds. First, it threatens to undermine our "aesthetic love", or the meaningful attachments we form with aesthetic items, e.g., poems, paintings, songs, or items of design and dress. Second, it fails to accommodate the motivational character of our encounter with beauty, which can diminish our desire to pursue the wider world of aesthetic value. I conclude that whatever the aesthetic ideal is, it must reconcile our desire to broaden our access to aesthetic value with our desire to maintain and cultivate our meaningful aesthetic attachments. I motivate the alternative thought that having style is the aesthetic ideal.
With the increasing popularity of social media, a new ethics debate has arisen over marketing and technology in the current digital era. People are using online communities, but they have concerns about the credibility of information spread through word of mouth on these platforms. Social media is becoming increasingly influential in shaping individuals’ decision-making as more and better quality information about products is made available. In this research, a social word-of-mouth model is proposed and tested using a survey of a popular travel community. The model highlights the role of social media and social support in social networking sites, identifying increasing credibility and information usefulness resulting in an ethical environment to adopt word of mouth. The theoretical and practical implications of the study are both detailed.
A pervasive and influential argument appeals to trivial truths to demonstrate that the aim of inquiry is not the acquisition of truth. But the argument fails, for it neglects to distinguish between the complexity of the sentence used to express a truth and the complexity of the truth expressed by a sentence.
Analytic moral philosophers have generally failed to engage in any substantial way with the cultural history of morality. This is a shame, because a genealogy of morals can help us accomplish two important tasks. First, a genealogy can form the basis of an epistemological project, one that seeks to establish the epistemic status of our beliefs or values. Second, a genealogy can provide us with functional understanding, since a history of our beliefs, values or institutions can reveal some inherent dynamic or pattern which may be problematically obscured from our view. In this paper, I try to make good on these claims by offering a sketchy genealogy of emancipatory values, or values which call for the liberation of persons from systems of dominance and oppression. The real history of these values, I argue, is both epistemologically vindicatory and functionally enlightening.