Gauge symmetries provide one of the most puzzling examples of the applicability of mathematics in physics. The presented work focuses on the role of analogical reasoning in the gauge argument, motivated by Mark Steiner's claim that the application of the gauge principle relies on a Pythagorean analogy whose success undermines naturalist philosophy. In this paper, we present two different views concerning the analogy between gravity, electromagnetism, and nuclear interactions, each providing a different philosophical response to the problem of the applicability of mathematics in the natural sciences. The first is based on an account of Weyl's original work, which first gave rise to the gauge principle. Drawing on his later philosophical writings, we develop an idealist reading of the mathematical analogies in the gauge argument. On this view, mathematical analogies serve to ensure a conceptual harmony in our scientific account of nature. We further discuss the construction of Yang and Mills's gauge theory in light of this idealist reading. The second account presents a naturalist alternative, formulated in terms of John Norton's account of a material analogy, according to which the analogy succeeds in virtue of a physical similarity between the different interactions. This account is based on the methodological equivalence principle, a simple conceptual extension of the gauge principle that allows us to understand the relation between coordinate transformations and gravity as a manifestation of the same method. The physical similarity between the different cases is based on attributing the success of this method to the dependence of the coupling on relational physical quantities. We conclude by reflecting on the advantages and limits of the idealist, naturalist, and anthropocentric Pythagorean views, as three alternative ways to understand the puzzling relation between mathematics and physics.
This article suggests a fresh look at gauge symmetries, with the aim of drawing a clear line between the a priori theoretical considerations involved, and some methodological and empirical non-deductive aspects that are often overlooked. The gauge argument is primarily based on a general symmetry principle expressing the idea that a change of mathematical representation should not change the form of the dynamical law. In addition, the ampliative part of the argument is based on the introduction of new degrees of freedom into the theory according to a methodological principle that is formulated here in terms of correspondence between passive and active transformations. To demonstrate how the two kinds of considerations work together in a concrete context, I begin by considering spatial symmetries in mechanics. I suggest understanding Mach's principle as a similar combination of theoretical, methodological and empirical considerations, and demonstrate the claim with a simple toy model. I then examine gauge symmetries as a manifestation of the two principles in a quantum context. I further show that in all of these cases the relational nature of physically significant quantities can explain the relevance of the symmetry principle and the way the methodology is applied. In the quantum context, the relevant relational variables are quantum phases.
Protective measurements illustrate how Yakir Aharonov's fundamental insights into quantum theory yield new experimental paradigms that allow us to test quantum mechanics in ways that were not possible before. As for quantum theory itself, protective measurements demonstrate that a quantum state describes a single system, not only an ensemble of systems, and reveal a rich ontology in the quantum state of a single system. We discuss in what sense protective measurements anticipate the theorem of Pusey, Barrett, and Rudolph (PBR), stating that, if quantum predictions are correct, then two distinct quantum states cannot represent the same physical reality.
This paper formulates generalized versions of the general principle of relativity and of the principle of equivalence that can be applied to general abstract spaces. It is shown that when the principles are applied to the Hilbert space of a quantum particle, its law of coupling to electromagnetic fields is obtained. It is suggested that the Aharonov-Bohm effect be understood in light of these principles, and the implications for some related foundational controversies are discussed.
I address the recent debate between Meehan and Vaidman concerning the claim made by the former for a new problem for quantum mechanics. I argue that while Meehan's incompatibility claim does hold in the situation he presents, it does not genuinely involve considerations that can limit quantum state preparation, nor does it introduce new constraints on possible interpretations of quantum theory.
The paper portrays the influence of major philosophical ideas on the 1935 debates on quantum theory that reached their climax in the paper by Einstein, Podolsky and Rosen, and describes the relevance of these ideas to the vast impact of the paper. I claim that the focus on realism in many common descriptions of the debate misses important aspects both of Einstein's and Bohr's thinking. I suggest an alternative understanding of Einstein's criticism of quantum mechanics as a manifestation of the same methodological principles that served him in the construction of the special and the general theories of relativity. These principles address, in a very specific way, the relation of the theoretical mathematical representations to the represented physical systems. These ideas, I claim, played a key role in the influence of the paper on later works that changed our understanding of quantum theory despite the rejection of EPR's central conclusion.
Gauge symmetries play a central role, both in the mathematical foundations and in the conceptual construction of modern (particle) physics theories. However, it is yet unclear whether they form a necessary component of theories, or whether they can be eliminated. It is also unclear whether they are merely an auxiliary tool to simplify (and possibly localize) calculations or whether they contain independent information. Therefore, their status, both in physics and philosophy of physics, remains to be fully clarified. In this overview we review the current state of affairs on both the philosophy and the physics side. In particular, we focus on the circumstances in which the restriction of gauge theories to gauge-invariant information on an observable level is warranted, using the Brout-Englert-Higgs theory as an example of particular current importance. Finally, we determine a set of yet to be answered questions to clarify the status of gauge symmetries.
Determinism is a spectre that has haunted our scientifically-oriented culture from the beginning. I happen to think that it is literally a ‘spectre’, a trick of the vision, an appearance with an internal cause only, and that it is no more than the ghost of our own conceptual determinations projected outward into a world in which it has no place and no proper being. From one point of view it is no more than an alienated fantasy involving a number of incoherent assumptions. Of these, one of the most important, and one of the most deeply eroded by much contemporary work, is the assumption that science and scientific understanding constitute a potentially completable system. From another point of view, however, the deterministic picture seems an inevitable product of scientific activity.
There are two main claims that Bradley makes concerning negative judgment in the Principles of Logic: (1) negative judgment ‘stands at a different level of reflection’ from affirmative judgment; (2) negative judgment ‘presupposes a positive ground’. I will consider what Bradley means by these claims, and draw comparisons with Wittgenstein's views on negation as they developed between the Tractatus and the Philosophical Remarks.
The French philosopher Alain Guy (La Rochelle, 1918 - Narbonne, 1998) devoted his entire life to the study of Spanish and Hispano-American philosophy, making it known not only abroad but also in our own country.
In the early years of this century the debate as to the nature of judgment was a central issue dividing British philosophers. What a philosopher said about judgment was not independent of what he said about perception, the distinction between the a priori and empirical, the distinction between external and internal relations, the nature of inference, truth, universals, language, the reality of the self and so on.
Evolutionary debunking arguments are arguments that appeal to the evolutionary origins of evaluative beliefs to undermine their justification. This paper aims to clarify the premises and presuppositions of EDAs—a form of argument that is increasingly put to use in normative ethics. I argue that such arguments face serious obstacles. It is often overlooked, for example, that they presuppose the truth of metaethical objectivism. More importantly, even if objectivism is assumed, the use of EDAs in normative ethics is incompatible with a parallel and more sweeping global evolutionary debunking argument that has been discussed in recent metaethics. After examining several ways of responding to this global debunking argument, I end by arguing that even if we could resist it, this would still not rehabilitate the current targeted use of EDAs in normative ethics given that, if EDAs work at all, they will in any case lead to a truly radical revision of our evaluative outlook.
Well-being occupies a central role in ethics and political philosophy, including in major theories such as utilitarianism. It also extends far beyond philosophy: recent studies into the science and psychology of well-being have propelled the topic to centre stage, and governments spend millions on promoting it. We are encouraged to adopt modes of thinking and behaviour that support individual well-being or 'wellness'. What is well-being? Which theories of well-being are most plausible? In this rigorous and comprehensive introduction to the topic, Guy Fletcher unpacks and assesses these questions and many more, including: Are pleasure and pain the only things that affect well-being? Is desire-fulfilment the only thing that makes our lives go well? Can something be good for someone who does not desire it? Is well-being fundamentally connected to a distinctive human nature? Is happiness all that makes our lives go well? Is death necessarily bad for us? How is the well-being of a whole life related to well-being at particular times? Also included is a glossary of key terms, and annotated further reading and study and comprehension questions follow each chapter, making _The Philosophy of Well-Being_ essential reading for students in ethics and political philosophy, and also suitable for those in related disciplines such as psychology, politics and sociology.
Philosophers have long theorized about what makes people's lives go well, and why, and the extent to which morality and self-interest can be reconciled. However, we have spent little time on meta-prudential questions, questions about prudential discourse—thought and talk about what is good and bad for us; what contributes to well-being; and what we have prudential reason, or prudentially ought, to do. This situation is surprising given that prudence is, prima facie, a normative form of discourse and cries out for further investigation of what it is like and whether it has problematic commitments. It also marks a stark contrast from moral discourse, about which there has been extensive theorizing, in meta-ethics.

Dear Prudence: The Nature and Normativity of Prudential Discourse has three broad aims. Firstly, Guy Fletcher explores the nature of prudential discourse. Secondly, he argues that prudential discourse is normative and authoritative, like moral discourse. Thirdly, Fletcher aims to show that prudential discourse is worthy of further, explicit, attention both due to its intrinsic interest but also for the light it sheds on the meta-normative more broadly.
So-called objective-list theories of well-being (prudential value, welfare) are under-represented in discussions of well-being. I do four things in this article to redress this. First, I develop a new taxonomy of theories of well-being, one that divides theories in a more subtle and illuminating way. Second, I use this taxonomy to undermine some misconceptions that have made people reluctant to hold objective-list theories. Third, I provide a new objective-list theory and show that it captures a powerful motivation for the main competitor theory of well-being (the desire-fulfilment theory). Fourth, I try to defuse the worry that objective-list theories are problematically arbitrary and show how the theory can and should be developed.
Neuroimaging studies on moral decision-making have thus far largely focused on differences between moral judgments with opposing utilitarian (well-being maximizing) and deontological (duty-based) content. However, these studies have investigated moral dilemmas involving extreme situations, and did not control for two distinct dimensions of moral judgment: whether or not it is intuitive (immediately compelling to most people) and whether it is utilitarian or deontological in content. By contrasting dilemmas where utilitarian judgments are counterintuitive with dilemmas in which they are intuitive, we were able to use functional magnetic resonance imaging to identify the neural correlates of intuitive and counterintuitive judgments across a range of moral situations. Irrespective of content (utilitarian/deontological), counterintuitive moral judgments were associated with greater difficulty and with activation in the rostral anterior cingulate cortex, suggesting that such judgments may involve emotional conflict; intuitive judgments were linked to activation in the visual and premotor cortex. In addition, we obtained evidence that neural differences in moral judgment in such dilemmas are largely due to whether they are intuitive and not, as previously assumed, to differences between utilitarian and deontological judgments. Our findings therefore do not support theories that have generally associated utilitarian and deontological judgments with distinct neural systems.
Recent research has relied on trolley-type sacrificial moral dilemmas to study utilitarian versus nonutilitarian modes of moral decision-making. This research has generated important insights into people’s attitudes toward instrumental harm—that is, the sacrifice of an individual to save a greater number. But this approach also has serious limitations. Most notably, it ignores the positive, altruistic core of utilitarianism, which is characterized by impartial concern for the well-being of everyone, whether near or far. Here, we develop, refine, and validate a new scale—the Oxford Utilitarianism Scale—to dissociate individual differences in the ‘negative’ (permissive attitude toward instrumental harm) and ‘positive’ (impartial concern for the greater good) dimensions of utilitarian thinking as manifested in the general population. We show that these are two independent dimensions of proto-utilitarian tendencies in the lay population, each exhibiting a distinct psychological profile. Empathic concern, identification with the whole of humanity, and concern for future generations were positively associated with impartial beneficence but negatively associated with instrumental harm; and although instrumental harm was associated with subclinical psychopathy, impartial beneficence was associated with higher religiosity. Importantly, although these two dimensions were independent in the lay population, they were closely associated in a sample of moral philosophers. Acknowledging this dissociation between the instrumental harm and impartial beneficence components of utilitarian thinking in ordinary people can clarify existing debates about the nature of moral psychology and its relation to moral philosophy as well as generate fruitful avenues for further research.
In 1907, Alfred Stieglitz took what was to become one of his signature photographs, The Steerage. Stieglitz stood at the rear of the ocean liner Kaiser Wilhelm II and photographed the decks, first-class passengers above and steerage passengers below, carefully exposing the film to their reflected light. Later, in the darkroom, Stieglitz developed this film and made a number of prints from the resulting negative. The photograph is a familiar one, an enduring piece of social commentary, but what exactly is The Steerage which Stieglitz has given us? It is clearer what The Steerage is not. It is distinct from each of its prints and from its negative. These may be dusty or torn without The Steerage being so, and any one of these could be destroyed without thereby destroying The Steerage itself. Nor is The Steerage the set of its prints. The set could not have had different members, while The Steerage could have had more, fewer, or different prints. Similar reasoning rules out the mereological sum of parts of its actual prints, for The Steerage’s prints might not have comprised just these parts. We are left with a puzzle: what sort of thing is a photograph? This puzzle is not unique to photography. Similar reasoning generates an analogous puzzle for any repeatable work of art. Novels, poems, plays, symphonies, songs, and the rest share an ontological predicament and create a general puzzle concerning the ontological status of repeatable works of art. It is widely held that the puzzle has an equally general solution, one which I will argue fails for systematic reasons. Although my target here is the supposed solution to the general problem, photography will remain the central case under scrutiny. I offer it as a model for our thinking about the wider class in order to reap the benefits of thinking in terms of concrete cases. Although this risks a trade-off with the generality of my conclusions—there are important differences of detail between the cases—I hope it is clear that the considerations I appeal to in photography are not idiosyncratic but shared by the wider class.
This chapter is divided into three parts. First, I outline what makes something an objective-list theory of well-being. I then go on to look at the motivations for holding such a view before turning to objections to these theories of well-being.
The concept of well-being is one of the oldest and most important topics in philosophy and ethics, going back to ancient Greek philosophy and Aristotle. Following the boom in happiness studies in the last few years it has moved to centre stage, grabbing media headlines and the attention of scientists, psychologists and economists. Yet little is actually known about well-being and it is an idea often poorly articulated. The Routledge Handbook of Philosophy of Well-Being provides a comprehensive, outstanding guide and reference source to the key topics and debates in this exciting subject. Comprising over forty chapters by a team of international contributors, the Handbook is divided into six parts: well-being in the history of philosophy; current theories of well-being, including hedonism and perfectionism; examples of well-being and its opposites, including friendship, virtue, pain and death; theoretical issues, such as well-being and value, harm, identity, and well-being and children; well-being in moral and political philosophy; and well-being and related subjects, including law, economics, and medicine. Essential reading for students and researchers in ethics and political philosophy, it will also be an invaluable resource for those in related disciplines such as psychology, politics and sociology.
It is commonly claimed that reliance upon moral testimony is problematic in a way not common to reliance upon non-moral testimony. This chapter provides a new explanation of what the problem consists in—one that enjoys advantages over the most widely accepted explanation in the extant literature. The main theses of the chapter are as follows: that many forms of normative deference beyond the moral are problematic, that there is a common explanation of the problem with all of these forms of deference—an explanation that is based on the connection between the relevant judgments and desire-like attitudes, and that this explanation is compatible with moral realism.
The universe that surrounds us is vast, and we are so very small. When we reflect on the vastness of the universe, our humdrum cosmic location, and the inevitable future demise of humanity, our lives can seem utterly insignificant. Many philosophers assume that such worries about our significance reflect a banal metaethical confusion. They dismiss the very idea of cosmic significance. This, I argue, is a mistake. Worries about cosmic insignificance do not express metaethical worries about objectivity or nihilism, and we can make good sense of the idea of cosmic significance and its absence. It is also possible to explain why the vastness of the universe can make us feel insignificant. This impression does turn out to be mistaken, but not for the reasons typically assumed. In fact, we might be of immense cosmic significance—though we cannot, at this point, tell whether this is the case.
Whether God exists is a metaphysical question. But there is also a neglected evaluative question about God’s existence: Should we want God to exist? Very many, including many atheists and agnostics, appear to think we should. Theists claim that if God didn’t exist things would be far worse, and many atheists agree; they regret God’s inexistence. Some remarks by Thomas Nagel suggest an opposing view: that we should want God not to exist. I call this view anti-theism. I explain how such a view can be coherent, and why it might be correct. Anti-theism must be distinguished from the argument from evil or the denial of God’s goodness; it is a claim about the goodness of God’s existence. Anti-theists must claim that it’s a logical consequence of God’s existence that things are worse in certain respects. The problem is that God’s existence would also make things better in many ways. Given that God’s existence is likely to be impersonally better overall, anti-theists face a challenge similar to that facing nonconsequentialists. I explore two ways of meeting this challenge.
A growing body of evidence suggests that cognition is embodied and grounded. Abstract concepts, though, remain a significant theoretical challenge. A number of researchers have proposed that language makes an important contribution to our capacity to acquire and employ concepts, particularly abstract ones. In this essay, I critically examine this suggestion and ultimately defend a version of it. I argue that a successful account of how language augments cognition should emphasize its symbolic properties and incorporate a view of embodiment that recognizes the flexible, multimodal and task-related nature of action, emotion and perception systems. On this view, language is an ontogenetically disruptive cognitive technology that expands our conceptual reach.
This paper is about first‐person thoughts—thoughts about oneself that are expressible through uses of first‐person pronouns. It is widely held that first‐person thoughts cannot be shared. My aim is to postpone rejection of the more natural view that such thoughts about oneself can be shared. I sketch an account on which such thoughts can be shared and indicate some ways in which deciding the fate of the account will depend upon further work.
In this article, we present a dialogical approach to empirical ethics, based upon hermeneutic ethics and responsive evaluation. Hermeneutic ethics regards experience as the concrete source of moral wisdom. In order to gain a good understanding of moral issues, concrete detailed experiences and perspectives need to be exchanged. Within hermeneutic ethics dialogue is seen as a vehicle for moral learning and developing normative conclusions. Dialogue stands for a specific view on moral epistemology and methodological criteria for moral inquiry. Responsive evaluation involves a structured way of setting up dialogical learning processes, by eliciting stories of participants, exchanging experiences in (homogeneous and heterogeneous) groups and drawing normative conclusions for practice. By combining these traditions we develop both a theoretical and a practical approach to empirical ethics, in which ethical issues are addressed and shaped together with stakeholders in practice. Stakeholders' experiences are not only used as a source for reflection by the ethicist; stakeholders are involved in the process of reflection and analysis, which takes place in a dialogue between participants in practice, facilitated by the ethicist. This dialogical approach to empirical ethics may give rise to questions such as: What contribution does the ethicist make? What role does ethical theory play? What is the relationship between empirical research and ethical theory in the dialogical process? In this article, these questions will be addressed by reflecting upon a project in empirical ethics that was set up in a dialogical way. The aim of this project was to develop and implement normative guidelines with and within practice, in order to improve the practice concerning coercion and compulsion in psychiatry.
Philosophers and social scientists will welcome this highly original discussion of Max Weber's analysis of the objectivity of social science. Guy Oakes traces the vital connection between Weber's methodology and the work of philosopher Heinrich Rickert, reconstructing Rickert's notoriously difficult concepts in order to isolate the important, and until now poorly understood, roots of problems in Weber's own work. Guy Oakes teaches social philosophy at Monmouth College and sociology at the New School for Social Research.
Neuroscience and psychology have recently turned their attention to the study of the subpersonal underpinnings of moral judgment. In this article we critically examine an influential strand of research originating in Greene's neuroimaging studies of ‘utilitarian’ and ‘non-utilitarian’ moral judgement. We argue that given that the explananda of this research are specific personal-level states—moral judgments with certain propositional contents—its methodology has to be sensitive to criteria for ascribing states with such contents to subjects. We argue that current research has often failed to meet this constraint by failing to correctly ‘fix’ key aspects of moral judgment, a criticism we support with detailed examples from the scientific literature.
The possibility that nothing really matters can cause much anxiety, but what would it mean for that to be true? Since it couldn’t be bad that nothing matters, fearing nihilism makes little sense. However, the consequences of belief in nihilism will be far more dramatic than often thought. Many metaethicists assume that even if nothing matters, we should, and would, go on more or less as before. But if nihilism is true in an unqualified way, it can’t be the case that we should go on as before. And given some plausible assumptions about our psychology, it’s also unlikely that we would go on as before: belief in nihilism will lead to loss of evaluative belief, and that will lead to loss or deflation of our corresponding subjective concerns. Now if nothing matters, then this consequence also wouldn’t matter. But this consequence will be extremely harmful if we believe in nihilism but things do matter, an asymmetry that gives us, in Pascalian fashion, pragmatic reasons not to believe in nihilism, and reasons not to try to find out whether it is really true.
While all agree that score compliance in performance is valuable, the source of this value is unclear. Questions about what authenticity requires crowd out questions about our reasons to be compliant in the first place, perhaps because they seem trivial or uninteresting. I argue that such reasons cannot be understood as ordinary aesthetic, instrumental, epistemic, or moral reasons. Instead, we treat considerations of score compliance as having a kind of final value, one which requires further explanation. Taking as a model the Humean account of fidelity as an artificial virtue, I sketch a practice-theoretic account of the nature and source of such reasons, one on which we can say that they are, after all, aesthetic, but only indirectly so.
This article draws attention to several common mistakes in thinking about biomedical enhancement, mistakes that are made even by some supporters of enhancement. We illustrate these mistakes by examining objections that John Harris has recently raised against the use of pharmacological interventions to directly modulate moral decision-making. We then apply these lessons to other influential figures in the debate about enhancement. One upshot of our argument is that many considerations presented as powerful objections to enhancement are really strong considerations in favour of biomedical enhancement, just in a different direction. Another upshot is that it is unfortunate that much of the current debate focuses on interventions that will radically transform normal human capacities. Such interventions are unlikely to be available in the near future, and may not even be feasible. But our argument shows that the enhancement project can still have a radical impact on human life even if biomedical enhancement operates entirely within the normal human range.
Recent evidence from cognitive neuroscience suggests that certain cognitive processes employ perceptual representations. Inspired by this evidence, a few researchers have proposed that cognition is inherently perceptual. They have developed an innovative theoretical approach that rests on the notion of perceptual simulation and marshaled several general arguments supporting the centrality of perceptual representations to concepts. In this article, I identify a number of weaknesses in these arguments and defend a multiple semantic code approach that posits both perceptual and non-perceptual representations.
According to Joshua Greene’s influential dual process model of moral judgment, different modes of processing are associated with distinct moral outputs: automatic processing with deontological judgment, and controlled processing with utilitarian judgment. This paper aims to clarify and assess Greene’s model. I argue that the proposed tie between process and content is based on a misinterpretation of the evidence, and that the supposed evidence for controlled processing in utilitarian judgment is actually likely to reflect generic deliberation which, ironically, is incompatible with a utilitarian outlook. This alternative proposal is further supported by the results of a recent neuroimaging study we have done.
This article traces a growing interest among epistemologists in the intellectual or epistemic virtues. These are cognitive dispositions exercised in the formation of beliefs. Attempts to give intellectual virtues a central normative and/or explanatory role in epistemology occur together with renewed interest in the ethics/epistemology analogy, and in the role of intellectual virtue in Aristotle's epistemology. The central distinction drawn here is between two opposed forms of virtue epistemology, virtue reliabilism and virtue responsibilism. The article develops the shared and distinctive claims made by contemporary proponents of each form, in their respective treatments of knowledge and justification.
Is linguistic understanding a form of knowledge? I clarify the question and then consider two natural forms a positive answer might take. I argue that, although some recent arguments fail to decide the issue, neither positive answer should be accepted. The aim is not to foreclose on the view that linguistic understanding is a form of knowledge, but to develop desiderata for a satisfactory successor to the two natural views rejected here.
Philosophers have long theorized about which things make people’s lives go well, and why, and the extent to which morality and self-interest can be reconciled. Yet little time has been spent on meta-prudential questions, questions about prudential discourse. This is surprising given that prudence is, prima facie, a normative form of discourse and, as such, cries out for further investigation. Chapter 4 takes up two major meta-prudential questions. It first examines whether there is a set of prudential reasons, generated by evaluative prudential properties, and defends the view that evaluative well-being facts generate agent-relative reasons for the relevant agent. It also investigates whether prudential discourse is normative. It is proposed that prudential discourse is normative by arguing that prudential judgements are normative judgements. The case for this is presented by analogy with moral discourse by showing that the features of moral judgements that metaethicists appeal to when articulating, explaining, and justifying the claim that moral judgements are normative are also possessed by prudential judgements. Various objections to the analogy are also considered.
The Theory of Planned Behavior predicts that a combination of attitudes, perceived norms, and perceived behavioral control predict intentions, and that intentions ultimately predict behavior. Previous studies have found that the TPB can predict students’ engagement in plagiarism. Furthermore, the General Theory of Crime suggests that self-control is particularly important in predicting engagement in unethical behavior such as plagiarism. In Study 1, we incorporated self-control in a TPB model and tested whether norms, attitudes, and self-control predicted intention to plagiarize and plagiarism behavior. The best statistical fit for the path-analytic model was achieved when a direct path from self-control to plagiarism engagement was specified. In Study 2, we added a measure of perceived behavioral control and split the measurement of norms into descriptive and injunctive components. This study found that both self-control and perceived behavioral control additively contributed to the prediction of plagiarism, and the path-analytic model achieved its best fit when direct paths from perceived norms to plagiarism behavior were specified. These studies suggest that setting strong anti-plagiarism norms, such as by the use of honor codes, and seeking to enhance students’ self-control may reduce engagement in plagiarism.
A great deal of research has focused on the question of whether or not concepts are embodied as a rule. Supporters of embodiment have pointed to studies that implicate affective and sensorimotor systems in cognitive tasks, while critics of embodiment have offered nonembodied explanations of these results and pointed to studies that implicate amodal systems. Abstract concepts have tended to be viewed as an important test case in this polemical debate. This essay argues that we need to move beyond a pretheoretical notion of abstraction. Against the background of current research and theory, abstract concepts do not pose a single, unified problem for embodied cognition but, instead, three distinct problems: the problem of generalization, the problem of flexibility, and the problem of disembodiment. Identifying these problems provides a conceptual framework for critically evaluating, and perhaps improving upon, recent theoretical proposals.
The moral error theorist claims that moral discourse is irredeemably in error because it is committed to the existence of properties that do not exist. A common response has been to postulate ‘companions in guilt’—forms of discourse that seem safe from error despite sharing the putatively problematic features of moral discourse. The most developed instance of this pairs moral discourse with epistemic discourse. In this paper, I present a new, prudential, companions-in-guilt argument and argue for its superiority over the epistemic alternative.
Neuroimaging studies of brain-damaged patients diagnosed as in the vegetative state suggest that the patients might be conscious. This might seem to raise no new ethical questions given that in related disputes both sides agree that evidence for consciousness gives strong reason to preserve life. We question this assumption. We clarify the widely held but obscure principle that consciousness is morally significant. It is hard to apply this principle to difficult cases given that philosophers of mind distinguish between a range of notions of consciousness and that it is unclear which of these is assumed by the principle. We suggest that the morally relevant notion is that of phenomenal consciousness and then use our analysis to interpret cases of brain damage. We argue that enjoyment of consciousness might actually give stronger moral reasons not to preserve a patient's life and, indeed, that these might be stronger when patients retain significant cognitive function.
The structure of this paper is as follows. I begin §1 by dealing with preliminary issues such as the different relations expressed by the “good for” locution. I then (§2) outline the Locative Analysis of good for and explain its main elements before moving on to (§3) outlining and discussing the positive features of the view. In the subsequent sections I show how the Locative Analysis can respond to objections from, or inspired by, Sumner (§4-5), Regan (§6), and Schroeder and Feldman (§7). I then (§8) reply to an imagined objector who claims that the Locative Analysis generates implausible results with respect to punishment, virtue and agent-centered duties.
Many believe that because we’re so small, we must be utterly insignificant on the cosmic scale. But whether this is so depends on what it takes to be important. On one view, what matters for importance is the difference to value that something makes. On this view, what determines our cosmic importance isn’t our size, but what else of value is out there. But a rival view also seems plausible: that importance requires sufficient causal impact on the relevant scale; since we have no such impact on the grand scale, that would entail our cosmic insignificance. I argue that despite appearances, causal impact is neither necessary nor sufficient for importance. All that matters is impact on value. Since parts can have non-causal impact on the value of the wholes that contain them, this means that we might have great impact on the grandest scale without ever leaving our little planet.