This book presents an exploration of the idea of the common or social good, extended so that alternatives with different populations can be ranked. The approach is, in the main, welfarist, basing rankings on the well-being, broadly conceived, of those who are alive. The axiomatic method is employed, and topics investigated include: the measurement of individual well-being, social attitudes toward inequality of well-being, the main classes of population principles, principles that provide incomplete rankings, principles that rank uncertain alternatives, best choices from feasible sets, and applications. The chapters are divided, with mathematical arguments confined to the second part. The first part is intended to make the arguments accessible to a more general readership. Although the book can be read as a defense of the critical-level generalized-utilitarian class of principles, comprehensive examinations of other classes are included.
Bentham's dictum, ‘everybody to count for one, nobody for more than one’, is frequently noted but seldom discussed by commentators. Perhaps it is not thought contentious or exciting because interpreted as merely reminding the utilitarian legislator to make certain that each person's interests are included, that no one is missed, in working the felicific calculus. Since no interests are secure against the maximizing directive of the utility principle, which allows them to be overridden or sacrificed, the dictum is not usually taken to be asserting fundamental rights that afford individuals normative protection against the actions of others or against legislative policies deemed socially expedient. Such non-conventional moral rights seem denied a place in a utilitarian theory so long as the maximization of aggregate happiness remains the ultimate standard and moral goal.
The book is an extended study of the problem of consciousness. After setting up the problem, I argue that reductive explanation of consciousness is impossible, and that if one takes consciousness seriously, one has to go beyond a strict materialist framework. In the second half of the book, I move toward a positive theory of consciousness with fundamental laws linking the physical and the experiential in a systematic way. Finally, I use the ideas and arguments developed earlier to defend a form of strong artificial intelligence and to analyze some problems in the foundations of quantum mechanics.
Inspired by Rudolf Carnap's Der Logische Aufbau Der Welt, David J. Chalmers argues that the world can be constructed from a few basic elements. He develops a scrutability thesis saying that all truths about the world can be derived from basic truths and ideal reasoning. This thesis leads to many philosophical consequences: a broadly Fregean approach to meaning, an internalist approach to the contents of thought, and a reply to W. V. Quine's arguments against the analytic and the a priori. Chalmers also uses scrutability to analyze the unity of science, to defend a conceptual approach to metaphysics, and to mount a structuralist response to skepticism. Based on the 2010 John Locke lectures, Constructing the World opens up debate on central philosophical issues involving language, consciousness, knowledge, and reality. This major work by a leading philosopher will appeal to philosophers in all areas. This entry contains uncorrected proofs of front matter, chapter 1, and first excursus.
A collection of 37 essays surveying the state of the art on metaphysical ground. Essay authors are: Fatema Amijee, Ricki Bliss, Amanda Bryant, Margaret Cameron, Phil Corkum, Fabrice Correia, Louis deRosset, Scott Dixon, Tom Donaldson, Nina Emery, Kit Fine, Martin Glazier, Kathrin Koslicki, David Mark Kovacs, Stephan Krämer, Stephanie Leary, Stephan Leuenberger, Jon Litland, Marko Malink, Michaela McSweeney, Kevin Mulligan, Alyssa Ney, Asya Passinsky, Francesca Poggiolesi, Kevin Richardson, Stefan Roski, Noel Saenz, Benjamin Schnieder, Erica Shumener, Alexander Skiles, Olla Solomyak, Tuomas Tahko, Naomi Thompson, Kelly Trogdon, Jennifer Wang, Tobias Wilsch, and Justin Zylstra.
A leading philosopher takes a mind-bending journey through virtual worlds, illuminating the nature of reality and our place within it. Virtual reality is genuine reality; that's the central thesis of Reality+. In a highly original work of "technophilosophy," David J. Chalmers gives a compelling analysis of our technological future. He argues that virtual worlds are not second-class worlds, and that we can live a meaningful life in virtual reality. We may even be in a virtual world already. Along the way, Chalmers conducts a grand tour of big ideas in philosophy and science. He uses virtual reality technology to offer a new perspective on long-established philosophical questions. How do we know that there's an external world? Is there a god? What is the nature of reality? What's the relation between mind and body? How can we lead a good life? All of these questions are illuminated or transformed by Chalmers' mind-bending analysis. Studded with illustrations that bring philosophical issues to life, Reality+ is a major statement that will shape discussion of philosophy, science, and technology for years to come.
There is a long tradition in philosophy of using a priori methods to draw conclusions about what is possible and what is necessary, and often in turn to draw conclusions about matters of substantive metaphysics. Arguments like this typically have three steps: first an epistemic claim, from there to a modal claim, and from there to a metaphysical claim.
Does consciousness collapse the quantum wave function? This idea was taken seriously by John von Neumann and Eugene Wigner but is now widely dismissed. We develop the idea by combining a mathematical theory of consciousness (integrated information theory) with an account of quantum collapse dynamics (continuous spontaneous localization). Simple versions of the theory are falsified by the quantum Zeno effect, but more complex versions remain compatible with empirical evidence. In principle, versions of the theory can be tested by experiments with quantum computers. The upshot is not that consciousness-collapse interpretations are clearly correct, but that there is a research program here worth exploring.
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent--patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
Is conceptual analysis required for reductive explanation? If there is no a priori entailment from microphysical truths to phenomenal truths, does reductive explanation of the phenomenal fail? We say yes. Ned Block and Robert Stalnaker say no.
The philosophical interest of verbal disputes is twofold. First, they play a key role in philosophical method. Many philosophical disagreements are at least partly verbal, and almost every philosophical dispute has been diagnosed as verbal at some point. Here we can see the diagnosis of verbal disputes as a tool for philosophical progress. Second, they are interesting as a subject matter for first-order philosophy. Reflection on the existence and nature of verbal disputes can reveal something about the nature of concepts, language, and meaning. In this article I first characterize verbal disputes, spell out a method for isolating and resolving them, and draw out conclusions for philosophical methodology. I then use the framework to draw out consequences in first-order philosophy. In particular, I argue that the analysis of verbal disputes can be used to support the existence of a distinctive sort of primitive concept and that it can be used to reconstruct a version of an analytic/synthetic distinction, where both are characterized in dialectical terms alone.
Consciousness and intentionality are perhaps the two central phenomena in the philosophy of mind. Human beings are conscious beings: there is something it is like to be us. Human beings are intentional beings: we represent what is going on in the world. Correspondingly, our specific mental states, such as perceptions and thoughts, very often have a phenomenal character: there is something it is like to be in them. And these mental states very often have intentional content: they serve to represent the world. On the face of it, consciousness and intentionality are intimately connected. Our most important conscious mental states are intentional states: conscious experiences often inform us about the state of the world. And our most important intentional mental states are conscious states: there is often something it is like to represent the external world. It is natural to think that a satisfactory account of consciousness must respect its intentional structure, and that a satisfactory account of intentionality must respect its phenomenological character. With this in mind, it is surprising that in the last few decades, the philosophical study of consciousness and intentionality has often proceeded in two independent streams. This was not always the case. In the work of philosophers from Descartes and Locke to Brentano and Husserl, consciousness and intentionality were typically analyzed in a single package. But in the second half of the twentieth century, the dominant tendency was to concentrate on one topic or the other, and to offer quite separate analyses of the two. On this approach, the connections between consciousness and intentionality receded into the background. In the last few years, this has begun to change. The interface between consciousness and intentionality has received increasing attention on a number of fronts.
This attention has focused on such topics as the representational content of perceptual experience, the higher-order representation of conscious states, and the phenomenology of thinking. Two distinct philosophical groups have begun to emerge. One group focuses on ways in which consciousness might be grounded in intentionality. The other group focuses on ways in which intentionality might be grounded in consciousness.
Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature. In twentieth-century philosophy, this dilemma is posed most acutely in C. D. Broad’s The Mind and its Place in Nature. The phenomena of mind, for Broad, are the phenomena of consciousness. The central problem is that of locating mind with respect to the physical world. Broad’s exhaustive discussion of the problem culminates in a taxonomy of seventeen different views of the mental-physical relation. On Broad’s taxonomy, a view might see the mental as nonexistent, as reducible, as emergent, or as a basic property of a substance. The physical might be seen in one of the same four ways. So a four-by-four matrix of views results. At the end, three views are left standing: those on which mentality is an emergent characteristic of either a physical substance or a neutral substance, where in the latter case, the physical might be either emergent or delusive.
Why is two-dimensional semantics important? One can think of it as the most recent act in a drama involving three of the central concepts of philosophy: meaning, reason, and modality. First, Kant linked reason and modality, by suggesting that what is necessary is knowable a priori, and vice versa. Second, Frege linked reason and meaning, by proposing an aspect of meaning (sense) that is constitutively tied to cognitive significance. Third, Carnap linked meaning and modality, by proposing an aspect of meaning (intension) that is constitutively tied to possibility and necessity.
In the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were simply presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory.
In a friendly interdisciplinary debate, we interrogate from several vantage points the question of “personhood” in light of contemporary and near-future forms of social AI. David J. Gunkel approaches the matter from a philosophical and legal standpoint, while Jordan Wales offers reflections theological and psychological. Attending to metaphysical, moral, social, and legal understandings of personhood, we ask about the position of apparently personal artificial intelligences in our society and individual lives. Re-examining the “person” and questioning prominent construals of that category, we hope to open new views upon urgent and much-discussed questions that, quite soon, may confront us in our daily lives.
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors.
Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring at the Singularity”: “Computing speed doubles every two subjective years of work…”
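The convergence behind the “limit point” is a simple geometric series, and can be sketched numerically. This is a toy illustration under the argument's stated assumptions (speed doubles after every two subjective years of design work, and the designers run at the current machine speed); the function name and parameters are illustrative, not drawn from the source:

```python
# Illustrative arithmetic for the "speed explosion" (hypothetical parameters).
# Assumption: each design cycle takes two subjective years, and designers run
# at the current machine speed, so each successive cycle takes half as long
# in objective (wall-clock) time as the one before.

def objective_years_until_limit(subjective_years_per_doubling=2.0, doublings=50):
    """Sum the objective duration of successive design cycles."""
    total = 0.0
    speed = 1.0  # machine speed relative to the first design cycle
    for _ in range(doublings):
        total += subjective_years_per_doubling / speed  # cycle n takes 2 / 2**n years
        speed *= 2.0
    return total

# The series 2 + 1 + 0.5 + 0.25 + ... converges toward 4 objective years:
# a finite "limit point" at which the model's assumptions break down.
print(objective_years_until_limit())
```

On these assumptions the total objective time approaches four years no matter how many doublings occur, which is why the argument speaks of a limit point rather than merely ever-faster progress.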
I argue that virtual reality is a sort of genuine reality. In particular, I argue for virtual digitalism, on which virtual objects are real digital objects, and against virtual fictionalism, on which virtual objects are fictional objects. I also argue that perception in virtual reality need not be illusory, and that life in virtual worlds can have roughly the same sort of value as life in non-virtual worlds.
The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. It then considers three instances where recent innovations in robotics challenge this standard operating procedure by opening gaps in the usual way of assigning responsibility. The innovations considered in this section include: autonomous technology, machine learning, and social robots. The essay concludes by evaluating the three different responses—instrumentalism 2.0, machine ethics, and hybrid responsibility—that have been made in face of these difficulties in an effort to map out the opportunities and challenges of and for responsible robotics.
The term ‘emergence’ often causes confusion in science and philosophy, as it is used to express at least two quite different concepts. We can label these concepts _strong emergence_ and _weak emergence_. Both of these concepts are important, but it is vital to keep them separate.
A natural way to think about epistemic possibility is as follows. When it is epistemically possible (for a subject) that p, there is an epistemically possible scenario (for that subject) in which p. The epistemic scenarios together constitute epistemic space. It is surprisingly difficult to make the intuitive picture precise. What sort of possibilities are we dealing with here? In particular, what is a scenario? And what is the relationship between scenarios and items of knowledge and belief? This chapter tries to make sense of epistemic space. It explores different ways of making sense of scenarios and of their relationship to thought and language. It discusses some issues that arise and outlines some applications to the analysis of the content of thought and the meaning of language.
Was human nature designed by natural selection in the Pleistocene epoch? The dominant view in evolutionary psychology holds that it was -- that our psychological adaptations were designed tens of thousands of years ago to solve problems faced by our hunter-gatherer ancestors. In this provocative and lively book, David Buller examines in detail the major claims of evolutionary psychology -- the paradigm popularized by Steven Pinker in The Blank Slate and by David Buss in The Evolution of Desire -- and rejects them all. This does not mean that we cannot apply evolutionary theory to human psychology, says Buller, but that the conventional wisdom in evolutionary psychology is misguided. Evolutionary psychology employs a kind of reverse engineering to explain the evolved design of the mind, figuring out the adaptive problems our ancestors faced and then inferring the psychological adaptations that evolved to solve them. In the carefully argued central chapters of Adapting Minds, Buller scrutinizes several of evolutionary psychology's most highly publicized "discoveries," including "discriminative parental solicitude". Drawing on a wide range of empirical research, including his own large-scale study of child abuse, he shows that none is actually supported by the evidence. Buller argues that our minds are not adapted to the Pleistocene, but, like the immune system, are continually adapting, over both evolutionary time and individual lifetimes. We must move beyond the reigning orthodoxy of evolutionary psychology to reach an accurate understanding of how human psychology is influenced by evolution. When we do, Buller claims, we will abandon not only the quest for human nature but the very idea of human nature itself.
When I say ‘Hesperus is Phosphorus’, I seem to express a proposition. And when I say ‘Joan believes that Hesperus is Phosphorus’, I seem to ascribe to Joan an attitude to the same proposition. But what are propositions? And what is involved in ascribing propositional attitudes?
This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.
This appeared in Philosophy and Phenomenological Research 59:473-93, as a response to four papers in a symposium on my book The Conscious Mind. Most of it should be comprehensible without having read the papers in question. This paper is for an audience of philosophers and so is relatively technical. It will probably also help to have read some of the book. The papers I’m responding to are: Chris Hill & Brian McLaughlin, “There are fewer things in reality than are dreamt of in Chalmers’ philosophy”; Brian Loar, “David Chalmers’ The Conscious Mind”; Sydney Shoemaker, “On David Chalmers’ The Conscious Mind”; and Stephen Yablo, “Concepts and consciousness”.
Hilary Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. I argue that Putnam's argument fails, but that it points out the need for a better understanding of the bridge between the theory of computation and the theory of physical systems: the relation of implementation. It also raises questions about the class of automata that can serve as a basis for understanding the mind. I develop an account of implementation, linked to an appropriate class of automata, such that the requirement that a system implement a given automaton places a very strong constraint on the system. This clears the way for computation to play a central role in the analysis of mind.
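The implementation relation at issue can be given a toy rendering as a state-mapping check. This is a deliberately simplified sketch, not the full account defended in the paper (which also imposes counterfactual and causal-structure requirements that a purely extensional check cannot capture); all names and the example system are illustrative:

```python
# Toy sketch of the implementation relation: a physical system implements
# a finite-state automaton only if some mapping from physical states to
# automaton states commutes with the dynamics.

def implements(phys_states, phys_step, mapping, fsa_step):
    """For every physical state s, mapping the next physical state must
    equal stepping the mapped automaton state."""
    return all(mapping[phys_step(s)] == fsa_step(mapping[s]) for s in phys_states)

# Hypothetical example: a four-state physical counter coarse-grained
# onto a two-state (parity) automaton.
phys_states = [0, 1, 2, 3]
phys_step = lambda s: (s + 1) % 4              # physical dynamics
mapping = {0: "A", 1: "B", 2: "A", 3: "B"}     # physical state -> automaton state
fsa_step = lambda q: "B" if q == "A" else "A"  # automaton transition function

print(implements(phys_states, phys_step, mapping, fsa_step))  # prints True
```

Even in this toy form, the condition is non-trivial: an arbitrary grouping of physical states will generally fail to commute with the transition function, which is the sense in which implementing a given automaton constrains a system.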
Two-dimensional approaches to semantics, broadly understood, recognize two "dimensions" of the meaning or content of linguistic items. On these approaches, expressions and their utterances are associated with two different sorts of semantic values, which play different explanatory roles. Typically, one semantic value is associated with reference and ordinary truth-conditions, while the other is associated with the way that reference and truth-conditions depend on the external world. The second sort of semantic value is often held to play a distinctive role in analyzing matters of cognitive significance and/or context-dependence.
The search for neural correlates of consciousness (or NCCs) is arguably the cornerstone in the recent resurgence of the science of consciousness. The search poses many difficult empirical problems, but it seems to be tractable in principle, and some ingenious studies in recent years have led to considerable progress. A number of proposals have been put forward concerning the nature and location of neural correlates of consciousness. A few of these include.
Philosophers and cognitive scientists address the relationships among the senses and the connections between conscious experiences that form unified wholes. In this volume, cognitive scientists and philosophers examine two closely related aspects of mind and mental functioning: the relationships among the various senses and the links that connect different conscious experiences to form unified wholes. The contributors address a range of questions concerning how information from one sense influences the processing of information from the other senses and how unified states of consciousness emerge from the bonds that tie conscious experiences together. Sensory Integration and the Unity of Consciousness is the first book to address both of these topics, integrating scientific and philosophical concerns. A flood of recent work in both philosophy and perception science has challenged traditional conceptions of the sensory systems as operating in isolation. Contributors to the volume consider the ways in which perceptual contact with the world is or may be “multisensory,” discussing such subjects as the modeling of multisensory integration and philosophical aspects of sensory modalities. Recent years have seen a similar surge of interest in unity of consciousness. Contributors explore a range of questions on this topic, including the nature of that unity, the degree to which conscious experiences are unified, and the relationship between unified consciousness and the self. Contributors Tim Bayne, David J. Bennett, Berit Brogaard, Barry Dainton, Ophelia Deroy, Frederique de Vignemont, Marc Ernst, Richard Held, Christopher S. Hill, Geoffrey Lee, Kristan Marlow, Farid Masrour, Jennifer Matey, Casey O'Callaghan, Cesare V. Parise, Kevin Rice, Elizabeth Schechter, Pawan Sinha, Julia Trommershaeuser, Loes C. J. van Dam, Jonathan Vogel, James Van Cleve, Robert Van Gulick, Jonas Wulff.
The Matrix presents a version of an old philosophical fable: the brain in a vat. A disembodied brain is floating in a vat, inside a scientist’s laboratory. The scientist has arranged that the brain will be stimulated with the same sort of inputs that a normal embodied brain receives. To do this, the brain is connected to a giant computer simulation of a world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain, despite the fact that it lacks a body. From the brain’s point of view, things seem very much as they seem to you and me.
Where does the mind stop and the rest of the world begin? The question invites two standard replies. Some accept the demarcations of skin and skull, and say that what is outside the body is outside the mind. Others are impressed by arguments suggesting that the meaning of our words "just ain't in the head", and hold that this externalism about meaning carries over into an externalism about mind. We propose to pursue a third position. We advocate a very different sort of externalism: an _active externalism_, based on the active role of the environment in driving cognitive processes.
In Philosophy Without Intuitions, Herman Cappelen focuses on the metaphilosophical thesis he calls Centrality: contemporary analytic philosophers rely on intuitions as evidence for philosophical theories. Using linguistic and textual analysis, he argues that Centrality is false. He also suggests that because most philosophers accept Centrality, they have mistaken beliefs about their own methods. To put my own views on the table: I do not have a large theoretical stake in the status of intuitions, but unreflectively I find it fairly obvious that many philosophers, including myself, appeal to intuitions. Cappelen’s arguments make a provocative challenge to this unreflective background conception. So it is interesting to work through the arguments to see what they might and might not show. In what follows I aim to articulate a minimal notion of intuition that captures something of the core everyday philosophical usage of the term, and that captures the sense…
Cartesian arguments for global skepticism about the external world start from the premise that we cannot know that we are not in a Cartesian scenario such as an evil-demon scenario, and infer that because most of our empirical beliefs are false in such a scenario, these beliefs do not constitute knowledge. Veridicalist responses to global skepticism hold that these arguments fail because in Cartesian scenarios, many or most of our empirical beliefs are true. Some veridicalist responses have been motivated using verificationism, externalism, and coherentism. I argue that a more powerful veridicalist response to global skepticism can be motivated by structuralism, on which physical entities are understood as those that play a certain structural role. I develop the structuralist response and address objections.
Introduction: making the invisible visible -- The nobility of the material -- Research at war -- The gilded age of research -- The doctor as whistle-blower -- New rules for the laboratory -- Bedside ethics -- The doctor as stranger -- Life through death -- Commissioning ethics -- No one to trust -- New rules for the bedside -- Epilogue: The price of success.
The objects of credence are the entities to which credences are assigned for the purposes of a successful theory of credence. I use cases akin to Frege's puzzle to argue against referentialism about credence: the view that objects of credence are determined by the objects and properties at which one's credence is directed. I go on to develop a non-referential account of the objects of credence in terms of sets of epistemically possible scenarios.
Graeme Forbes (2011) raises some problems for two-dimensional semantic theories. The problems concern nested environments: linguistic environments where sentences are nested under both modal and epistemic operators. Closely related problems involving nested environments have been raised by Scott Soames (2005) and Josh Dever (2007). Soames goes so far as to say that nested environments pose the “chief technical problem” for strong two-dimensionalism. We call the problem of handling nested environments within two-dimensional semantics “the nesting problem”. We show that the two-dimensional semantics for attitude ascriptions developed in Chalmers (2011a) has no trouble accommodating certain forms of the nesting problem that involve factive verbs such as “know” or “establish”. A certain form of the nesting problem involving apriority and necessity operators does raise an interesting puzzle, but we show how a generalized version of the nesting problem arises independently of two-dimensional semantics—it arises, in fact, for anyone who accepts the contingent a priori. We then provide a two-dimensional treatment of the apriority operator that fits the two-dimensional treatment of attitude verbs and apply it to the generalized nesting problem. We conclude that two-dimensionalism is not seriously threatened by cases involving the nesting of epistemic and modal operators.
This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is considered necessary and sufficient to be a moral agent or patient but because the characterization of agency and patiency already fails to accommodate others. The third and fourth parts respond to this problem by considering two recent alternatives—the all-encompassing ontocentric approach of Luciano Floridi’s information ethics and Emmanuel Levinas’s eccentric ethics of otherness. Both alternatives, despite considerable promise to reconfigure the scope of moral thinking by addressing previously excluded others, like the machine, also fail but for other reasons. Consequently, the essay concludes not by accommodating the alterity of the machine to the requirements of moral philosophy but by questioning the systemic limitations of moral reasoning, requiring not just an extension of rights to machines, but a thorough examination of the way moral standing has been configured in the first place.
The ethical behavior of marketing managers was examined by analyzing their responses to a series of different types of ethical dilemmas presented in vignette form. The ethical dilemmas addressed dealt with the issues of (1) coercion and control, (2) conflict of interest, (3) the physical environment, (4) paternalism, and (5) personal integrity. Responses were analyzed to discover whether managers' behavior varied by type of issue faced or whether there is some continuity to ethical behavior which transcends the type of ethical problem addressed.
In this book, David Stump traces alternative conceptions of the a priori in the philosophy of science and defends a unique position in the current debates over conceptual change and the constitutive elements in science. Stump emphasizes the unique epistemological status of the constitutive elements of scientific theories, constitutive elements being the necessary preconditions that must be assumed in order to conduct a particular scientific inquiry. These constitutive elements, such as logic, mathematics, and even some fundamental laws of nature, were once taken to be a priori knowledge but can change, thus leading to a dynamic or relative a priori. Stump critically examines developments in thinking about constitutive elements in science as a priori knowledge, from Kant’s fixed and absolute a priori to Quine’s holistic empiricism. By examining the relationship between conceptual change and the epistemological status of constitutive elements in science, Stump puts forward an argument that scientific revolutions can be explained and relativism can be avoided without resorting to universals or absolutes.
This paper is largely based on material in other papers. The first three sections and the appendix are drawn with minor modifications from Chalmers 2002c. The main ideas of the last three sections are drawn from Chalmers 1996, 1999, and 2002a, although with considerable revision and elaboration.