Addiction Trajectories is a collection of anthropological essays that brings a refreshingly human perspective to the scientific pursuit of addiction. This book encourages the reader to step back from the details, giving voice to the experiences of drug users as they struggle to come to terms with their condition and with the efforts of the treatment community. At the same time, the book provides insight into the machinations of the treatment community as it struggles to understand the scope of its task and to respond to the shifting intellectual landscapes of licit and illicit drugs. The addiction trajectories framework is constructed around three themes. Theme one concerns the changing categories and concepts of addiction through time and across disciplinary boundaries. Theme two represents changes in therapeutic concepts of addiction across cultures and organisations. Theme three represents the experiences of the addicted individual. Conceptualising trajectories captures the shifting nature …
Bentham's dictum, ‘everybody to count for one, nobody for more than one’, is frequently noted but seldom discussed by commentators. Perhaps it is not thought contentious or exciting because it is interpreted as merely reminding the utilitarian legislator to make certain that each person's interests are included, that no one is missed, in working the felicific calculus. Since no interests are secure against the maximizing directive of the utility principle, which allows them to be overridden or sacrificed, the dictum is not usually taken to be asserting fundamental rights that afford individuals normative protection against the actions of others or against legislative policies deemed socially expedient. Such non-conventional moral rights seem denied a place in a utilitarian theory so long as the maximization of aggregate happiness remains the ultimate standard and moral goal.
In editing Plato's Sophist for the new OCT vol. I (ed. E. A. Duke, W. F. Hicken, W. S. M. Nicoll, D. B. Robinson, and J. C. G. Strachan), there was less chance of giving novel information about W = Vind. Supp. Gr. 7 for this dialogue than for others in the volume, since Apelt's edition of 1897 was used by Burnet in 1900 and was based on Apelt's own collation of W. The result was better than the somewhat confused information printed by Burnet, even in his 1905 reprint, for W for the other dialogues in vol. I. But in the Sophist as elsewhere in vol. I collations largely due to Dr W. S. M. Nicoll added new facts about all of BDTWP and their correctors, and the search for testimonia largely carried out by Dr E. A. Duke added new facts in that area. A reviewer counts 66 changes in our text of the Sophist, which may perhaps be a slight over-estimate. Classification of changes as substantive or as falling into different groups is sometimes difficult, but I think plausible figures are as follows. We have in 25 places made a different choice of readings from the primary mss. and testimonia. We have printed conjectures where Burnet kept a ms. reading in 17 places, but conversely we have reverted to a ms. reading where Burnet had a conjecture in 8 places. We have printed alternative conjectures to conjectures adopted by Burnet in 6 places. So we have actually departed from the primary sources on at most 9 more occasions overall than Burnet. What must be noted is that Burnet had already printed conjectures on something like 87 occasions, so our percentage addition to Burnet's departures from the primary sources is modest. Moreover Burnet printed about 25 readings from testimonia; we have followed him in 20 or so of these cases, and this in turn implies that the primary mss. are in error at these further 20 places. It needs to be underlined that though Burnet undoubtedly deserved to be regarded as a safe and cautious editor, nevertheless he departed from the primary mss. on average about twice per Stephanus page in this dialogue. Sometimes, of course, testimonia showed him right to do this, but testimonia cover only a quite small part of this dialogue. Otherwise Burnet accepted almost 90 conjectures. For the Politicus the figures are fairly similar; Burnet accepted 22 Byzantine conjectures and 35–40 more modern ones. The new OCT there adds 15 or so more conjectures.
David J. Kalupahana's Buddhist Philosophy: A Historical Analysis has, since its original publication in 1976, offered an unequaled introduction to the philosophical principles and historical development of Buddhism. Now, representing the culmination of Dr. Kalupahana's thirty years of scholarly research and reflection, A History of Buddhist Philosophy builds upon and surpasses that earlier work, providing a completely reconstructed, detailed analysis of both early and later Buddhism.
Inspired by Rudolf Carnap's Der Logische Aufbau der Welt, David J. Chalmers argues that the world can be constructed from a few basic elements. He develops a scrutability thesis, which says that all truths about the world can be derived from basic truths and ideal reasoning. This thesis leads to many philosophical consequences: a broadly Fregean approach to meaning, an internalist approach to the contents of thought, and a reply to W. V. Quine's arguments against the analytic and the a priori. Chalmers also uses scrutability to analyze the unity of science, to defend a conceptual approach to metaphysics, and to mount a structuralist response to skepticism. Based on the 2010 John Locke Lectures, Constructing the World opens up debate on central philosophical issues involving language, consciousness, knowledge, and reality. This major work by a leading philosopher will appeal to philosophers in all areas. This entry contains uncorrected proofs of the front matter, chapter 1, and the first excursus.
The book is an extended study of the problem of consciousness. After setting up the problem, I argue that reductive explanation of consciousness is impossible, and that if one takes consciousness seriously, one has to go beyond a strict materialist framework. In the second half of the book, I move toward a positive theory of consciousness with fundamental laws linking the physical and the experiential in a systematic way. Finally, I use the ideas and arguments developed earlier to defend a form of strong artificial intelligence and to analyze some problems in the foundations of quantum mechanics.
This paper argues that higher-order doubt generates an epistemic dilemma. One has a higher-order doubt with regard to P insofar as one justifiably withholds belief as to what attitude towards P is justified. That is, one justifiably withholds belief as to whether one is justified in believing, disbelieving, or withholding belief in P. Using the resources provided by Richard Feldman’s recent discussion of how to respect one’s evidence, I argue that if one has a higher-order doubt with regard to P, then one is not justified in having any attitude towards P. Otherwise put: no attitude towards the doubted proposition respects one’s higher-order doubt. I argue that the most promising response to this problem is to hold that when one has a higher-order doubt about P, the best one can do to respect such a doubt is to simply have no attitude towards P. Higher-order doubt is thus much more rationally corrosive than non-higher-order doubt, as it undermines the possibility of justifiably having any attitude towards the doubted proposition.
"Throughout the centuries, moral philosophers, both Eastern and Western, considered a permanent and eternal law a necessary requirement for the formulation of a moral principle. If such a law was not empirically given, it had to be determined through reason. In contrast, early Buddhism presented a radical theory of impermanence. Interpreters of early Buddhism have been unable to abandon the presupposition of permanence, however, and hence have persisted in viewing nirvana or freedom as a permanent and eternal state to be contrasted with the impermanent world of sensory experience and bondage. Ethics in Early Buddhism is David J. Kalupahana's balanced and brilliantly concise attempt to place the early Buddhist descriptions of the world of experience, the state of freedom, and the moral principle leading to such freedom within the framework of impermanence." --Book jacket.
There is a long tradition in philosophy of using a priori methods to draw conclusions about what is possible and what is necessary, and often in turn to draw conclusions about matters of substantive metaphysics. Arguments like this typically have three steps: first an epistemic claim, from there to a modal claim, and from there to a metaphysical claim.
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent--patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
In this book, David Stump traces alternative conceptions of the a priori in the philosophy of science and defends a unique position in the current debates over conceptual change and the constitutive elements in science. Stump emphasizes the unique epistemological status of the constitutive elements of scientific theories, constitutive elements being the necessary preconditions that must be assumed in order to conduct a particular scientific inquiry. These constitutive elements, such as logic, mathematics, and even some fundamental laws of nature, were once taken to be a priori knowledge but can change, thus leading to a dynamic or relative a priori. Stump critically examines developments in thinking about constitutive elements in science as a priori knowledge, from Kant’s fixed and absolute a priori to Quine’s holistic empiricism. By examining the relationship between conceptual change and the epistemological status of constitutive elements in science, Stump puts forward an argument that scientific revolutions can be explained and relativism can be avoided without resorting to universals or absolutes.
Why is two-dimensional semantics important? One can think of it as the most recent act in a drama involving three of the central concepts of philosophy: meaning, reason, and modality. First, Kant linked reason and modality, by suggesting that what is necessary is knowable a priori, and vice versa. Second, Frege linked reason and meaning, by proposing an aspect of meaning (sense) that is constitutively tied to cognitive significance. Third, Carnap linked meaning and modality, by proposing an aspect of meaning (intension) that is constitutively tied to possibility and necessity.
This chapter analyzes aspects of the relationship between consciousness and intentionality. It focuses on the phenomenal character and the intentional content of perceptual states, canvassing various possible relations among them. It argues that there is a good case for a sort of representationalism, although this may not take the form that its advocates often suggest. By mapping out some of the landscape, the chapter tries to open up territory for different and promising forms of representationalism to be explored in the future. In particular, it argues for a nonreductive, narrow, and Fregean variety of representationalism, which contrasts strongly with more widely explored varieties. It concludes with some words about the fundamental relationship between consciousness and intentionality.
What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make. The key idea is that a machine that is more intelligent than humans will be better than humans at designing machines. So it will be capable of designing a machine more intelligent than the most intelligent machine that humans can design. So if it is itself designed by humans, it will be capable of designing a machine more intelligent than itself. By similar reasoning, this next machine will also be capable of designing a machine more intelligent than itself. If every machine in turn does what it is capable of, we should expect a sequence of ever more intelligent machines. This intelligence explosion is sometimes combined with another idea, which we might call the “speed explosion”. The argument for a speed explosion starts from the familiar observation that computer processing speed doubles at regular intervals. Suppose that speed doubles every two years and will do so indefinitely. Now suppose that we have human-level artificial intelligence designing new processors.
Then faster processing will lead to faster designers and an ever-faster design cycle, leading to a limit point soon afterwards. The argument for a speed explosion was set out by the artificial intelligence researcher Ray Solomonoff in his 1985 article “The Time Scale of Artificial Intelligence”. Eliezer Yudkowsky gives a succinct version of the argument in his 1996 article “Staring into the Singularity”: “Computing speed doubles every two subjective years of work …”
Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature. In twentieth-century philosophy, this dilemma is posed most acutely in C. D. Broad’s The Mind and its Place in Nature. The phenomena of mind, for Broad, are the phenomena of consciousness. The central problem is that of locating mind with respect to the physical world. Broad’s exhaustive discussion of the problem culminates in a taxonomy of seventeen different views of the mental-physical relation. On Broad’s taxonomy, a view might see the mental as nonexistent, as reducible, as emergent, or as a basic property of a substance. The physical might be seen in one of the same four ways. So a four-by-four matrix of views results. At the end, three views are left standing: those on which mentality is an emergent characteristic of either a physical substance or a neutral substance, where in the latter case, the physical might be either emergent or delusive.
Does consciousness collapse the quantum wave function? This idea was taken seriously by John von Neumann and Eugene Wigner but is now widely dismissed. We develop the idea by combining a mathematical theory of consciousness (integrated information theory) with an account of quantum collapse dynamics (continuous spontaneous localization). Simple versions of the theory are falsified by the quantum Zeno effect, but more complex versions remain compatible with empirical evidence. In principle, versions of the theory can be tested by experiments with quantum computers. The upshot is not that consciousness-collapse interpretations are clearly correct, but that there is a research program here worth exploring.
In the Garden of Eden, we had unmediated contact with the world. We were directly acquainted with objects in the world and with their properties. Objects were simply presented to us without causal mediation, and properties were revealed to us in their true intrinsic glory.
Was human nature designed by natural selection in the Pleistocene epoch? The dominant view in evolutionary psychology holds that it was -- that our psychological adaptations were designed tens of thousands of years ago to solve problems faced by our hunter-gatherer ancestors. In this provocative and lively book, David Buller examines in detail the major claims of evolutionary psychology -- the paradigm popularized by Steven Pinker in The Blank Slate and by David Buss in The Evolution of Desire -- and rejects them all. This does not mean that we cannot apply evolutionary theory to human psychology, says Buller, but that the conventional wisdom in evolutionary psychology is misguided. Evolutionary psychology employs a kind of reverse engineering to explain the evolved design of the mind, figuring out the adaptive problems our ancestors faced and then inferring the psychological adaptations that evolved to solve them. In the carefully argued central chapters of Adapting Minds, Buller scrutinizes several of evolutionary psychology's most highly publicized "discoveries," including "discriminative parental solicitude". Drawing on a wide range of empirical research, including his own large-scale study of child abuse, he shows that none is actually supported by the evidence. Buller argues that our minds are not adapted to the Pleistocene, but, like the immune system, are continually adapting, over both evolutionary time and individual lifetimes. We must move beyond the reigning orthodoxy of evolutionary psychology to reach an accurate understanding of how human psychology is influenced by evolution. When we do, Buller claims, we will abandon not only the quest for human nature but the very idea of human nature itself.
Introduction -- Repression, ignorance, and undone science -- The epistemic dimension of the political opportunity structure -- The politics of meaning: from frames to design conflicts -- The organizational forms of counterpublic knowledge -- Institutional change, industrial transitions, and regime resistance politics -- Contemporary change: liberalization and epistemic modernization -- Conclusion.
Introduction: making the invisible visible -- The nobility of the material -- Research at war -- The gilded age of research -- The doctor as whistle-blower -- New rules for the laboratory -- Bedside ethics -- The doctor as stranger -- Life through death -- Commissioning ethics -- No one to trust -- New rules for the bedside -- Epilogue: The price of success.
The term ‘emergence’ often causes confusion in science and philosophy, as it is used to express at least two quite different concepts. We can label these concepts _strong emergence_ and _weak emergence_. Both of these concepts are important, but it is vital to keep them separate.
In Kierkegaard’s Instant, David J. Kangas reads Kierkegaard to reveal his radical thinking about temporality. For Kierkegaard, the instant of becoming, in which everything changes in the blink of an eye, eludes recollection and anticipation. It constitutes a beginning always already at work. As Kangas shows, Kierkegaard’s retrieval of the sudden quality of temporality allows him to stage a deep critique of the idealist projects of Fichte, Schelling, and Hegel. By linking Kierkegaard’s thought to the tradition of Meister Eckhart, Kangas formulates the central problem of these early texts and puts them into contemporary light: can thinking hold itself open to the challenges of temporality?
The search for neural correlates of consciousness (or NCCs) is arguably the cornerstone in the recent resurgence of the science of consciousness. The search poses many difficult empirical problems, but it seems to be tractable in principle, and some ingenious studies in recent years have led to considerable progress. A number of proposals have been put forward concerning the nature and location of neural correlates of consciousness. A few of these include.
The Matrix presents a version of an old philosophical fable: the brain in a vat. A disembodied brain is floating in a vat, inside a scientist’s laboratory. The scientist has arranged that the brain will be stimulated with the same sort of inputs that a normal embodied brain receives. To do this, the brain is connected to a giant computer simulation of a world. The simulation determines which inputs the brain receives. When the brain produces outputs, these are fed back into the simulation. The internal state of the brain is just like that of a normal brain, despite the fact that it lacks a body. From the brain’s point of view, things seem very much as they seem to you and me.
The objects of credence are the entities to which credences are assigned for the purposes of a successful theory of credence. I use cases akin to Frege's puzzle to argue against referentialism about credence: the view that objects of credence are determined by the objects and properties at which one's credence is directed. I go on to develop a non-referential account of the objects of credence in terms of sets of epistemically possible scenarios.
*[[This paper is largely based on material in other papers. The first three sections and the appendix are drawn with minor modifications from Chalmers 2002c. The main ideas of the last three sections are drawn from Chalmers 1996, 1999, and 2002a, although with considerable revision and elaboration.]]
Conscious experience is at once the most familiar thing in the world and the most mysterious. There is nothing we know about more directly than consciousness, but it is extraordinarily hard to reconcile it with everything else we know. Why does it exist? What does it do? How could it possibly arise from neural processes in the brain? These questions are among the most intriguing in all of science.
In recent years there has been an explosion of scientific work on consciousness in cognitive neuroscience, psychology, and other fields. It has become possible to think that we are moving toward a genuine scientific understanding of conscious experience. But what is the science of consciousness all about, and what form should such a science take? This chapter gives an overview of the agenda.
There has been much interest in the possibility of connectionist models whose representations can be endowed with compositional structure, and a variety of such models have been proposed. These models typically use distributed representations that arise from the functional composition of constituent parts. Functional composition and decomposition alone, however, yield only an implementation of classical symbolic theories. This paper explores the possibility of moving beyond implementation by exploiting holistic structure-sensitive operations on distributed representations. An experiment is performed using Pollack’s Recursive Auto-Associative Memory (RAAM). RAAM is used to construct distributed representations of syntactically structured sentences. A feed-forward network is then trained to operate directly on these representations, modeling syntactic transformations of the represented sentences. Successful training and generalization are obtained, demonstrating that the implicit structure present in these representations can be used for a kind of structure-sensitive processing unique to the connectionist domain.
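The core mechanism the abstract describes, composing two child representations into one fixed-width parent vector and decomposing it again, can be sketched in a few lines. The following is a minimal toy illustration in the spirit of a RAAM, not Pollack's actual model: it trains a single encoder/decoder pair on random "leaf" vectors only (no recursive tree encoding, and no second transformation network), and every size, learning rate, and data choice here is a hypothetical choice for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8      # width of a distributed representation (assumed)
lr = 0.05  # learning rate (assumed)

# Encoder compresses two child vectors (2d) into one parent vector (d);
# the decoder expands a parent back into two reconstructed children.
W_enc = rng.normal(scale=0.1, size=(d, 2 * d))
W_dec = rng.normal(scale=0.1, size=(2 * d, d))

def encode(left, right):
    return np.tanh(W_enc @ np.concatenate([left, right]))

def decode(parent):
    out = np.tanh(W_dec @ parent)
    return out[:d], out[d:]

# Toy "lexicon": random leaf vectors standing in for word representations.
leaves = rng.uniform(-0.5, 0.5, size=(6, d))
pairs = [(leaves[i], leaves[j]) for i in range(6) for j in range(6)]

# Auto-associative training: reconstruct the concatenated children from
# the compressed parent, with manual backprop through both tanh layers.
losses = []
for step in range(2000):
    total = 0.0
    dW_enc = np.zeros_like(W_enc)
    dW_dec = np.zeros_like(W_dec)
    for left, right in pairs:
        x = np.concatenate([left, right])  # target equals input
        h = np.tanh(W_enc @ x)             # compose children into parent
        y = np.tanh(W_dec @ h)             # decompose parent back
        err = y - x
        total += float(err @ err)
        g_y = 2 * err * (1 - y ** 2)       # gradient at decoder output
        dW_dec += np.outer(g_y, h)
        g_h = (W_dec.T @ g_y) * (1 - h ** 2)
        dW_enc += np.outer(g_h, x)
    W_enc -= lr * dW_enc / len(pairs)
    W_dec -= lr * dW_dec / len(pairs)
    losses.append(total / len(pairs))
```

After training, `encode` yields a fixed-width vector for any pair of leaves, and a separate feed-forward network could, as in the paper, be trained to map such vectors directly to the encodings of transformed structures rather than decoding them symbolically first.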
It is widely accepted that conscious experience has a physical basis. That is, the properties of experience (phenomenal properties, or qualia) systematically depend on physical properties according to some lawful relation. There are two key questions about this relation. The first concerns the strength of the laws: are they logically or metaphysically necessary, so that consciousness is nothing "over and above" the underlying physical process, or are they merely contingent laws like the law of gravity? This question about the strength of the psychophysical link is the basis for debates over physicalism and property dualism. The second question concerns the shape of the laws: precisely how do phenomenal properties depend on physical properties? What sort of physical properties enter into the laws' antecedents, for instance; consequently, what sort of physical systems can give rise to conscious experience? It is this second question that I address in this paper.
*[[This paper appears in _Toward a Science of Consciousness II: The Second Tucson Discussions and Debates_ (S. Hameroff, A. Kaszniak, and A. Scott, eds), published by MIT Press in 1998. It is a transcript of my talk at the second Tucson conference in April 1996, lightly edited to include the contents of overheads and to exclude some diversions with a consciousness meter. A more in-depth argument for some of the claims in this paper can be found in Chapter 6 of my book _The Conscious Mind_ (Chalmers, 1996).]]