Codes of ethics are increasingly being adopted in organizations worldwide, yet their effects on employee perceptions and behavior have not been thoroughly addressed. This study used a sample of 613 management accountants drawn from the United States to study the relationship between corporate and professional codes of ethics and employee attitudes and behaviors. The presence of corporate codes of ethics was associated with less perceived wrongdoing in organizations, but not with an increased propensity to report observed unethical behavior. Further, organizations that adopted formal codes of ethics exhibited value orientations that went beyond financial performance to include responsibility to the commonweal. In contrast to corporate codes of ethics, professional codes of ethical conduct had no influence on perceived wrongdoing in organizations, nor did these codes affect the propensity to report observed unethical activities.
This article looks at Deleuze and Guattari's understanding of molecular biology, focusing particularly on their reading of two highly influential works by the eminent French molecular biologists François Jacob and Jacques Monod, La logique du vivant and Le hasard et la nécessité. In these two works, Jacob and Monod present the significance of molecular biology in broadly reductionist terms. What is more, the lac operon model of gene regulation that they propose serves to reinforce the so-called Central Dogma of molecular biology, according to which information passes from DNA to RNA to proteins, with no reverse route. However, Deleuze and Guattari discover intensive potentials within the descriptions of molecular biology offered by both writers. It is argued that Jacob's work in particular, as it has developed in the years since the publication of La logique du vivant in 1970, has itself developed these intensive potentials.
This article explores literary interrogations of the bioethical implications of cloning. It does so by outlining the basic science of cloning before going on to question the dominance of the Freudian notion of the ‘uncanny’ in the critical theoretical responses to cloning by figures such as Jean Baudrillard and Slavoj Žižek. The second half of the article turns to two recent novels exploring the theme of cloning: Eva Hoffman's The Secret, and Kazuo Ishiguro's Never Let Me Go. It is argued that the former rehearses familiar themes of revulsion connected to the figure of the clone, yet resolves the struggle for identity in a ‘human’ conclusion; whereas the latter maintains the uncanny in-human difference of the clone even as it highlights the dangers of the biopolitical instrumentalization of life itself. The article therefore argues that fictional treatments of cloning can provide an important alternative to simplified debates on the subject in the mass media.
This article considers the important contribution made by Edgar Morin and Henri Atlan – both members of the Groupe des Dix – to the theorisation of life and information. They have both played a major role in challenging the dominant determinist, mechanistic paradigm of molecular biology that emerged in the 1960s, and which has continued to influence thinking on biology up to the present day. The article will show how they explored key concepts, such as the principle of order from noise, in order to reorient thinking on life and information in terms of a new paradigm of complexity that stood as a radical challenge to the determinist paradigm. A key insight in this respect was the relationship between life, information and meaning.
This book provides a comprehensive, systematic theory of moral responsibility. The authors explore the conditions under which individuals are morally responsible for actions, omissions, consequences, and emotions. The leading idea in the book is that moral responsibility is based on 'guidance control'. This control has two components: the mechanism that issues in the relevant behavior must be the agent's own mechanism, and it must be appropriately responsive to reasons. The book develops an account of both components. The authors go on to offer a sustained defense of the thesis that moral responsibility is compatible with causal determinism.
The modernist and scientific juxtaposition of object and subject is inappropriate when investigating the nature of “knowledge.” This commentary argues that the usual methodological dichotomy fails when it is applied to the domain of “knowledge.” The two instead coalesce within the topic itself, demanding the most careful self-awareness.
Beliefs are concrete particulars containing ideas of properties and notions of things, which also are concrete. The claim made in a belief report is that the agent has a belief (i) whose content is a specific singular proposition, and (ii) which involves certain of the agent's notions and ideas in a certain way. No words in the report stand for the notions and ideas, so they are unarticulated constituents of the report's content (like the relevant place in "it's raining"). The belief puzzles (Hesperus, Cicero, Pierre) involve reports about two different notions. So the analysis gets the puzzling truth values right.
Contemporary virtue epistemology (hereafter ‘VE’) is a diverse collection of approaches to epistemology. At least two central tendencies are discernible among the approaches. First, they view epistemology as a normative discipline. Second, they view intellectual agents and communities as the primary focus of epistemic evaluation, with a focus on the intellectual virtues and vices embodied in and expressed by these agents and communities.

This entry introduces many of the most important results of the contemporary VE research program. These include novel attempts to resolve longstanding disputes, solve perennial problems, grapple with novel challenges, and expand epistemology’s horizons. In the process, it reveals the diversity within VE. Beyond sharing the two unifying commitments mentioned above, its practitioners diverge over the nature of intellectual virtues, which questions to ask, and which methods to use.

It will be helpful to note some terminology before proceeding. First, we use ‘cognitive’, ‘epistemic’ and ‘intellectual’ synonymously. Second, we often use ‘normative’ broadly to include not only norms and rules, but also duties and values. Finally, ‘practitioners’ names contemporary virtue epistemologists.
Transparency and openness are broadly endorsed in energy and environmental modelling and analysis, but too little attention is given to the transparency of value-laden assumptions. Current practices for transparency focus on making model source code and data available, documenting key equations and parameter values, and ensuring replicability of results. We argue that, even when followed, these guidelines are insufficient for achieving deep transparency, in the sense that results often remain driven by implicit value-laden assumptions that are opaque to other modellers and researchers, and that may not be understood by wider audiences to be controversial. This paper identifies additional best practices for achieving transparency by highlighting issues where disagreement over value judgements will persist for the foreseeable future. Recommendations for deepening transparency are developed by learning from successes and ongoing challenges represented in three case studies. We provide recommendations to accelerate the adoption of additional best practices for deepening the transparency of energy and environmental modelling in policy-relevant domains, broadening stakeholder participation to include non-modellers, and encouraging interdisciplinary dialogue.
The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
This unique text focuses on ethical puzzles and hypothetical problems to help students at all levels understand and refine their moral principles and see how they apply to various situations. An extensive, thoughtfully written introduction provides the theoretical background and lays out numerous moral puzzle cases that are analyzed and discussed throughout the text. Challenging follow-up articles argue a variety of stances on the ethical puzzles set forth in the introduction.
Mobile devices with health apps, direct-to-consumer genetic testing, crowd-sourced information, and other data sources have enabled research by new classes of researchers. Independent researchers, citizen scientists, patient-directed researchers, self-experimenters, and others are not covered by federal research regulations because they are not recipients of federal financial assistance or conducting research in anticipation of a submission to the FDA for approval of a new drug or medical device. This article addresses the difficult policy challenge of promoting the welfare and interests of research participants, as well as the public, in the absence of regulatory requirements and without discouraging independent, innovative scientific inquiry. The article recommends a series of measures, including education, consultation, transparency, self-governance, and regulation to strike the appropriate balance.
This is the first book to use complexity theory to open up the 'geophilosophy' developed by Gilles Deleuze and Felix Guattari in A Thousand Plateaus, Anti-Oedipus and What is Philosophy?. Written by a philosopher and a geographer in a clear style, with a practical orientation and interdisciplinary focus, the Guide enables readers to grasp the basics of complexity theory (the study of self-organisation and emergence in material systems), while the Glossary eases the difficulty of applying this science to Deleuze and Guattari's often perplexing terminology. Deleuze and Geophilosophy is thoroughly pragmatic: it asks not what the earth means, but how it works. It provides a common conceptual framework within which physical and human geographers can work together alongside other social scientists, cultural studies practitioners, and philosophers in interdisciplinary teams to explore the entangled flows, lines, grids, and spaces of our world. The book will be of interest to all those working in disciplines at the intersections of culture, nature, space, and history: anthropology, art and architecture theory, communication studies, geography, Marxism and historical materialism, philosophy, postcolonial theory, urban studies, and many other disciplines.
The journal Cognitive Computation is defined in part by the notion that biologically inspired computational accounts are at the heart of cognitive processes in both natural and artificial systems. Many studies of various important aspects of cognition (memory, observational learning, decision making, reward prediction learning, attention control, etc.) have been made by modelling the various experimental results using ever-more sophisticated computer programs. In this manner progressive inroads have been made into gaining a better understanding of the many components of cognition. Concomitantly, in both science and science fiction, the hope is periodically re-ignited that a man-made system can be engineered to be fully cognitive and conscious purely in virtue of its execution of an appropriate computer program. However, whilst the usefulness of the computational metaphor in many areas of psychology and neuroscience is clear, it has not gone unchallenged, and in this article I will review a group of philosophical arguments that suggest either that such unequivocal optimism in computationalism is misplaced—computation is neither necessary nor sufficient for cognition—or that panpsychism (the belief that the physical universe is fundamentally composed of elements each of which is conscious) is true. I conclude by highlighting an alternative metaphor for cognitive processes based on communication and interaction.
For most of the history of prejudice research, negativity has been treated as its emotional and cognitive signature, a conception that continues to dominate work on the topic. By this definition, prejudice occurs when we dislike or derogate members of other groups. Recent research, however, has highlighted the need for a more nuanced perspective (Eagly 2004) on the role of intergroup emotions and beliefs in sustaining discrimination. On the one hand, several independent lines of research have shown that unequal intergroup relations are often marked by attitudinal complexity, with positive responses such as affection and admiration mingling with negative responses such as contempt and resentment. Simple antipathy is the exception rather than the rule. On the other hand, there is mounting evidence that nurturing bonds of affection between the advantaged and the disadvantaged sometimes entrenches rather than disrupts wider patterns of discrimination. Notably, prejudice reduction interventions may have ironic effects on the political attitudes of the historically disadvantaged, decreasing their perceptions of injustice and willingness to engage in collective action to transform social inequalities.
In this brief concluding chapter we first wish to present the overall argument of the book in a concise, nontechnical way. We hope this will provide a clear view of the argument. We shall then point to some of the distinctive--and attractive--features of our approach. Finally, we shall offer some preliminary thoughts about extending the account of moral responsibility to apply to emotions.
The most cursory examination of the history of artificial intelligence highlights numerous egregious claims of its researchers, especially in relation to a populist form of ‘strong’ computationalism which holds that any suitably programmed computer instantiates genuine conscious mental states purely in virtue of carrying out a specific series of computations. The argument presented herein is a simple development of that originally presented in Putnam’s 1988 monograph Representation & Reality (Bradford Books, Cambridge), which, if correct, has important implications for Turing machine functionalism and the prospect of ‘conscious’ machines. In the paper, instead of seeking to develop Putnam’s claim that “everything implements every finite state automata”, I will try to establish the weaker result that “everything implements the specific machine Q on a particular input set (x)”. Then, equating Q(x) to any putative AI program, I will show that conceding the ‘strong AI’ thesis for Q (crediting it with mental states and consciousness) opens the door to a vicious form of panpsychism whereby all open systems (e.g. grass, rocks etc.) must instantiate conscious experience and hence that disembodied minds lurk everywhere.
The Argument Web is maturing both as a platform built upon a synthesis of many contemporary theories of argumentation in philosophy and as an ecosystem in which various applications and application components are contributed by different research groups around the world. It already hosts the largest publicly accessible corpora of argumentation and has the largest number of interoperable and cross-compatible tools for the analysis, navigation and evaluation of arguments across a broad range of domains, languages and activity types. Such interoperability is key in allowing innovative combinations of tool and data reuse that can further catalyse the development of the field of computational argumentation. The aim of this paper is to summarise the key foundations, the recent advances and the goals of the Argument Web, with a particular focus on demonstrating the relevance to, and roots in, philosophical argumentation theory.