Emerging technologies are increasingly used in an attempt to “enhance the human body and/or mind” beyond the contemporary standards that characterize human beings. Yet such standards are deeply controversial, and it is no easy task to determine whether the application of a given technology to an individual, and its outcome, can be defined as a human enhancement. Despite much debate on its potential or actual ethical and social impacts, human enhancement has no consensus definition. This paper proposes a timely and much-needed examination of the various definitions found in the literature. We classify these definitions into four main categories: the implicit approach, the therapy-enhancement distinction, the improvement of general human capacities, and the increase of well-being. After commenting on these different approaches and their limitations, we propose a definition of human enhancement that focuses on individual perceptions. While acknowledging that a definition which depends mainly on personal and subjective individual perceptions raises many challenges, we suggest that a comprehensive approach to defining human enhancement could constitute a useful premise for appropriately addressing the complexity of the ethical and social issues it generates.
This article recounts the lived experience of a professional transition from biology to bioethics. One of the key elements of this transition turns out to lie in an epistemological understanding of the discipline: the bioethicist's main task is less to develop ethical reflections than to delimit and understand the general context in which the problems he or she works on are embedded.
Vincent Descombes is a French philosopher. He has taught at the University of Montréal, Johns Hopkins University, and Emory University. Presently, he is director of studies at the École des Hautes Études en Sciences Sociales, Paris, and regular visiting professor at the University of Chicago in the Department of Romance. Descombes’s main areas of research are in the philosophy of mind, philosophy of language and philosophy of literature. The following interview covers various aspects of his research in the philosophy of mind and language: semantic anti-realism, phenomenology, the content of mental states, description and transparency, the linguistic turn, metaphysics and linguistic analysis, fictional names and animal intentionality.
Mainstream and Formal Epistemology provides the first, easily accessible, yet erudite and original analysis of the meeting point between mainstream and formal theories of knowledge. These two strands of thinking have traditionally proceeded in isolation from one another, but in this book, Vincent F. Hendricks brings them together for a systematic comparative treatment. He demonstrates how mainstream and formal epistemology may significantly benefit from one another, paving the way for a new unifying program of 'plethoric' epistemology. His book will both define and further the debate between philosophers from two very different sides of the epistemological spectrum.
This is a critical introduction to modern French philosophy, commissioned from one of the liveliest contemporary practitioners and intended for an English-speaking readership. The dominant 'Anglo-Saxon' reaction to philosophical development in France has for some decades been one of suspicion, occasionally tempered by curiosity but more often hardening into dismissive rejection. But there are signs now of a more sympathetic interest and an increasing readiness to admit and explore shared concerns, even if these are still expressed in a very different idiom and intellectual context. Vincent Descombes offers here a personal guide to the main movements and figures of the last forty-five years. He traces over this period the evolution of thought from a generation preoccupied with the 'three H's' - Hegel, Husserl and Heidegger - to a generation influenced since about 1960 by the 'three masters of suspicion' - Marx, Nietzsche and Freud. In this framework he deals in turn with the thought of Sartre, Merleau-Ponty, the early structuralists, Foucault, Althusser, Serres, Derrida, and finally Deleuze and Lyotard. The 'internal' intellectual history of the period is related to its institutional setting and the wider cultural and political context which has given French philosophy so much of its distinctive character.
Religious believers understand the meaning of their lives in the light of the way in which they are related to God. Life is significant because it is lived in the presence of God, and ultimate bliss consists in being in the right relation with God. Through sin, however, our relationship with God has been drastically disrupted. The fundamental religious issue which we all have to face, therefore, is how this relationship can be restored. How can we attain ultimate bliss by being reconciled with God? Basically, this is the issue with which the doctrine of atonement has to deal: The English word ‘atonement’ is derived from the words ‘at-one-ment’, to make two parties at one, to reconcile two parties one to another. It means essentially reconciliation… In current usage, the phrase ‘to atone for’ means the undertaking of a course of action designed to undo the consequences of a wrong act with a view to the restoration of the relationship broken by the wrong act.
Black Natural Law offers a new way of understanding the African American political tradition, and it argues that this tradition has collapsed into incoherence. Vincent William Lloyd revives Black politics by telling stories of its central figures in a way that exhibits the connections between their religious, philosophical, and political ideas.
Offering an overall insight into the French tradition of philosophy of technology, this volume is meant to make French-speaking contributions more accessible to the international philosophical community. The first section, “Negotiating a Cultural Heritage,” presents a number of leading 20th century philosophical figures and intellectual movements that help shape philosophy of technology in the Francophone area, and feed into contemporary debates. The second section, “Coining and Reconfiguring Technoscience,” traces the genealogy of this controversial concept and discusses its meanings and relevance. A third section, “Revisiting Anthropological Categories,” focuses on the relationships of technology with the natural and the human worlds from various perspectives that include anthropotechnology, Anthropocene, technological and vital norms and temporalities. The final section, “Innovating in Ethics, Design and Aesthetics,” brings together contributions that draw on various French traditions to afford fresh insights on ethics of technology, philosophy of design, techno-aesthetics and digital studies. The contributions in this volume are vivid and rich in original approaches that can spur exchanges and debates with other philosophical traditions.
How do people make sense of their experiences? How do they understand possibility? How do they limit possibility? These questions are central to all the human sciences. Here, Vincent Crapanzano offers a powerfully creative new way to think about human experience: the notion of imaginative horizons. For Crapanzano, imaginative horizons are the blurry boundaries that separate the here and now from what lies beyond, in time and space. These horizons, he argues, deeply influence both how we experience our lives and how we interpret those experiences, and he sets himself the task here of exploring the roles that creativity and imagination play in our experience of the world.
I am honoured and pleased to address you this evening on the life and work of an extraordinary American thinker, Charles Sanders Peirce. Although Peirce is perhaps most often remembered as the father of the philosophical movement known as pragmatism, I would like to impress upon you that he was also, and perhaps especially, a logician, a working scientist and a mathematician. During his lifetime Peirce most often referred to himself, and was referred to by his colleagues, as a logician. Furthermore, Peirce spent thirty years actively engaged in scientific research for the US Coast Survey. The National Archives in Washington, DC, holds some five thousand pages of Peirce's reports on this work. Finally, the four volumes of Peirce's mathematical papers edited by Professor Carolyn Eisele eloquently testify to his contributions to that field as well.
"This book provides new interpretations of Heidegger's philosophical method in light of 20th-century postmodernism and 21st-century speculative realism. In doing so, it raises important questions about philosophical method in the age of global warming and climate change. Vincent Blok addresses topics that have yet to be extensively discussed in Heidegger scholarship, including Heidegger's method of questioning, the religious character of Heidegger's philosophical method and Heidegger's conceptualization of philosophical method as explorative confrontation. He is also critical of Heidegger's conceptuality and (...) develops a post-Heideggerian concept of philosophical method, which provides a new perspective on the role of willing, poetry and earth-interest in contemporary philosophy. This earth-interest turns out to be particularly important to consider and leads to critical reflections on Heidegger's concept of Earth, the necessity of Earth-interest in contemporary philosophy and a post-Heideggerian concept of the Earth. Heidegger's Concept of Philosophical Method will be of interest primarily to Heidegger scholars and graduate students, but its discussion of philosophical method and environmental philosophy will also appeal to scholars in other disciplines and areas of philosophy"--. (shrink)
This open access book looks at how a democracy can devolve into a post-factual state. The media is being flooded by populist narratives, fake news, conspiracy theories and make-believe. Misinformation is turning into a challenge for all of us, whether politicians, journalists, or citizens. In the age of information, attention is a prime asset and may be converted into money, power, and influence – sometimes at the cost of facts. The point is to obtain exposure on the air and in print media, and to generate traffic on social media platforms. With information in abundance and attention scarce, the competition is ever fiercer, with truth all too often becoming the first victim. Reality Lost: Markets of Attention, Misinformation and Manipulation is an analysis by philosophers Vincent F. Hendricks and Mads Vestergaard of the nuts and bolts of the information market, the attention economy and the media ecosystem, which may pave the way to a post-factual democracy. Here misleading narratives become the basis for political opinion formation, debate, and legislation. To curb this development and the threat it poses to democratic deliberation, political self-determination and freedom, it is necessary that we first grasp the mechanisms and structural conditions that cause it.
The philosophy of perception currently considers how perception relates to action. Some distinctions may help, distinguishing object perception from perceptual recognition, and both from that-perception. Examples are seeing a man, recognising a man, and seeing that there is a man. Perceiving an object controls self-location relative to it; recognising an object, which depends on memory of how it looks, controls looking for it and interacting with it, or not; and that-perceiving controls saying that an object exists. Perception controls action. Milner and Goodale, Jacob and Jeannerod, and Noë are considered.
This paper offers a critical reconsideration of the traditional doctrine that responsibility for a crime requires a voluntary act. I defend three general propositions: first, that orthodox Anglo-American criminal theory fails to explain adequately why criminal responsibility requires an act; second, that when it comes to the just definition of crimes, the act requirement is at best a rough generalization rather than a substantive limiting principle; and third, that the intuition underlying the so-called “act requirement” is better explained by what I call the “practical-agency condition,” according to which punishment in a specific instance is unjust unless the crime charged was caused or constituted by the agent's conduct qua practically rational agent. The practical-agency condition is defended as a reconstruction of what is worth retaining in Anglo-American criminal law's traditional notion of an “act requirement.”
Religious believers understand the meaning of their lives and of the world in terms of the way these are related to God. How, Vincent Brümmer asks, does the model of love apply to this relationship? He shows that most views on love take it to be an attitude rather than a relationship: exclusive attention (Ortega y Gasset), ecstatic union (nuptial mysticism), passionate suffering (courtly love), need-love (Plato, Augustine) and gift-love (Nygren). In discussing the issues, Brümmer inquires what role these attitudes play within the love-relationship and examines the implications of using the model of love as a key paradigm in theology.
Adopting a broadly compatibilist approach, this volume's authors argue that the behavioral and mind sciences do not threaten the moral foundations of legal responsibility. Rather, these sciences provide fresh insight into human agency and updated criteria as well as powerful diagnostic and intervention tools for assessing and altering minds.
This book provides a valuable look at the work of up-and-coming epistemologists. The topics covered range from the central issues of mainstream epistemology to the more formal issues in epistemic logic and confirmation theory. This book should be read by anyone interested in seeing where epistemology is currently focused and where it is heading. - Stewart Cohen, Arizona State University.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including privacy (§2.1), manipulation (§2.2), opacity (§2.3), bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves, in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and, finally, what policy consequences may be drawn.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
In this article, the Heidegger and Derrida controversy about the nature of questioning is revisited in order to rehabilitate questioning as an essential characteristic of contemporary philosophy. After exploring Heidegger's characterization of philosophy as questioning and Derrida's criticism of the primacy of questioning, we evaluate Derrida's criticism and articulate three characteristics of Heidegger's concept of questioning. After this exploration of Heidegger's concept of questioning, we critically evaluate Heidegger's later rejection of questioning. With this, we contribute not only to the discussion about why Heidegger rejected questioning in his later thought and whether this rejection is legitimate, but also to the rehabilitation of questioning in contemporary philosophy.
Primitive ontology is a recently much discussed approach to the ontology of quantum theory according to which the theory is ultimately about entities in 3-dimensional space and their temporal evolution. This paper critically discusses the primitive ontologies that have been suggested within the Bohmian approach to quantum field theory in the light of the existence of unitarily inequivalent representations. These primitive ontologies rely either on a Fock space representation or a wave functional representation, which, strictly speaking, are unambiguously available only for free systems in flat spacetime. As a consequence, it is argued that they do not constitute fundamental ontologies for quantum field theory, in contrast to the case of the Bohmian approach to quantum mechanics.
Theories of quantum gravity generically presuppose or predict that the reality underlying the relativistic spacetimes they are describing is significantly non-spatiotemporal. On pain of empirical incoherence, approaches to quantum gravity must establish how relativistic spacetime emerges from their non-spatiotemporal structures. We argue that in order to secure this emergence, it is sufficient to recover only those features of relativistic spacetimes that are functionally relevant in producing empirical evidence. In order to complete this task, an account must be given of how the more fundamental structures instantiate these functional roles. We illustrate the general idea in the context of causal set theory and loop quantum gravity, two prominent approaches to quantum gravity.
In Charles S. Peirce: On Norms and Ideals, Potter argues that Peirce's doctrine of the normative sciences is essential to his pragmatism. No part of Peirce's philosophy is bolder than his attempt to establish esthetics, ethics, and logic as the three normative sciences and to argue for the priority of esthetics among the trio. Logic, Potter notes, is normative because it governs thought and aims at truth; ethics is normative because it analyzes the ends to which thought should be directed; esthetics is normative and fundamental because it considers what it means to be an end or something good in itself. This study shows that Peirce took seriously the trinity of normative sciences and demonstrates that these categories apply both to the conduct of man and to the workings of the cosmos.
Formal Philosophy is a collection of short interviews based on five questions presented to some of the most influential and prominent scholars in formal philosophy.
Invited papers from PT-AI 2011.
- Vincent C. Müller: Introduction: Theory and Philosophy of Artificial Intelligence
- Nick Bostrom: The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents
- Hubert L. Dreyfus: A History of First Step Fallacies
- Antoni Gomila, David Travieso and Lorena Lobo: Wherein is Human Cognition Systematic
- J. Kevin O'Regan: How to Build a Robot that Is Conscious and Feels
- Oron Shagrir: Computation, Implementation, Cognition
"The development of modern diagnostic neuroimaging techniques led to discoveries about the human brain and mind that helped give rise to the field of neurolaw. This new interdisciplinary field has led to novel directions in analytic jurisprudence and philosophy of law by providing an empirically-informed platform from which scholars have reassessed topics such as mental privacy and self-determination, responsibility and its relationship to mental disorders, and the proper aims of the criminal law. Similarly, the development of neurointervention techniques that promise (...) to deliver new ways of altering people's minds creates opportunities and challenges that raise important and rich conceptual, moral, jurisprudential, and scientific questions. The specific purpose of this volume is to make a contribution to the field of neurolaw by investigating the legal issues raised by the development and use of neurointerventions "--. (shrink)
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated to examining the risks of AI. The book evaluates predictions of the future of AI, proposes ways to ensure that AI systems will be beneficial to humans, and then critically evaluates such proposals.
1. Vincent C. Müller, Editorial: Risks of Artificial Intelligence
2. Steve Omohundro, Autonomous Technology and the Greater Human Good
3. Stuart Armstrong, Kaj Sotala and Sean O’Heigeartaigh, The Errors, Insights and Lessons of Famous AI Predictions - and What they Mean for the Future
4. Ted Goertzel, The Path to More General Artificial Intelligence
5. Miles Brundage, Limitations and Risks of Machine Ethics
6. Roman Yampolskiy, Utility Function Security in Artificially Intelligent Agents
7. Ben Goertzel, GOLEM: Toward an AGI Meta-Architecture Enabling Both Goal Preservation and Radical Self-Improvement
8. Alexey Potapov and Sergey Rodionov, Universal Empathy and Ethical Bias for Artificial General Intelligence
9. András Kornai, Bounding the Impact of AGI
10. Anders Sandberg, Ethics and Impact of Brain Emulations
11. Daniel Dewey, Long-Term Strategies for Ending Existential Risk from Fast Takeoff
12. Mark Bishop, The Singularity, or How I Learned to Stop Worrying and Love AI
This book reports on the results of the third edition of the premier conference in the field of philosophy of artificial intelligence, PT-AI 2017, held on November 4-5, 2017 at the University of Leeds, UK. It covers: advanced knowledge on key AI concepts, including complexity, computation, creativity, embodiment, representation and superintelligence; cutting-edge ethical issues, such as the impact of AI on human dignity and society, responsibilities and rights of machines, as well as AI threats to humanity and AI safety; and cutting-edge developments in techniques to achieve AI, including machine learning, neural networks, dynamical systems. The book also discusses important applications of AI, including big data analytics, expert systems, cognitive architectures, and robotics. It offers a timely, yet very comprehensive snapshot of what is going on in the field of AI, especially at the interfaces between philosophy, cognitive science, ethics and computing.
In his Institutes 2.2.5 Calvin declares that he ‘willingly accepts’ the distinction between freedom from necessity, from sin and from misery originally developed by St Bernard. It is remarkable that a determinist like Calvin seems here to accept a libertarian view of human freedom. In this paper I set out Bernard's doctrine of the three kinds of freedom and show that all its basic elements can in fact be found in Calvin's argument in chapters 2 and 3 of Part II of the Institutes. Towards the end of chapter 3, however, Calvin's doctrine of ‘perseverance’ makes him revert to a deterministic view of the divine-human relationship. I show that the considerations which prompt Calvin to this can be adequately met on the basis of Bernard's libertarian concept of human freedom.
In his paper Hampus Lyttkens tries to explore the relation between religious experience and the concept of transcendence. Lyttkens limits his enquiry to religious experience in the sense of ‘specific and extraordinary psychic experiences’ which are interpreted as experiences of a transcendent God. By ‘transcendence’ Lyttkens means more than ‘objective reference’. The object of religious experiences in the above sense is not only claimed to transcend the experience itself, in the sense in which the external world is claimed to transcend our perception of it. It is also claimed to be transcendent with respect to the spatio-temporal world as such, by existing ‘beyond space and time’ in some sense or other, or in the sense put forward by Karl Heim, as existing in an extra dimension beside the four dimensions of space and time.
The main research programs in quantum gravity tend to suggest in one way or another that most spacetime structures are not fundamental. At the same time, work in quantum foundations highlights fundamental features that are in tension with any straightforward spacetime understanding. This paper aims to explore the little-investigated but potentially fruitful links between these two fields.
Ernst Bloch is perhaps best known for his subtle and imaginative investigation of utopias and utopianism, but his work also provides a comprehensive and insightful analysis of Western culture, politics and society. Yet, because he has not been one of the easiest writers to read, his full contribution has not been widely acknowledged. In this critical and accessible introduction to one of the most fascinating thinkers of the twentieth century, Vincent Geoghegan unravels much of the mystery of the man and his ideas.
Although research into fair and alternative trade networks has increased significantly in recent years, very little synthesis of the literature has occurred thus far, especially for social considerations such as gender, health, labor, and equity. We draw on insights from critical theorists to reflect on the current state of fair and alternative trade, draw out contradictions from within the existing research, and suggest actions to help the emancipatory potential of the movement. Using a systematic scoping review methodology, this paper reviews 129 articles and reports that discuss the social dimensions of fair and alternative trade experienced by Southern agricultural producers and workers. The results highlight gender, health, and labor dimensions of fair and alternative trade systems and suggest that diverse groups of producers and workers may be experiencing related inequities. By bringing together issues that are often only tangentially discussed in individual studies, the review gives rise to a picture that suggests that research on these issues is both needed and emerging. We end with a summary of key findings and considerations for future research and action.
The French composer Hector Berlioz reacted as follows to the critics of his opera La Damnation de Faust: ‘I have already recounted how I … wrote the march on the Hungarian theme of Rákóczy in the course of one night. The passionate reception that this march received in Pest made me decide to include it in my Faust, and in doing so I took the liberty to use Hungary as the setting for the opening of the action, and had my hero, deep in reflection, see a Hungarian army passing across the plain. A German critic considered it most remarkable that I should portray Faust in such a manner. I do not see why I should not have done that, and I would, without hesitation, have let him travel to any place whatsoever if it had been to the benefit of the music I was writing. I had not set myself the task of blindly following Goethe's framework, and a character like Faust can be portrayed as making the most outlandish journeys without doing harm in any way to the credibility of his person. When other German critics … attacked me even more strongly for the departures that my libretto made from the text and the structure of Goethe's Faust, … I wondered why these critics had not reproached me in any way for the libretto of my symphony, Roméo et Juliette, which only shows a slight resemblance to Shakespeare's immortal tragedy! Obviously because Shakespeare was not a German. Chauvinists! Fetishists! Imbeciles!’
This short work shows how systematic theology is itself a philosophical enterprise. After analyzing the nature of philosophical enquiry and its relation to systematic theology, and after explaining how theology requires that we talk about God, Vincent Brümmer illustrates how philosophical analysis can help in dealing with various conceptual problems involved in the fundamental Christian claim that God is a personal being with whom we may live in a personal relationship.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings; they do not take responsibility away from humans; in fact, they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict, especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: they are probably good news.