Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast-paced tour through the latest thinking about philosophical ethics and artificial intelligence, the authors argue that even if full moral agency for machines is a long way off, it is already necessary to start building a kind of functional morality, in which artificial moral agents have some basic ethical sensitivity. But the standard ethical theories don't seem adequate, and more socially engaged and engaging robots will be needed. As the authors show, the quest to build machines that are capable of telling right from wrong has begun.

Moral Machines is the first book to examine the challenge of building artificial moral agents, probing deeply into the nature of human decision making and ethics.
Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word “consciousness” has no well-defined meaning, it is used to refer to aspects of human and animal information-processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly “architecture-based” concepts. Understanding how humanlike robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of “qualia” turns out to be an “architecture-based” concept, while individual qualia concepts are “architecture-driven”.
In this article, Lucas defends the falsity of Mechanism - the attempt to explain minds as machines - by means of Gödel's Incompleteness Theorem. Gödel's theorem shows that in any consistent system adequate for simple arithmetic there are formulae which cannot be proved in the system but which human minds can recognize as true; Lucas points out in turn that Gödel's theorem applies to machines because a machine is the concrete instantiation of a formal system: therefore, for every machine that is consistent and capable of doing simple arithmetic, there is a formula that it cannot produce as true but that we can see to be true, and so human minds and machines must be different. Lucas also considers some possible objections to his argument: for any Gödelian formula we could, for instance, construct a machine able to produce it, or we could add the Gödelian formulae that we had proved as axioms of a further machine. However - as Lucas underlines - for each such machine we could again formulate another Gödelian formula, the Gödelian formula of that machine, which it is not able to prove but which we can recognize as true. More general objections, such as the suggestion that we could escape the Gödelian argument because Gödel's theorem applies to consistent systems while we could be inconsistent ones, are likewise refuted by Lucas, who maintains that our inconsistency corresponds to the occasional malfunctioning of a machine and not to its normal operation; indeed, an inconsistent machine is characterized by producing any statement whatsoever, whereas human beings are selective and not disposed to assert just anything.
Though it did not yet exist as a discrete field of scientific inquiry, biology was at the heart of many of the most important debates in seventeenth-century philosophy. Nowhere is this more apparent than in the work of G. W. Leibniz. In Divine Machines, Justin Smith offers the first in-depth examination of Leibniz's deep and complex engagement with the empirical life sciences of his day, in areas as diverse as medicine, physiology, taxonomy, generation theory, and paleontology. He shows how these wide-ranging pursuits were not only central to Leibniz's philosophical interests, but often provided the insights that led to some of his best-known philosophical doctrines. Presenting the clearest picture yet of the scope of Leibniz's theoretical interest in the life sciences, Divine Machines takes seriously the philosopher's own repeated claims that the world must be understood in fundamentally biological terms. Here Smith reveals a thinker who was immersed in the sciences of life, and looked to the living world for answers to vexing metaphysical problems. He casts Leibniz's philosophy in an entirely new light, demonstrating how it radically departed from the prevailing models of mechanical philosophy and had an enduring influence on the history and development of the life sciences. Along the way, Smith provides a fascinating glimpse into early modern debates about the nature and origins of organic life, and into how philosophers such as Leibniz engaged with the scientific dilemmas of their era.
Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what Functionalism is, e.g. in Block (1995).

One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community, and yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.)

Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism", because the word 'machinery' more readily suggests something complex with interacting parts.
VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). I.e. virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do.

The paper is freely accessible at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html

This is now part of the Meta-Morphogenesis project: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Gödel's Theorem seems to me to prove that Mechanism is false, that is, that minds cannot be explained as machines. So also has it seemed to many other people: almost every mathematical logician I have put the matter to has confessed to similar thoughts, but has felt reluctant to commit himself definitely until he could see the whole argument set out, with all objections fully stated and properly met. This I attempt to do.
This article, taking a social semiotic approach, analyses two pieces of music written, shared and exalted by two pre-1945 European fascist movements – the German NSDAP and the British Union of Fascists. These movements, both political and cultural, employed mythologies of unity, common identity and purpose in order to elide the realities of social distinction and political–economic inequalities between bourgeois and proletarian groups in capitalist societies. Visually and inter-personally, the fascist cultural project communicated a machine-like certainty about a vision for a new society based on discipline, conformity and the might of the nation. In this article, we are interested in the ways that these very same discourses are also communicated through sound and music in two songs: The Horst Wessel Lied and the BUF marching song, two songs that used the same melody. We analyse the discourses communicated by the semiotic choices made in melody, arrangements, sound qualities, rhythms as well as in lyrics. The article first identifies some of the underlying semiotic resources for meaning making in sound and then shows how these are used in order to communicate specific ideas, values and attitudes.
Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: first, it is unclear how any intelligent system could learn its final values, since to judge one supposedly "final" value against another seems to require a further background standard for judging. Second, it is unclear how to determine the content of a system's values based on its physical or computational structure. Finally, there is the distinctly ethical question of which values we should best aim for the system to learn. I outline a potential answer to these interrelated puzzles, centering on a "miktotelic" proposal for blending a complex, learnable final value out of many simpler ones.
We explore the question of whether machines can infer information about our psychological traits or mental states by observing samples of our behaviour gathered from our online activities. Ongoing technical advances across a range of research communities indicate that machines are now able to access this information, but the extent to which this is possible and the consequent implications have not been well explored. We begin by highlighting the urgency of asking this question, and then explore its conceptual underpinnings, in order to help emphasise the relevant issues. To answer the question, we review a large number of empirical studies, in which samples of behaviour are used to automatically infer a range of psychological constructs, including affect and emotions, aptitudes and skills, attitudes and orientations (e.g. values and sexual orientation), personality, and disorders and conditions (e.g. depression and addiction). We also present a general perspective that can bring these disparate studies together and allow us to think clearly about their philosophical and ethical implications, such as issues related to consent, privacy, and the use of persuasive technologies for controlling human behaviour.
We argue that the concept of practical wisdom is particularly useful for organizing, understanding, and improving human-machine interactions. We consider the relationship between philosophical analysis of wisdom and psychological research into the development of wisdom. We adopt a practical orientation that suggests a conceptual engineering approach is needed, where philosophical work involves refinement of the concept in response to contributions by engineers and behavioral scientists. The former are tasked with encoding as much wise design as possible into machines themselves, as well as providing sandboxes or workspaces to help various stakeholders build practical wisdom in systems that are sufficiently realistic to aid transferring skills learned to real-world use. The latter are needed for the design of exercises and methods of evaluation within these workspaces, as well as ways of empirically assessing the transfer of wisdom from workspace to world. Systematic interaction between these three disciplines (and others) is the best approach to engineering wisdom for the machine age.
In this paper Lucas returns to the Gödelian argument against Mechanism to clarify some points. First of all, he explains his use of Gödel's theorem instead of Turing's theorem, showing how Gödel's theorem, but not Turing's, raises questions concerning truth and reasoning that bear on the nature of mind, and how Turing's theorem suggests only that there is something that cannot be done by any computer, not that it can be done by human minds. He considers, moreover, how Gödel's theorem can be interpreted as a sophisticated form of the Cretan paradox posed by Epimenides, one able to escape the viciously self-referential nature of the Cretan paradox, and how it can be used against Mechanism as a schema of disproof. Finally, Lucas suggests some answers to the most recurrent criticisms of his argument: criticisms of the implicit idealisation in the way he set up the contest between mind and machine; questions concerning modality and finitude; issues of transfinite arithmetic; questions concerning the need to formalize rational inference; and some questions about consistency.
Organisms ≠ Machines. Daniel J. Nicholson - 2013 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 44 (4): 669-678.
The machine conception of the organism (MCO) is one of the most pervasive notions in modern biology. However, it has not yet received much attention from philosophers of biology. The MCO has its origins in Cartesian natural philosophy, and it is based on the metaphorical redescription of the organism as a machine. In this paper I argue that although organisms and machines resemble each other in some basic respects, they are actually very different kinds of systems. I submit that the most significant difference between organisms and machines is that the former are intrinsically purposive whereas the latter are extrinsically purposive. Using this distinction as a starting point, I discuss a wide range of dissimilarities between organisms and machines that collectively lay bare the inadequacy of the MCO as a general theory of living systems. To account for the MCO's prevalence in biology, I distinguish between its theoretical, heuristic, and rhetorical functions. I explain why the MCO is valuable when it is employed heuristically but not theoretically, and finally I illustrate the serious problems that arise from the rhetorical appeal to the MCO.
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question" -- consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent-patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
The fact that real-world decisions made by artificial intelligences are often ethically loaded has led a number of authorities to advocate the development of "moral machines". I argue that the project of building "ethics" "into" machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not problems for everyone who faces a similar situation. Moreover, the force of an ethical claim depends in part on the life history of the person who is making it. For both these reasons, machines could at best be engineered to provide a shallow simulacrum of ethics, which would have limited utility in confronting the ethical and policy dilemmas associated with AI.
A number of findings in the field of machine learning have given rise to questions about what it means for automated scoring or decision-making systems to be fair. One center of gravity in this discussion is whether such systems ought to satisfy classification parity (which requires parity in accuracy across groups defined by protected attributes) or calibration (which requires similar predictions to have similar meanings across groups defined by protected attributes). Central to this discussion are impossibility results, owed to Kleinberg et al. (2016), Chouldechova (2017), and Corbett-Davies et al. (2017), which show that classification parity and calibration are often incompatible. This paper argues that classification parity, calibration, and a newer, interesting measure called counterfactual fairness are unsatisfactory measures of fairness, offers a general diagnosis of the failure of these measures, and sketches an alternative approach to understanding fairness in machine learning.
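The incompatibility the abstract refers to can be seen in miniature with a toy example. The sketch below is illustrative only: the score distributions and the two groups are invented, not drawn from the cited papers. It constructs two groups whose risk scores are perfectly calibrated by design (P(y=1 | score s) = s in both groups), thresholds the scores at 1/2, and shows that the resulting false positive rates differ whenever the base rates differ:

```python
from fractions import Fraction as F

def group_stats(score_dist):
    """score_dist maps a risk score to the share of the group with that score.
    Scores are calibrated by construction: P(y=1 | score s) = s."""
    base = sum(share * s for s, share in score_dist.items())          # base rate
    # Classify positive iff score >= 1/2; a false positive is a
    # score-above-threshold individual whose true label is negative.
    fp = sum(share * (1 - s) for s, share in score_dist.items() if s >= F(1, 2))
    neg = sum(share * (1 - s) for s, share in score_dist.items())     # P(y=0)
    return base, fp / neg                                             # base rate, FPR

# Two hypothetical groups, both perfectly calibrated, with different base rates.
group_a = {F(9, 10): F(1, 2), F(1, 10): F(1, 2)}
group_b = {F(9, 10): F(1, 10), F(1, 10): F(9, 10)}

base_a, fpr_a = group_stats(group_a)
base_b, fpr_b = group_stats(group_b)
print(base_a, base_b)   # 1/2 9/50  -> base rates differ
print(fpr_a, fpr_b)     # 1/10 1/82 -> classification parity fails despite calibration
```

This is the pattern behind the impossibility results: with unequal base rates, any calibrated classifier must violate parity in error rates across the groups (except in degenerate cases such as a perfect predictor).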
Julien Offray de La Mettrie (1709-51), author of Machine Man (1747), was the most uncompromising of the materialists of the eighteenth century, and the provocative title of his work ensured it a succès de scandale in his own time. It was however a serious, if polemical, attempt to provide an explanation of the workings of the human body and mind in purely material terms and to show that thought was the product of the workings of the brain alone. This fully annotated edition presents a new English translation of the text together with the most important of La Mettrie's other philosophical works, translated into English for the first time, and Ann Thomson's introduction examines his aims and the scandalous moral consequences which he drew from his materialism.
An analysis of how capitalism today produces subjectivity like any other “good,” and what would allow us to escape its hold. “Capital is a semiotic operator”: this assertion by Félix Guattari is at the heart of Maurizio Lazzarato's Signs and Machines, which asks us to leave behind the logocentrism that still informs so many critical theories. Lazzarato calls instead for a new theory capable of explaining how signs function in the economy, in power apparatuses, and in the production of subjectivity. Moving beyond the dualism of signifier and signified, Signs and Machines shows how signs act as “sign-operators” that enter directly into material flows and into the functioning of machines. Money, the stock market, price differentials, algorithms, and scientific equations and formulas constitute semiotic “motors” that make capitalism's social and technical machines run, bypassing representation and consciousness to produce social subjections and semiotic enslavements. Lazzarato contrasts Deleuze and Guattari's complex semiotics with the political theories of Jacques Rancière, Antonio Negri and Michael Hardt, Paolo Virno, and Judith Butler, for whom language and the public space it opens still play a fundamental role. Lazzarato asks: What are the conditions necessary for political and existential rupture at a time when the production of subjectivity represents the primary and perhaps most important work of capitalism? What are the specific tools required to undo the industrial mass production of subjectivity undertaken by business and the state? What types of organization must we construct for a process of subjectivation that would allow us to escape the hold of social subjection and machinic enslavement? In addressing these questions, Signs and Machines takes on a task that is today more urgent than ever.
This article critically examines one of the most prevalent metaphors in modern biology, namely the machine conception of the organism (MCO). Although the fundamental differences between organisms and machines make the MCO an inadequate metaphor for conceptualizing living systems, many biologists and philosophers continue to draw upon the MCO or tacitly accept it as the standard model of the organism. This paper analyses the specific difficulties that arise when the MCO is invoked in the study of development and evolution. In developmental biology the MCO underlies a logically incoherent model of ontogeny, the genetic program, which serves to legitimate three problematic theses about development: genetic animism, neo-preformationism, and developmental computability. In evolutionary biology the MCO is responsible for grounding unwarranted theoretical appeals to the concept of design as well as to the interpretation of natural selection as an engineer, which promote a distorted understanding of the process and products of evolutionary change. Overall, it is argued that, despite its heuristic value, the MCO today is impeding rather than enabling further progress in our comprehension of living systems.
Although machine learning has been successful in recent years and is increasingly being deployed in the sciences, enterprises or administrations, it has rarely been discussed in philosophy beyond the philosophy of mathematics and machine learning. The present contribution addresses the resulting lack of conceptual tools for an epistemological discussion of machine learning by conceiving of deep learning networks as ‘judging machines’ and using the Kantian analysis of judgments for specifying the type of judgment they are capable of. At the center of the argument is the fact that the functionality of deep learning networks is established by training and cannot be explained and justified by reference to a predefined rule-based procedure. Instead, the computational process of a deep learning network is barely explainable and needs further justification, as is shown in reference to the current research literature. Thus, it requires a new form of justification, that is to be specified with the help of Kant's epistemology.
This book addresses the fundamentals of machine ethics. It discusses abilities required for ethical machine reasoning and the programming features that enable them. It connects ethics, psychological ethical processes, and machine implemented procedures. From a technical point of view, the book uses logic programming and evolutionary game theory to model and link the individual and collective moral realms. It also reports on the results of experiments performed using several model implementations. Opening specific and promising inroads into the terra incognita of machine ethics, the authors define here new tools and describe a variety of program-tested moral applications and implemented systems. In addition, they provide alternative reading paths, allowing readers to best focus on their specific interests and to explore the concepts at different levels of detail. Mainly written for researchers in cognitive science, artificial intelligence, robotics, philosophy of technology and engineering of ethics, the book will also be of general interest to other academics, undergraduates in search of research topics, science journalists as well as science and society forums, legislators and military organizations concerned with machine ethics.
In this article, we carry out a Multimodal Critical Discourse Analysis of a sample from a larger corpus of Romanian news articles that covered the controversial camp evictions and repatriation of Romanian Roma migrants from France that began in 2010 and continue to the time of writing in 2017. These French government policies have been highly criticized both within France and by international political and aid organizations. However, the analysis shows how these brutal, anti-humanitarian events became recontextualized in the Romanian press to represent the French government's actions as peaceful and consensual. In addition, the demonization of the Roma in the press serves as a strategy to continuously disassociate them from their Romanian counterparts. While there is a long history of discrimination against the Roma in Romania, these particular recontextualizations can be understood in the context of the Romanian government's need to gloss over its failure to comply with the Schengen accession requirements and acquire full European Union membership.
We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of variable granularity and scope. We characterise the conditions under which such a game is almost surely guaranteed to converge on an optimal explanation surface in polynomial time, and highlight obstacles that will tend to prevent the players from advancing beyond certain explanatory thresholds. The game serves a descriptive and a normative function, establishing a conceptual space in which to analyse and compare existing proposals, as well as design new and improved solutions.
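The three-dimensional Pareto frontier mentioned above can be illustrated with a minimal sketch. The candidate explanations, their numeric scores, and the simple dominance rule below are all invented for illustration; they are not the authors' formal game, only the standard Pareto-dominance idea applied to their three axes (accuracy, simplicity, relevance):

```python
def pareto_front(candidates):
    """Return the candidates not dominated on (accuracy, simplicity, relevance).
    c dominates d if c scores at least as well on every axis and strictly
    better on at least one."""
    def dominates(c, d):
        return all(x >= y for x, y in zip(c, d)) and any(x > y for x, y in zip(c, d))
    return [c for c in candidates
            if not any(dominates(d, c) for d in candidates if d != c)]

# Hypothetical explanation scores, each axis in [0, 1].
explanations = [
    (0.9, 0.2, 0.5),   # accurate but complex
    (0.5, 0.9, 0.5),   # simple but lossy
    (0.7, 0.6, 0.7),   # balanced
    (0.4, 0.5, 0.4),   # dominated by the balanced candidate on every axis
]
front = pareto_front(explanations)
print(front)   # the dominated fourth candidate is filtered out
```

The frontier keeps every explanation that represents a genuine trade-off; only explanations that are worse on all three axes than some alternative are discarded.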
John Searle distinguished between weak and strong artificial intelligence (AI). This essay discusses a third alternative, mild AI, according to which a machine may be capable of possessing a species of mentality. Using James Fetzer's conception of minds as semiotic systems, the possibility of what might be called "mild AI" receives consideration. Fetzer argues against strong AI by contending that digital machines lack the ground relationship required of semiotic systems. In this essay, the implementational nature of semiotic processes posited by Charles S. Peirce's triadic sign relation is re-examined in terms of the underlying dispositional processes and the ontological levels they would span in an inanimate machine. This suggests that, if non-human mentality can be replicated rather than merely simulated in a digital machine, the direction to pursue appears to be that of mild AI.
There is at present a widespread unease about the direction in which our technology is taking us, apparently against our will. Promising advances seem to carry with them unforeseen negative consequences, including damage to the environment and the reduction of work to the trivial mechanical repetition of actions which have no human meaning. However, attempts to design a better, human-centered technology--one that complements rather than rejects human skills--are all too often frustrated by the prevailing belief that "man is a machine," and one, moreover, that compares badly in terms of performance and durability. This contentious and stimulating book offers a new approach, one that refutes four centuries of science based on strictly causal explanations. It shows that man and nature can be viewed as "machines with a purpose," and that the "purpose" can be the advancement of technology to the benefit and not the detriment of the human race and its environment. This fascinating work is accessible to a wide range of readers, scientists and nonspecialists alike. It will interest anyone concerned about the impact of technology and the way it is shaping our world.
In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they are moral patients only if they have non-psychological interests. I then provide an account of what I call teleo interests that constitute the most plausible type of non-psychological interest that a being might have. I then argue that even if current machines have teleo interests, they are such that agents need not concern themselves with these interests. Therefore, for all intents and purposes, current machines are not moral patients.
A key distinction in ethics is between members and nonmembers of the moral community. Over time, our notion of this community has expanded as we have moved from a rationality criterion to a sentience criterion for membership. I argue that a sentience criterion is insufficient to accommodate all members of the moral community; the true underlying criterion can be understood in terms of whether a being has interests. This may be extended to conscious, self-aware machines, as well as to any autonomous intelligent machines. Such machines exhibit an ability to formulate desires for the course of their own existence; this gives them basic moral standing. While not all machines display autonomy, those which do must be treated as moral patients; to ignore their claims to moral recognition is to repeat past errors. I thus urge moral generosity with respect to the ethical claims of intelligent machines.
Just as humans can draw conclusions responsibly or irresponsibly, so too can computers. Machine learning systems that have been trained on data sets that include irresponsible judgments are likely to yield irresponsible predictions as outputs. In this paper I focus on a particular kind of inference a computer system might make: identification of the intentions with which a person acted on the basis of photographic evidence. Such inferences are liable to be morally objectionable, because of a way in which they are presumptuous. After elaborating this moral concern, I explore the possibility that carefully procuring the training data for image recognition systems could ensure that the systems avoid the problem. The lesson of this paper extends beyond just the particular case of image recognition systems and the challenge of responsibly identifying a person’s intentions. Reflection on this particular case demonstrates the importance (as well as the difficulty) of evaluating machine learning systems and their training data from the standpoint of moral considerations that are not encompassed by ordinary assessments of predictive accuracy.
The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
In this collection of essays we hear from an international array of computer and brain scientists who are actively working from both the machine and human ends...
The Internet is an important focus of attention for the philosophy of mind and cognitive science communities. This is partly because the Internet serves as an important part of the material environment in which a broad array of human cognitive and epistemic activities are situated. The Internet can thus be seen as an important part of the ‘cognitive ecology’ that helps to shape, support and realize aspects of human cognizing. Much of the previous philosophical work in this area has sought to analyze the cognitive significance of the Internet from the perspective of human cognition. There has, as such, been little effort to assess the cognitive significance of the Internet from the perspective of ‘machine cognition’. This is unfortunate, because the Internet is likely to exert a significant influence on the shape of machine intelligence. The present paper attempts to evaluate the extent to which the Internet serves as a form of cognitive ecology for synthetic forms of intelligence. In particular, the phenomenon of Internet-situated machine intelligence is analyzed from the perspective of a number of approaches that are typically subsumed under the heading of situated cognition. These include extended, embedded, scaffolded and embodied approaches to cognition. For each of these approaches, the Internet is shown to be of potential relevance to the development and operation of machine-based cognitive capabilities. Such insights help us to appreciate the role of the Internet in advancing the current state-of-the-art in machine intelligence.
Introduction. During the past two decades philosophers of psychology have considered a large variety of computational models for philosophy of mind and more recently for cognitive science. Among the suggested models are computer programs, Turing machines, pushdown automata, linear bounded automata, finite state automata and sequential machines. Many philosophers have found finite state automata models to be the most appealing, for various reasons, although there has been no shortage of defenders of programs and Turing machines. A paper by Arthur Burks convinced me long ago that “all natural human functions” are, or can be fruitfully modeled to be, finite state automata with output. Further work in the field has reinforced this conviction. There is room, however, for the use of any of the above models in philosophy of mind and in the ongoing development of cognitive science.
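The "finite state automaton with output" mentioned above can be made concrete in a few lines of code. The following is a minimal sketch of such a machine (a Mealy machine); the states, input alphabet, and outputs are invented purely for illustration and are not drawn from Burks's paper:

```python
# A minimal finite state automaton with output (Mealy machine):
# each (state, input symbol) pair maps to (next state, output symbol).
def mealy_step(state, symbol, transitions):
    """Return (next_state, output) for the given state and input symbol."""
    return transitions[(state, symbol)]

# Toy machine: emits "ack" whenever it sees a 'b' immediately after an 'a'.
transitions = {
    ("idle", "a"): ("primed", "-"),
    ("idle", "b"): ("idle", "-"),
    ("primed", "a"): ("primed", "-"),
    ("primed", "b"): ("idle", "ack"),
}

state, outputs = "idle", []
for sym in "aab":
    state, out = mealy_step(state, sym, transitions)
    outputs.append(out)
print(outputs)  # → ['-', '-', 'ack']
```

The point of the formalism, as the abstract suggests, is that any function from input histories to outputs realizable with finitely many internal states can be written in exactly this tabular form.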
Analogies to machines are commonplace in the life sciences, especially in cellular and molecular biology — they shape conceptions of phenomena and expectations about how they are to be explained. This paper offers a framework for thinking about such analogies. The guiding idea is that machine-like systems are especially amenable to decompositional explanation, i.e., to analyses that tease apart underlying components and attend to their structural features and interrelations. I argue that for decomposition to succeed a system must exhibit causal orderliness, which I explicate in terms of differentiation among parts and the significance of local relations. I also discuss what makes a model depict its target as machine-like, suggesting that a key issue is the degree of detail with respect to the target’s parts and their interrelations.
The machine as a social movement of today's “precariat”—those whose labor and lives are precarious. In this “concise philosophy of the machine,” Gerald Raunig provides a historical and critical backdrop to a concept proposed forty years ago by the French philosophers Félix Guattari and Gilles Deleuze: the machine, not as a technical device and apparatus, but as a social composition and concatenation. This conception of the machine as an arrangement of technical, bodily, intellectual, and social components subverts the opposition between man and machine, organism and mechanism, individual and community. Drawing from an unusual range of films, literature, and performance—from the role of bicycles in Flann O'Brien's fiction to Vittorio de Sica's Neorealist film The Bicycle Thieves, and from Karl Marx's “Fragment on Machines” to the deus ex machina of Greek drama—Raunig arrives at an enhanced conception of the machine as a social movement, finding its most apt and concrete manifestation in the Euromayday movement, which since 2001 has become a transnational activist and discursive practice focused upon the precarious nature of labor and lives.
I want to examine the Habermasian account of modernity from a particular vantage-point: namely the collection of new technologies that are called variously “artificial intelligence,” “knowledge engineering,” “intelligent systems,” “expert systems,” and the like. The significance of these technologies far exceeds whatever role they may come to play in business, government, or the other institutions they penetrate. I suggest that they cast doubt over the entire project of modernity as understood by Habermas — not because they signify the penetration of the lifeworld by instrumental rationality, but because they demonstrate the extent to which the formation of the lifeworld itself has simply moved beyond the problematic of autonomy and communicative rationality elaborated by Habermas.
Brain machine interface (BMI) technology makes direct communication between the brain and a machine possible by means of electrodes. This paper reviews existing and emerging technologies in this field and offers a systematic inquiry into the relevant ethical problems that are likely to emerge in the coming decades.
Introduction: Assessments of the motor symptoms of Parkinson’s disease (PD) are usually limited to clinical rating scales and depend on the clinician’s experience. This study proposes a machine learning algorithm that classifies people with PD against healthy people using upper- and lower-limb variables captured by a portable low-cost device; such a tool could support the diagnosis and follow-up of patients in developing countries and remote areas. Methods: We used the Kinect®eMotion system to capture spatiotemporal gait data from 30 patients with PD and 30 healthy age-matched controls in three walking trials. We first computed a correlation matrix over the upper- and lower-limb variables, then applied a backward feature selection model in R and Python to determine the most relevant variables. Three analyses were run, using the variables chosen by backward feature selection, those chosen by a movement disorders specialist, and all the variables in the dataset; seven machine learning models were trained for each analysis. The dataset was divided 80% for algorithm training and 20% for evaluation. Finally, a causal inference model (CIM) using the DoWhy library was fitted on Dataset B because of its accuracy and simplicity. Results: The Random Forest model was the most accurate across all three variable sets, followed by the support vector machine. The CIM shows a relation between the leg variables and arm swing asymmetry (ASA), and a proportional relationship between ASA and the diagnosis of PD with a robust estimator. Conclusions: Machine learning techniques based on objective measures from portable low-cost devices are useful and accurate for classifying patients with Parkinson’s disease. This method can be used to evaluate patients remotely and help clinicians make decisions regarding follow-up and treatment.
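The pipeline this abstract describes — backward feature selection, then comparing a Random Forest against a support vector machine on an 80/20 split — can be sketched with scikit-learn. The synthetic data and feature count below are stand-ins for illustration; the study used real Kinect gait variables:

```python
# Sketch of the abstract's pipeline on synthetic stand-in data:
# backward feature selection, then Random Forest vs. support vector
# machine on an 80/20 train/test split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 60 "participants", 12 invented gait variables (not the study's data).
X, y = make_classification(n_samples=60, n_features=12, n_informative=5,
                           random_state=0)

# Backward selection: start from all variables, drop the least useful.
selector = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=5, direction="backward")
X_sel = selector.fit_transform(X, y)

# 80% training / 20% evaluation, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2,
                                          random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
svm = SVC(random_state=0).fit(X_tr, y_tr)
print("RF:", rf.score(X_te, y_te), "SVM:", svm.score(X_te, y_te))
```

Whether the Random Forest or the SVM wins on any given split depends on the data, of course; the abstract reports Random Forest as best on all three of its variable sets.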
Simple idealized models seem to provide more understanding than opaque, complex, and hyper-realistic models. However, an increasing number of scientists are going in the opposite direction by utilizing opaque machine learning models to make predictions and draw inferences, suggesting that scientists are opting for models that have less potential for understanding. Are scientists trading understanding for some other epistemic or pragmatic good when they choose a machine learning model? Or are the assumptions behind why minimal models provide understanding misguided? In this paper, using the case of deep neural networks, I argue that it is not the complexity or black box nature of a model that limits how much understanding the model provides. Instead, it is a lack of scientific and empirical evidence supporting the link that connects a model to the target phenomenon that primarily prohibits understanding.
When courts started publishing judgements, big data analysis within the legal domain became possible. Taking data from the European Court of Human Rights as an example, we investigate how natural language processing tools can be used to analyse texts of the court proceedings in order to automatically predict judicial decisions. With an average accuracy of 75% in predicting the violation of 9 articles of the European Convention on Human Rights, our approach highlights the potential of machine learning approaches in the legal domain. We show, however, that predicting decisions for future cases based on the cases from the past negatively impacts performance. Furthermore, we demonstrate that we can achieve a relatively high classification performance when predicting outcomes based only on the surnames of the judges that try the case.
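The core of such an approach — text features from case documents fed to a classifier that predicts violation versus no violation — can be sketched in a few lines. The four "cases" below are fabricated placeholders, not real ECtHR text, and the feature/classifier choices are illustrative rather than the paper's exact setup:

```python
# Toy sketch: bag-of-words (TF-IDF) features from case text feeding a
# linear classifier that predicts violation (1) vs. no violation (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cases = [  # fabricated stand-ins for case texts
    "applicant detained without review court found excessive delay",
    "applicant complaint dismissed proceedings concluded promptly",
    "detention prolonged no effective remedy available",
    "fair hearing held within reasonable time",
]
labels = [1, 0, 1, 0]  # 1 = violation found, 0 = no violation

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(cases, labels)
print(model.predict(["detention without remedy review delayed"]))
```

The paper's finding that judges' surnames alone carry predictive signal would amount to swapping the case text for a string of surnames in the same pipeline, which is what makes that result striking.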
Informed consent is at the core of the clinical relationship. With the introduction of machine learning in healthcare, the role of informed consent is challenged. This paper addresses the issue of whether patients must be informed about medical ML applications and asked for consent. It aims to expose the discrepancy between ethical and practical considerations, while arguing that this polarization is a false dichotomy: in reality, ethics is applied to specific contexts and situations. Bridging this gap and considering the whole picture is essential for advancing the debate. In the light of the possible future developments of the situation and the technologies, as well as the benefits that informed consent for ML can bring to shared decision-making, the present analysis concludes that it is necessary to prepare the ground for a possible future requirement of informed consent for medical ML.
While Nozick and his sympathizers assume there is a widespread anti-hedonist intuition to prefer reality to an experience machine, hedonists have marshalled empirical evidence that shows such an assumption to be unfounded. Results of several experience machine variants indicate there is no widespread anti-hedonist intuition. From these findings, hedonists claim Nozick's argument fails as an objection to hedonism. This article suggests the argument surrounding experience machines has been misconceived. Rather than eliciting intuitions about what is prudentially valuable, these intuitive judgements are instead calculations about prudential pay-offs and trade-offs. This position can help explain the divergence of intuitions people have about experience machines.
It has become common to find diagrams and flow-charts used in our organizations to illustrate the nature of processes, what is involved and how it happens, or to show how parts of the organization interrelate and work together. Such diagrams are used because they are thought to aid visualization and simplify things, representing the essence of a particular situation, its core features. In this paper, using a social semiotic approach, we show that we need to develop a much more critical sense of how these diagrams and flow-charts can easily abstract, conceal and substitute for actual causalities, work roles and relationships. We demonstrate this using the example of a series of interrelated flow-charts used to implement a new system of target-based learning in preschools/kindergartens in Sweden – a system which works strongly in favor of a rapidly privatizing education sector. Here, the flow-charts shape how school processes and learning are presented so as to devalue the former system and valorize the new.
Genes are often described by biologists using metaphors derived from computational science: they are thought of as carriers of information, as being the equivalent of “blueprints” for the construction of organisms. Likewise, cells are often characterized as “factories” and organisms themselves become analogous to machines. Accordingly, when the human genome project was initially announced, the promise was that we would soon know how a human being is made, just as we know how to make airplanes and buildings. Importantly, modern proponents of Intelligent Design, the latest version of creationism, have exploited biologists’ use of the language of information and blueprints to make their spurious case, based on pseudoscientific concepts such as “irreducible complexity” and on flawed analogies between living cells and mechanical factories. However, the living organism = machine analogy was criticized already by David Hume in his Dialogues Concerning Natural Religion. In line with Hume’s criticism, over the past several years a more nuanced and accurate understanding of what genes are and how they operate has emerged, ironically in part from the work of computational scientists who take biology, and in particular developmental biology, more seriously than some biologists seem to do. In this article we connect Hume’s original criticism of the living organism = machine analogy with the modern ID movement, and illustrate how the use of misleading and outdated metaphors in science can play into the hands of pseudoscientists. Thus, we argue that dropping the blueprint and similar metaphors will improve both the science of biology and its understanding by the general public.