The use of the computer metaphor has led to the proposal of mind architecture (Pylyshyn 1984; Newell 1990) as a model of the organization of the mind. The dualist computational model, however, has, since the earliest days of psychological functionalism, required that the concepts of mind architecture and brain architecture remain remote from each other. The development of both connectionism and neurocomputational science has sought to dispense with this dualism and provide general models of consciousness – a uniform cognitive architecture – which is in general reductionist but which retains the computer metaphor. This paper examines, in the first place, the concepts of mind architecture and brain architecture, in order to evaluate the syntheses which have recently been offered. It then moves on to show how modifications which have been made to classical functionalist mind architectures, with the aim of making them compatible with brain architectures, are unable to resolve some of the most serious problems of functionalism. Some suggestions are given as to why it is not possible to relate mind structures and brain structures by using neurocomputational approaches, and finally the question is raised of the validity of reductionism in a theory which sets out to unite mind and brain architectures.
In this paper we continue to explore the ethics and social impact of augmented visual field devices. Recently, Microsoft announced the pending release of HoloLens, and Magic Leap filed a patent application for technology that will project light directly onto the wearer's retina. Here we explore the notion of deception in relation to the impact these devices have on developers, users, and non-users as they interact via these devices. These sorts of interactions raise questions regarding autonomy and suggest a strong need for informed consent protocols. We identify issues of ownership that arise due to the blending of physical and virtual space and important ways that these devices impact trust. Finally, we explore how these devices impact individual identity and thus raise the question of ownership of the space between an object and someone's eyes. We conclude that developers ought to take time to design and implement a natural, easy-to-use informed consent system with these devices.
Purpose This paper aims to explore the ethical and social impact of augmented visual field devices (AVFDs), identifying issues that AVFDs share with existing devices and suggesting new ethical and social issues that arise with the adoption of AVFDs. Design/methodology/approach This essay incorporates both a philosophical and an ethical analysis approach. It is based on Plato’s Allegory of the Cave, philosophical notions of transparency and presence, and human values including psychological well-being, physical well-being, privacy, deception, informed consent, ownership and property, and trust. Findings The paper concludes that the interactions among developers, users and non-users via AVFDs have implications for autonomy. It also identifies issues of ownership that arise because of the blending of physical and virtual space and important ways that these devices impact identity and trust. Practical implications Developers ought to take time to design and implement an easy-to-use informed consent system with these devices. There is a strong need for consent protocols among developers, users and non-users of AVFDs. Social implications There is a social benefit to users sharing what is visible on their devices with those who are in close physical proximity, but this introduces tension between notions of personal privacy and the establishment and maintenance of social norms. Originality/value There is new analysis of how AVFDs impact individual identity and the attendant ties to notions of ownership of the space between an object and someone’s eyes and control over perception.
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
This paper analyzes certain technical details of Floridi’s Theory of Strongly Semantic Information. It provides a clarification regarding desirable properties of degrees of informativeness functions by rejecting three of Floridi’s original constraints and proposing a replacement constraint. Finally, the paper briefly explores the notion of quantities of inaccuracy and shows an analysis that mimics Floridi’s analysis of quantities of vacuity.
In April 2004 the Parliamentary Assembly of the Council of Europe debated a report from its Social, Health and Family Affairs Committee, which questioned the Council of Europe’s opposition to legalising euthanasia. This article exposes the Report’s flaws, not least its superficiality and selectivity.
We demonstrate that different categories of software raise different ethical concerns with respect to whether software ought to be Free Software or Proprietary Software. We outline the ethical tension between Free Software and Proprietary Software that stems from the two kinds of licenses. For some categories of software we develop support for normative statements regarding the software development landscape. We claim that as society's use of software changes, the ethical analysis for that category of software must necessarily be repeated. Finally, we make a utilitarian argument that the software development environment should encourage both Free Software and Proprietary Software to flourish.
This commentary on Fresco's article "Information processing as an account of concrete digital computation" illuminates the two intertwined roles that the definition of the term "information" plays in Fresco's analysis. It provides analysis of the notion of actualizing control in information processing. The key point made is that not all control information in common computational devices can be processed.
In search of human uniqueness. Elsa Addessi (Istituto di Scienze e Tecnologie della Cognizione, Via Ulisse Aldrovandi, 16/b, 00197 Rome, Italy). Journal: Metascience. DOI: 10.1007/s11016-010-9472-6. Online ISSN 1467-9981; Print ISSN 0815-0796.
What is nontrivial digital computation? It is the processing of discrete data through discrete state transitions in accordance with finite instructional information. The motivation for our account is that many previous attempts to answer this question are inadequate, and also that this account accords with the common intuition that digital computation is a type of information processing. We use the notion of reachability in a graph to defend this characterization in memory-based systems and underscore the importance of instructional information for digital computation. We argue that our account evaluates positively against adequacy criteria for accounts of computation.
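The appeal to reachability in a graph can be illustrated with a minimal sketch (not taken from the paper; the machine, its state names, and the function are hypothetical): a system's discrete states form the vertices, its finite instruction table supplies the directed edges, and reachability from an initial state is computed by breadth-first search.

```python
from collections import deque

def reachable(transitions, start):
    """Return the set of states reachable from `start` by following
    discrete state transitions (breadth-first search)."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A toy machine: states are strings; the dict plays the role of a
# finite instruction table (hypothetical example).
machine = {
    "s0": ["s1", "s2"],
    "s1": ["s3"],
    "s2": ["s3"],
    "s3": [],
    "s4": ["s0"],  # s4 feeds into s0 but is not reachable from s0
}
print(sorted(reachable(machine, "s0")))  # ['s0', 's1', 's2', 's3']
```

On this picture, asking which states a computation can actually attain reduces to a graph-reachability question over the transition structure.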
In this paper we introduce a game semantics for System P, one of the most studied axiomatic systems for non-monotonic reasoning, conditional logic and belief revision. We prove soundness and completeness of the game semantics with respect to the rules of System P, and show that an inference is valid with respect to the game semantics if and only if it is valid with respect to the standard order semantics of System P. Combining these two results leads to a new completeness proof for System P with respect to its order semantics. Our approach allows us to construct for every inference either a concrete proof of the inference from the rules in System P or a countermodel in the order semantics. Our results rely on the notion of a witnessing set for an inference, whose existence is a concise, necessary and sufficient condition for the validity of an inference in System P. We also introduce an infinitary variant of System P and use the game semantics to show its completeness for the restricted class of well-founded orders.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
This paper describes the Electronic Schoolbag, a digital workspace developed at the University of Savoie (France), and analyses its usages. This online environment is dedicated to the educational world: it offers pupils, students, teachers, school staff, and parents personal and group workspaces in which individual or collaborative activities can take place. The flexibility of this software, allowing synchronous or asynchronous activities, lies in the “participation model”. This model allows groups themselves to describe and organise their activities. The architecture that permits its implementation in the Electronic Schoolbag workspace is described. The study of the practices of the workspace is then presented. This requires different observation methods, according to the different procedures chosen: real practices provided by quantitative methods (analysis of the logs of the actions and questionnaires) and imagined practices provided by qualitative methods (semi-directive interviews). The results obtained from the university users allow us to assess the evolution of the usages for different periods and on different university sites. The observatory also lets us list the main uses of the Electronic Schoolbag for educational communication (collaborative vs. individual, informative vs. communicative).
Do the dynamics of a physical system determine what function the system computes? Except in special cases, the answer is no: it is often indeterminate what function a given physical system computes. Accordingly, care should be taken when the question ‘What does a particular neuronal system do?’ is answered by hypothesising that the system computes a particular function. The phenomenon of the indeterminacy of computation has important implications for the development of computational explanations of biological systems. Additionally, the phenomenon lends some support to the idea that a single neuronal structure may perform multiple cognitive functions, each subserved by a different computation. We provide an overarching conceptual framework in order to further the philosophical debate on the nature of computational indeterminacy and computational explanation.
As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
Earthcare: Readings and Cases in Environmental Ethics presents a diverse collection of writings from a variety of authors on environmental ethics, environmental science, and the environmental movement overall. Exploring a broad range of world views, religions and philosophies, David W. Clowney and Patricia Mosto bring together insightful thoughts on the ethical issues arising in various areas of environmental concern.
Background and objective: Assuming the hypothesis that the general practitioner can and should be a key player in making end-of-life decisions for hospitalised patients, GPs’ perceptions of the role assigned to them by hospital doctors in making withdrawal decisions for such patients were surveyed. Design: Questionnaire survey. Setting: Urban and rural areas. Participants: GPs. Results: The response rate was 32.2%, and it was observed that 70.8% of respondents believed that their participation in withdrawal decisions for their hospitalised patients was essential, whereas 42.1% believed that the hospital doctors were sufficiently skilled to make withdrawal decisions without input from the GPs. Most respondents were found to believe that they had the necessary skills and enough time to participate in withdrawal decisions. The last case of treatment withdrawal in hospital for one of their patients was described by 40% of respondents, of whom only 40.0% believed that they had participated actively in the decision process. The major factors in the multivariate analysis were the GP’s strong belief that his or her participation was essential, information on admission of the patient given to the GP by the hospital department, rural practice, visits to the patient dying in hospital, and a request by the family to be kept informed about the patient. Conclusion: Strong interest was evinced among GPs regarding end-of-life issues, as well as considerable experience of patients dying at home. As GPs are more closely connected to patients’ families, they may be a good choice for third-party intervention in making end-of-life decisions for hospitalised patients.