Do the dynamics of a physical system determine what function the system computes? Except in special cases, the answer is no: it is often indeterminate what function a given physical system computes. Accordingly, care should be taken when the question ‘What does a particular neuronal system do?’ is answered by hypothesising that the system computes a particular function. The phenomenon of the indeterminacy of computation has important implications for the development of computational explanations of biological systems. Additionally, the phenomenon lends some support to the idea that a single neuronal structure may perform multiple cognitive functions, each subserved by a different computation. We provide an overarching conceptual framework in order to further the philosophical debate on the nature of computational indeterminacy and computational explanation.
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are, or may soon be, moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they did not explore deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative difference between ethical decisions and general decisions is that ethical decisions must be part of the process of developing ethical expertise within an agent. We use this distinction in examining publicity surrounding a particular experiment in which a simulated robot attempted to safeguard simulated humans from falling into a hole. We conclude that any suggestions that this simulated robot was making ethical decisions were misleading.
As software developers design artificial agents (AAs), they often have to wrestle with complex issues, issues that have philosophical and ethical importance. This paper addresses two key questions at the intersection of philosophy and technology: What is deception? And when is it permissible for the developer of a computer artifact to be deceptive in the artifact’s development? While exploring these questions from the perspective of a software developer, we examine the relationship of deception and trust. Are developers using deception to gain our trust? Is trust generated through technological “enchantment” warranted? Next, we investigate more complex questions of how deception that involves AAs differs from deception that only involves humans. Finally, we analyze the role and responsibility of developers in trust situations that involve both humans and AAs.
What is nontrivial digital computation? It is the processing of discrete data through discrete state transitions in accordance with finite instructional information. The motivation for our account is twofold: many previous attempts to answer this question are inadequate, and our account accords with the common intuition that digital computation is a type of information processing. We use the notion of reachability in a graph to defend this characterization in memory-based systems and to underscore the importance of instructional information for digital computation. We argue that our account fares well against adequacy criteria for accounts of computation.
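The abstract's appeal to reachability can be illustrated with a simple sketch (our own illustration, not the authors' formalism): treat a system's discrete states as nodes of a directed graph, its state transitions as edges, and ask which states are reachable from an initial configuration. The function name and toy machine below are hypothetical.

```python
# Illustrative sketch (not the authors' formalism): discrete states as nodes,
# state transitions as directed edges, reachability via breadth-first search.
from collections import deque


def reachable(transitions: dict, start: str) -> set:
    """Return every state reachable from `start` under the transition relation."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen


# A toy three-state machine: 'halt' is reachable from 'init', 'unused' is not.
toy = {"init": ["run"], "run": ["run", "halt"], "unused": ["init"]}
print(reachable(toy, "init"))  # {'init', 'run', 'halt'}
```

On such a picture, which parts of the state space a device can actually visit depends on the transition relation its instructional information induces, which is the kind of property the account uses to distinguish nontrivial computation.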
Purpose: This paper aims to explore the ethical and social impact of augmented visual field devices (AVFDs), identifying issues that AVFDs share with existing devices and suggesting new ethical and social issues that arise with the adoption of AVFDs.
Design/methodology/approach: This essay combines philosophical and ethical analysis. It draws on Plato’s Allegory of the Cave, philosophical notions of transparency and presence, and human values including psychological well-being, physical well-being, privacy, deception, informed consent, ownership and property, and trust.
Findings: The paper concludes that the interactions among developers, users and non-users via AVFDs have implications for autonomy. It also identifies issues of ownership that arise because of the blending of physical and virtual space, and important ways that these devices affect identity and trust.
Practical implications: Developers ought to take time to design and implement an easy-to-use informed consent system with these devices. There is a strong need for consent protocols among developers, users and non-users of AVFDs.
Social implications: There is a social benefit to users sharing what is visible on their devices with those in close physical proximity, but this introduces tension between notions of personal privacy and the establishment and maintenance of social norms.
Originality/value: The paper offers new analysis of how AVFDs affect individual identity, with attendant ties to notions of ownership of the space between an object and someone’s eyes and to control over perception.
We describe the process of updating the ACM Code of Ethics and Professional Conduct and the changes being suggested to it. In addition to addressing the technical and ethical basis for the proposed changes, we identify suggestions that commenters made in response to the first draft. We invite feedback on the proposed changes and on the suggestions that commenters made.
This paper analyzes certain technical details of Floridi’s Theory of Strongly Semantic Information. It provides a clarification regarding desirable properties of degrees of informativeness functions by rejecting three of Floridi’s original constraints and proposing a replacement constraint. Finally, the paper briefly explores the notion of quantities of inaccuracy and shows an analysis that mimics Floridi’s analysis of quantities of vacuity.
This commentary on Fresco's article "Information processing as an account of concrete digital computation" illuminates the two intertwined roles that the definition of the term "information" plays in Fresco's analysis. It provides an analysis of the notion of actualizing control in information processing. The key point made is that not all control information in common computational devices can itself be processed.
The Association for Computing Machinery's Committee on Professional Ethics has been charged with executing three major projects over the next two years: updating ACM's Code of Ethics and Professional Conduct, revising the enforcement procedures for the Code, and developing new media to promote integrity in the profession. We cannot do this alone, and we are asking SIGCAS members to volunteer and get involved. We will briefly describe the rationale and plan behind these projects and describe opportunities to get involved.
“Congealing” is a word that evokes senses of unpleasantness where perhaps something inviting had once been. It also implies that things are becoming less fluid and more rigid. As we began organizing ETHICOMP 2018, we wanted a theme that reflected the impact of technologies on human cultures, practices and lives. Our initial draft of the theme was “Creating, Changing, and Congealing Ways of Life with Technologies.” And while we were eventually persuaded to use a more congenial way of putting the idea (it became “Creating, Changing, and Coalescing Ways of Life with Technologies”), in some ways it remains true for us that “congealing,” with its connotations of something less pleasant, gets at the original idea. As we incorporate technologies into our practices, much attention is paid to how they change our ways of doing things. But technologies can also help ways of life set up and harden like yesterday’s leftovers: not appealing, yet difficult to budge and sometimes quite unhealthy. Once a particular process is built around a piece of technology, it can become entrenched and increasingly difficult to change. For a historical example, consider how difficult it was to adapt records and software on the eve of the 21st century in response to the so-called “Y2K” problem. In early software development, efficient use of memory was an important design consideration. It had become standard practice to use a two-digit field for the year, which would roll over from “99” to “00” in the year 2000, causing problems for date-dependent functions. Technologies can also reflect and reinforce existing cultural tendencies. For a recent example, consider the human resources software created by Amazon that used its existing hiring data to train a machine-learning system to rate applicants. The resulting system turned out to be biased against women applicants, downranking resumes that included the word “woman” or “women’s.” Amazon ended up scrapping the project altogether.
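The Y2K failure mode described above can be sketched in a few lines (an illustration of the general two-digit-year problem; the function names and the pivot value are our own, not drawn from any particular legacy system):

```python
# Minimal sketch of the Y2K two-digit-year problem: years stored as two
# digits to save memory, so date arithmetic breaks when the century rolls over.

def years_between_2digit(start_yy: int, end_yy: int) -> int:
    """Naive elapsed-year calculation on two-digit years."""
    return end_yy - start_yy

# A record created in 1999 ("99") and checked in 2000 ("00"):
print(years_between_2digit(99, 0))  # -99, rather than the correct 1

# One common remediation, "windowing": interpret two-digit years below a
# chosen pivot as 20xx and the rest as 19xx (pivot of 50 is illustrative).
def expand_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year to four digits using a fixed pivot window."""
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(0) - expand_year(99))  # 1
```

The entrenchment point is visible even here: once records, file formats, and downstream functions all assume the two-digit representation, the fix cannot be local; every consumer of the field has to change.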
Because of these kinds of effects, we wanted to encourage people to think beyond well-worn paradigms like “technologies are disruptive” and to consider other kinds of effects technologies can have. As it happened, the imagery of “congealing” proved distasteful enough that the steering committee opted for a more neutral term. But even with the less dramatic wording, the conference attracted a rich and diverse range of papers that examined technological issues from a variety of angles, exactly as we had hoped.
We demonstrate that different categories of software raise different ethical concerns with respect to whether software ought to be Free Software or Proprietary Software. We outline the ethical tension between Free Software and Proprietary Software that stems from the two kinds of licenses. For some categories of software we develop support for normative statements regarding the software development landscape. We claim that as society's use of software changes, the ethical analysis for that category of software must necessarily be repeated. Finally, we make a utilitarian argument that the software development environment should encourage both Free Software and Proprietary Software to flourish.