After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, computer systems have intentionality, and because of this, they should not be dismissed from the realm of morality in the same way that natural objects are dismissed. Natural objects behave from necessity; computer systems and other artifacts behave from necessity after they are created and deployed, but, unlike natural objects, they are intentionally created and deployed. Failure to recognize the intentionality of computer systems and their connection to human intentionality and action hides the moral character of computer systems. Computer systems are components in human moral action. When humans act with artifacts, their actions are constituted by the intentionality and efficacy of the artifact, which in turn has been constituted by the intentionality and efficacy of the artifact designer. All three components – artifact designer, artifact, and artifact user – are at work when there is an action, and all three should be the focus of moral evaluation.
Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting reactions, is not predetermined. The animal–robot analogy is one of the most commonly used in attempting to frame interactions between humans and robots, and it also tends to push in the direction of blurring the distinction between humans and machines. We argue that, despite some shared characteristics, when it comes to thinking about the moral status of humanoid robots, legal liability, and the impact of treatment of humanoid robots on how humans treat one another, analogies with animals are misleading.
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse, as well as discourse among AI researchers, leads to at least two problems: a confusion about the notion of ‘autonomy’ that induces people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use a language that reframes the discourse in AI and sheds light on the real issues in the discipline.
Floridi and Sanders’ seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as it functions in real-world contexts. We use the VW emissions fraud case to explain triadic agency, since in this case a technological artifact, namely software, was an essential part of the wrongdoing and the software might be said to have agency in the wrongdoing. We then extend the case to include futuristic AI, imagining AI that becomes more and more autonomous.
After reviewing portions of the 21st Century Nanotechnology Research and Development Act that call for examination of societal and ethical issues, this essay seeks to understand how nanoethics can play a role in nanotechnology development. What can and should nanoethics aim to achieve? The focus of the essay is on the challenges of examining ethical issues with regard to a technology that is still emerging, still ‘in the making.’ The literature of science and technology studies (STS) is used to understand the nanotechnology endeavor in a way that makes room for influence by nanoethics. The analysis emphasizes: the contingency of technology and the many actors involved in its development; a conception of technology as sociotechnical systems; and the values infused (in a variety of ways) in technology. Nanoethicists can be among the many actors who shape the meaning and materiality of an emerging technology. Nevertheless, there are dangers that nanoethicists should try to avoid. The possibility of being co-opted as a result of working alongside nanotechnology engineers and scientists is one danger that is inseparable from trying to exert influence. Related but somewhat different is the danger of failing to ask about the worthiness of the nanotechnology enterprise as a social investment in the future.
In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart and by Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. We argue that ascriptions of both moral and role responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.
Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. When the black box is opened up and we see how autonomy is understood and ‘made’ by those involved in the design and development of robots, the responsibility questions change significantly.
Since the idea of forbidden knowledge is rooted in the biblical story of Adam and Eve eating from the forbidden tree of knowledge, its meaning today, in particular as a metaphor for scientific knowledge, is not so obvious. We can and should ask questions about the autonomy of science.
In this paper I use the concept of forbidden knowledge to explore questions about putting limits on science. Science has generally been understood to seek and produce objective truth, and this understanding of science has grounded its claim to freedom of inquiry. What happens to decision making about science when this claim to objective, disinterested truth is rejected? There are two changes that must be made to update the idea of forbidden knowledge for modern science. The first is to shift from presuming that decisions to constrain or even forbid knowledge can be made from a position of omniscience (perfect knowledge) to recognizing that such decisions are made by human beings from a position of limited or partial knowledge. The second is to reject the idea that knowledge is objective and disinterested and accept that knowledge (even scientific knowledge) is interested. In particular, choices about what knowledge gets created are normative, value choices. When these two changes are made to the idea of forbidden knowledge, questions about limiting or forbidding lines of inquiry are shown to distract attention from the more important matters of who decides what knowledge is produced and how those decisions are made. Much more attention should be focused on choosing directions in science, and as this is done, the matter of whether constraints should be placed on science will fall into place.
The following views were presented at the Annual Meeting of the American Association for the Advancement of Science Seminar “Teaching Ethics in Science and Engineering”, 10–11 February 1993, organized by Stephanie J. Bird, Penny J. Gilmer and Terrell W. Bynum. Opragen Publications thanks the AAAS, seminar organizers and authors for permission to publish extracts from the conference. The opinions expressed are those of the authors and do not reflect the opinions of AAAS or its Board of Directors.
An engaging, accessible survey of the ethical issues faced by engineers, designed for students. The first engineering ethics textbook to use debates as the framework for presenting engineering ethics topics, this engaging, accessible survey explores the most difficult and controversial issues that engineers face in daily practice. Written by a leading scholar in the field of engineering and computer ethics, Deborah Johnson approaches engineering ethics with three premises: that engineering is both a technical and a social endeavor; that engineers don’t just build things, they build society; and that engineering is an inherently ethical enterprise.
The first topic of concern is anonymity, specifically the anonymity that is available in communications on the Internet. An earlier paper argues that anonymity in electronic communication is problematic because: it makes law enforcement difficult; it frees individuals to behave in socially undesirable and harmful ways; it diminishes the integrity of information, since one can't be sure who information is coming from, whether it has been altered on the way, etc.; and all three of the above contribute to an environment of diminished trust which is not conducive to certain uses of computer communication. Counterbalancing these problems are some important benefits. Anonymity can facilitate some socially desirable and beneficial behavior. For example, it can eliminate the fear of repercussions for behavior in contexts in which repercussions would diminish the availability or reliability of information, e.g., voting, personal relationships between consenting adults, and the like. Furthermore, anonymity can be used constructively to reduce the effect of prejudices on communications. The negative aspects of anonymity all seem to point to a tension between accountability and anonymity. They suggest that accountability and anonymity are not compatible, and they even seem to suggest that since accountability is a good thing, it would be good to eliminate anonymity. In other words, the problems with anonymity suggest that individuals are more likely to behave in socially desirable ways when they are held accountable for their behavior, and more likely to engage in socially undesirable behavior when they are not. I am not going to take issue with the correlation between accountability and anonymity, but rather with the claim that accountability is good. To examine this problem, let's look at a continuum that stretches from total anonymity at one end to no anonymity at all at the other. At the opposite extreme from anonymity is a panopticon society. The panopticon is the prison environment described by Foucault in which prison cells are arranged in a large circle with the side facing the inside of the circle open to view. The guard tower is placed in the middle of the circle so that guards can see everything that goes on in every cell. In a recent article on privacy, Jeffrey Reiman, reflecting on the new intelligent highway systems, suggests that we are moving closer and closer to a panopticon society. When we contemplate all the electronic data that is now gathered about each one of us as we move through our everyday lives – intelligent highway systems, consumer transactions, traffic patterns on the internet, medical records, financial records, and so on – we see the trend that Reiman identifies. Electronic behavior is recorded and the information is retained. While actions and transactions in separate domains are not necessarily combined, it seems obvious that the potential exists for combining data into a complete portfolio of an individual's day-to-day life. So it would seem that the more activities and domains are moved into an IT-based medium, the closer we will come to a panopticon society. A panopticon society gives us the ultimate in accountability. Everything an individual does is observable and therefore available to those to whom we are accountable. Of course, in doing this, it puts us, in effect, in prison. The prison parallel is appropriate here because what anonymity allows us is freedom; prison is the ultimate in lack of freedom. 
In this way the arguments for a free society become arguments for anonymity. Only when individuals are free will they experiment, try new ideas, take risks, and learn by doing so. Only in an environment that tolerates making mistakes will individuals develop the active habits that are so essential for democracy. In a world without information technology, individuals have various levels, degrees, and kinds of anonymity and consequently different levels and kinds of freedom. Degrees and kinds of anonymity vary with the domain: small-town social life versus urban social life, voting, commercial exchanges, banking, automobile travel, airplane travel, telephone communication, education, and so on. Drawing from our experience before IT-based institutions, we might believe that what we need is varying levels or degrees and kinds of anonymity. This seems a good starting place because it suggests an attempt to re-create the mixture that we have in the physical, non-IT-based world. Nevertheless, there is a danger. If we think in terms of levels and degrees of anonymity, we may not see the forest for the trees. We may not acknowledge that in an electronic medium, levels and kinds of anonymity mean, in an important sense, no anonymity. If there are domains in which we can be anonymous but those domains are part of a global communication infrastructure in which there is no anonymity at the entry point, then it will always be possible to trace someone's identity. We delude ourselves when we think we have anonymity on-line or off-line. Rather, what we have in both places are situations in which it is more or less difficult to identify individuals. We have a continuum of situations in which it is easier or harder to link behavior to other behavior and to histories of behavior. In the physical world, we can go places and do things where others don't know us by name and have no history with us, though they see our bodies, clothes, and behavior. If we do nothing unusual, we may be forgotten. On the other hand, if we do something illegal, authorities may attempt to track us down and figure out who we are. For example, law enforcement officials, collection agencies, and those who want to sue us may take an active interest in removing our anonymity, ex post facto. Think of Timothy McVeigh and Terry Nichols, the men who apparently bombed the federal building in Oklahoma City. Much of what they did, they did anonymously, but then law enforcement officials set out to find out who had done various things, e.g., rented a car, bought explosives, etc. The shrouds of anonymity under which McVeigh and Nichols had acted were slowly removed. Is this any different from behavior on the internet? Is there a significant difference in the kind or degree of anonymity we have in the physical world versus what we have in an IT-based world? The character of the trail we leave is different: in the one case it's an electronic trail, while in the other it involves human memories, photographs, and paper and ink. What law enforcement officials had to do to track down McVeigh is quite different from tracking down an electronic lawbreaker. Also, the cost of electronic information gathering, both in time and money, can be dramatically lower than the cost of talking to people, gathering physical evidence, and the other minutiae required by traditional detective work. We should acknowledge that we do not and are never likely to have anonymity on the Internet. We would do better to think of different levels or kinds of identity. 
There are important moral and social issues arising as a result of these varying degrees and kinds of identity. Perhaps the most important matter is assuring that individuals are informed about the conditions in which they are interacting. Perhaps even more important is that individuals have a choice about the conditions under which they are communicating. In the rest of this paper we explore a few examples of levels and kinds of identity that are practical on the Internet. We discuss the advantages and disadvantages that we see for these "styles" of identity for individuals, and we examine the costs and benefits of these styles for society as a whole.
As we move our social institutions from paper-and-ink-based operations to the electronic medium, we invisibly create a type of surveillance society, a panopticon society. It is not the traditional surveillance society in which government officials follow citizens around because they are concerned about threats to the political order. Instead it is piecemeal surveillance by public and private organizations. Piecemeal though it is, it creates the potential for the old kind of surveillance on an even grander scale. The panopticon is the prison environment described by Foucault in which prison cells are arranged in a large circle with the side facing the inside of the circle open to view. The guard tower is placed in the middle of the circle so that guards can see everything that goes on in every cell. When we contemplate all the electronic data that is now gathered about each of us as we move through our everyday lives – intelligent highway systems, consumer transactions, traffic patterns on the internet, medical records, financial records, and so on – there seems little doubt that we are moving into a panopticon. The social issues that arise from this are too numerous to detail here, but data retention is an important part of it. In the paper-and-ink world, documents are filed, files are boxed, boxes are put away or thrown away. The capacity for data retrieval and manipulation is thereby limited by the sheer difficulty and cost of storing, finding, searching, and manipulating large numbers of paper files. This inconvenience functions as a mechanism whereby the system forgets past information, not unlike the way we ourselves forget. However, the story is very different in the digital world; digital information is easy to store, easy to search and manipulate, and inexpensive to keep over extensive periods of time. Digitized information systems tend, therefore, to collect extensive ancillary information and to retain this information indefinitely. Such lack of forgetfulness is likely to hamper the ability of individuals to shed their past and start over with a clean slate. Concerns about data retention were expressed in the early literature on the social impact of computing, but for the most part the issue has dropped from sight. Rarely has the social good of discarding accumulated personal data been addressed. In this paper, we make the case for that good by examining diverse cases in which retention of information by business or governmental institutions hinders the ability of individuals to start over or to act autonomously. We hope our argument for the good of forgetfulness will challenge the standard framework in which such issues have traditionally been debated. The privacy debate exemplifies the traditional framework insofar as it has been characterized as involving an inherent tension between, on the one hand, the needs of organizations and institutions for more accurate and efficient information systems so as to further their goals and, on the other hand, the desire of individuals to have information about them kept private. Regan argues against this framing of the privacy issue in favor of one that recognizes the social importance of personal privacy. We will examine the non-forgetfulness of information systems as a problem threatening not just individual interests but the social good as well. Cryptography is often cited as the technology that will give us privacy and militate against surveillance. 
One of the uses of cryptography, encryption, will, some hope, allow us to create confidentiality and relationships of trust that will facilitate many of the social arrangements we now have and perhaps make them even more secure than they are now. Electronic cash, for example, could be created in such a form that it would have the anonymity now associated with hard cash. Others are less optimistic about the potential for cryptography to re-create relationships of trust in the new medium. One important point that already seems clear is that even if encryption technology protects the confidentiality and integrity of electronic transactions and data, it will NOT stop the observation of traffic patterns on networks. This seems an important distinction to put on the table. Our patterns of communication will continue to be available, no matter what is encrypted, and an amazing amount of information can be gleaned from this data. In a sense, it means content integrity but no anonymity. This will undoubtedly affect how we interact and with whom we interact.
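To make the abstract's point concrete, here is a minimal sketch, not from the paper, of the asymmetry it describes: encryption hides message content, but an observer of the channel still sees who is communicating with whom, when, and how much. It is written in Python using the third-party cryptography package; the addresses and message are hypothetical.

```python
# Minimal sketch: encryption protects content, not traffic patterns.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from datetime import datetime, timezone
from cryptography.fernet import Fernet

key = Fernet.generate_key()            # secret shared by sender and recipient
cipher = Fernet(key)

plaintext = b"Meet me at the usual place at noon."   # hypothetical message
ciphertext = cipher.encrypt(plaintext)                # unreadable without the key

# What a network observer can still record (the traffic pattern, or 'envelope'):
envelope = {
    "sender": "alice@example.org",      # hypothetical addresses
    "recipient": "bob@example.org",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "size_bytes": len(ciphertext),
}
print(envelope)                         # visible even though the content is encrypted

# Only a holder of the key recovers the content:
assert cipher.decrypt(ciphertext) == plaintext
```

The sketch is only illustrative; real traffic analysis works on packet metadata rather than an application-level dictionary, but the contrast between protected content and observable communication patterns is the one the abstract draws.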