References
  • Can AI determine its own future? Aybike Tunç - forthcoming - AI and Society:1-12.
    This article investigates the capacity of artificial intelligence (AI) systems to claim the right to self-determination while exploring the prerequisites for individuals or entities to exercise control over their own destinies. The paper delves into the concept of autonomy as a fundamental aspect of self-determination, drawing a distinction between moral and legal autonomy and emphasizing the pivotal role of dignity in establishing legal autonomy. The analysis examines various theories of dignity, with a particular focus on Hannah Arendt’s perspective. Additionally, the (...)
  • The hard limit on human nonanthropocentrism. Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)