  • Should we welcome robot teachers? Amanda J. C. Sharkey - 2016 - Ethics and Information Technology 18 (4):283-297.
    Current uses of robots in classrooms are reviewed and used to characterise four scenarios: Robot as Classroom Teacher; Robot as Companion and Peer; Robot as Care-eliciting Companion; and Telepresence Robot Teacher. The main ethical concerns associated with robot teachers are identified as: privacy; attachment, deception, and loss of human contact; and control and accountability. These are discussed in terms of the four identified scenarios. It is argued that classroom robots are likely to impact children's privacy, especially when they masquerade as (...)
  • Could you hate a robot? And does it matter if you could? Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • It’s Friendship, Jim, but Not as We Know It: A Degrees-of-Friendship View of Human–Robot Friendships. Helen Ryland - 2021 - Minds and Machines 31 (3):377-393.
    This article argues in defence of human–robot friendship. I begin by outlining the standard Aristotelian view of friendship, according to which there are certain necessary conditions which x must meet in order to ‘be a friend’. I explain how the current literature typically uses this Aristotelian view to object to human–robot friendships on theoretical and ethical grounds. Theoretically, a robot cannot be our friend because it cannot meet the requisite necessary conditions for friendship. Ethically, human–robot friendships are wrong because they (...)
  • Responsible research for the construction of maximally humanlike automata: the paradox of unattainable informed consent. Lantz Fleming Miller - 2020 - Ethics and Information Technology 22 (4):297-305.
    Since the Nuremberg Code and the first Declaration of Helsinki, globally there has been increasing adoption of and adherence to procedures for ensuring that human subjects in research are as well informed as possible of the study’s reasons and risks and voluntarily consent to serving as subjects. To do otherwise is essentially viewed as a violation of the human research subject’s legal and moral rights. However, with the recent philosophical concerns about responsible robotics, the limits and ambiguities of research-subject ethical codes become (...)
  • Human Rights of Users of Humanlike Care Automata. Lantz Fleming Miller - 2020 - Human Rights Review 21 (2):181-205.
    Care is more than dispensing pills or cleaning beds. It is about responding to the entire patient. What is called “bedside manner” in medical personnel is a quality of treating the patient not as a mechanism but as a being—much like the caregiver—with desires, ideas, dreams, aspirations, and the gamut of mental and emotional character. As automata, answering an increasing functional need in care, are designed to enact care, the pressure is on their becoming more humanlike to carry out the (...)