References
• Why robots should not be treated like animals. Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
• There’s something in your eye: ethical implications of augmented visual field devices. Marty J. Wolf, Frances S. Grodzinsky & Keith W. Miller - 2016 - Journal of Information, Communication and Ethics in Society 14 (3):214-230.
    Purpose This paper aims to explore the ethical and social impact of augmented visual field devices, identifying issues that AVFDs share with existing devices and suggesting new ethical and social issues that arise with the adoption of AVFDs. Design/methodology/approach This essay incorporates both a philosophical and an ethical analysis approach. It is based on Plato’s Allegory of the Cave, philosophical notions of transparency and presence and human values including psychological well-being, physical well-being, privacy, deception, informed consent, ownership and property and (...)
• We need to talk about deception in social robotics! Amanda Sharkey & Noel Sharkey - 2020 - Ethics and Information Technology 23 (3):309-316.
    Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when (...)
• Should my robot know what's best for me? Human–robot interaction between user experience and ethical design. Nora Fronemann, Kathrin Pollmann & Wulf Loh - 2022 - AI and Society 37 (2):517-533.
    To integrate social robots in real-life contexts, it is crucial that they are accepted by the users. Acceptance is not only related to the functionality of the robot but also strongly depends on how the user experiences the interaction. Established design principles from usability and user experience research can be applied to the realm of human–robot interaction, to design robot behavior for the comfort and well-being of the user. Focusing the design on these aspects alone, however, comes with certain ethical (...)
• Trusting the (ro)botic other. Paul B. de Laat - 2015 - ACM SIGCAS Computers and Society 45 (3):255-260.
How may human agents come to trust artificial agents? At present, since the trust involved is non-normative, this would seem to be a slow process, depending on the outcomes of the transactions. Some more options may soon become available, though. As debated in the literature, humans may meet bots as they are embedded in an institution. If they happen to trust the institution, they will also trust it to have tried out and tested the machines in its back corridors; as (...)