From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence

Ethics and Information Technology 22 (4):321-333 (2020)

Abstract

As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, from a theoretical perspective a form of irresponsibility is inherent in the nature of robotics technologies that threatens to thwart the endeavour. This is because the concept of responsibility, despite often being treated as such, is not monolithic: rather, this seemingly unified concept consists of converging and confluent concepts that shape what we colloquially call responsibility. Depending on which particular concept of responsibility is foregrounded, robotics will be simultaneously responsible and irresponsible: an observation that cuts against the grain of the drive towards responsible robotics. This problem is further compounded by the contrast between responsible design and development on the one hand and responsible use on the other. From a different perspective, the difficulty in defining the concept of responsibility in robotics arises because human responsibility is the main frame of reference. Robotic systems are increasingly expected to achieve human-level performance, including the capacities associated with responsibility and the other criteria necessary to act responsibly. This subsists within a larger phenomenon in which the difference between humans and non-humans, be they animals or artificial systems, appears increasingly blurred, thereby disrupting orthodox understandings of responsibility. This paper seeks to supplement the responsible robotics impulse by proposing a complementary set of human rights directed specifically against the harms arising from robotic and artificial intelligence technologies. The relationship between the responsibilities of the agent and the rights of the patient suggests that a rights regime is the other side of the responsibility coin. The major distinction of this approach is that it inverts the power relationship: while human agents are perceived to control robotic patients, the prospect of this relationship becoming reversed is emerging. As robotic technologies become ever more sophisticated, and even genuinely complex, asserting human rights directly against robotic harms becomes increasingly important. Such an approach includes not only developing human rights that ‘protect’ humans but also rights that ‘strengthen’ people against the challenges introduced by robotics and AI [this distinction parallels Berlin’s negative and positive concepts of liberty], by emphasising the social and reflective character of the notion of humanness as well as the difference between the human and the nonhuman. This allows the human frame of reference to be constitutive of, rather than merely subject to, robotic and AI technologies, so that it is human rather than technological characteristics that shape the human rights framework in the first place.

Similar books and articles

Sustainability of Artificial Intelligence: Reconciling human rights with legal rights of robots. Ammar Younas & Rehan Younas - forthcoming - In Zhyldyzbek Zhakshylykov & Aizhan Baibolot (eds.), Quality Time 18. International Alatoo University Kyrgyzstan. pp. 25-28.
Responsibility Practices and Unmanned Military Technologies. Merel Noorman - 2014 - Science and Engineering Ethics 20 (3):809-826.

Author's Profile

H. C. B. Liu
University of California, Berkeley
