Abstract
Scholars, policymakers and organizations in the EU, especially at the level of the European Commission, have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). However, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenon and (2) its comparison to similar episodes of “ethification” in other fields, which would highlight common (unresolved) challenges. Contrary to some mainstream narratives, which stress that the increasing attention to ethical aspects of AI stems from the fast pace and growing risks of technological development, perspectives informed by Science and Technology Studies (STS) highlight that the rise of institutionalized assessment methods reflects a need for governments to gain more control over scientific research and to bring EU institutions closer to the public in controversies over emerging technologies. This article analyzes how different approaches of the recent past (i.e. bioethics, technology assessment (TA), ethical, legal and social (ELS) research, and Responsible Research and Innovation (RRI)) followed one another, often “in the name of ethics”, to address previous criticisms and/or to legitimate certain scientific and technological research programs. The focus is on how a brief history of the institutionalization of these approaches can shed light on present challenges in the ethics of AI related to methodological issues, the mobilization of expertise and public participation.