An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-Regulation

Moral Philosophy and Politics 9 (2):239-265 (2022)

Abstract

This article explores how government and the private sector can cooperate to tackle the ethical dimension of artificial intelligence. The argument draws on the institutionalist approach in philosophy and business ethics, which defends a ‘division of moral labor’ between governments and the private sector. The goal and main contribution of this article are to explain how this approach can provide ethical guidelines to the AI industry and to highlight the limits of self-regulation. In what follows, I discuss three institutionalist claims. First, principles of AI ethics should be validated through legitimate democratic processes. Second, compliance with these principles should be secured in a stable way. Third, their implementation in practice should be as efficient as possible. If we accept these claims, there are good reasons to conclude that, in many cases, governments implementing hard regulation are in principle the best instruments to secure the ethical development of AI systems. Where adequate regulation exists, firms should respect the law. Where such regulation does not yet exist, businesses’ ethical priority should be to help governments build it, not to self-regulate.

Links

PhilArchive

Analytics

Added to PP
2022-10-06


Author's Profile

Thomas Ferretti
University of Greenwich