AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths?

In Technology Ethics: A Philosophical Introduction and Readings (forthcoming)

Abstract

This chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a threat. Humans could turn the machine off at any point, and then it wouldn’t be able to make as many paperclips as possible! Taken to the logical extreme, the result is quite grim—the PCM might even start using humans as raw material for paperclips. The predicament only deepens once we realize that Bostrom’s thought experiment overlooks a key player. The PCM and algorithms like it do not arise spontaneously (at least, not yet). Most likely, some corporation—say, Office Corp.—designed, owns, and runs the PCM. The more paperclips the PCM manufactures, the more profits Office Corp. makes, even if that entails converting some humans (but preferably not customers!) into raw materials. Less dramatically, Office Corp. may also make more money when the PCM engages in other socially sub-optimal behaviors that would otherwise violate the law, like money laundering, sourcing materials from endangered habitats, manipulating the market for steel, or colluding with competitors over prices. The consequences are predictable and dire. If Office Corp. isn’t held responsible, it will not stop with the PCM. Office Corp. would have every incentive to develop more maximizers—say, for papers, pencils, and protractors. This chapter issues a challenge for tech ethicists, social ontologists, and legal theorists: How can the law help mitigate algorithmic harms without overly compromising the potential that AI has to make us all healthier, wealthier, and wiser? The answer is far from straightforward.

Links

PhilArchive




Similar books and articles

Love in the time of AI. Amy Kind - 2021 - In Barry Dainton, Attila Tanyi & Will Slocombe (eds.), Minding the Future: Artificial Intelligence, Philosophical Visions and Science Fiction. pp. 89-106.
Legal personhood for artificial intelligences. Lawrence B. Solum - 1992 - North Carolina Law Review 70:1231.
Two arguments against human-friendly AI. Ken Daley - 2021 - AI and Ethics 1 (1):435-444.
The singularity: A philosophical analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10):7-65.

Analytics

Added to PP
2022-10-04


Author's Profile

Mihailis E. Diamantis
University of Iowa
