Evaluating Risks of Astronomical Future Suffering: False Positives vs. False Negatives Regarding Artificial Sentience

Abstract

Failing to recognise sentience in AI systems (false negatives) poses a far greater risk of potentially astronomical suffering than mistakenly attributing sentience to non-sentient systems (false positives). This paper analyses the issue through the moral frameworks of longtermism, utilitarianism, and deontology, concluding that all three assign greater urgency to avoiding false negatives. Given the astronomical number of AIs that may exist in the future, even a small chance of overlooking sentience is an unacceptable risk. To address this, the paper proposes a comprehensive approach comprising research, field-building, and tentative policy development. Humanity must take steps to ensure the well-being of all sentient minds, both biological and artificial.

Links

PhilArchive

Analytics

Added to PP
2024-04-04
