Abstract
This paper focuses on the Sapient and Sentient Intelligence Value Argument (SSIVA), the ethics of applying it to autonomous systems, how such systems might be governed by extending current regulation, and how SSIVA provides a computable model of ethics for AGI research. SSIVA rests on a static core definition of "Intelligence" as the measured ability to understand, use, and generate knowledge or information independently, all of which are functions of sapience and sentience. SSIVA logic assigns priority value to every individual human and their potential for Intelligence, and values other systems to the degree that they are self-aware or "intelligent." The paper then lays out the case for how the current legal framework could be extended to address issues with autonomous systems to varying degrees, depending on where such systems fall relative to the SSIVA threshold. Finally, from a research standpoint it is important to have a discrete, unambiguous model to measure against, which SSIVA theory provides.