Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.
We define a notion of the intelligence level of an idealized mechanical knowing agent. This is motivated by efforts within artificial intelligence research to define real-number intelligence levels of complicated intelligent systems. Our agents are more idealized, which allows us to define a much simpler measure of intelligence level for them. In short, we define the intelligence level of a mechanical knowing agent to be the supremum of the computable ordinals that have codes the agent knows to be codes of computable ordinals. We prove that if one agent knows certain things about another agent, then the former necessarily has a higher intelligence level than the latter. This allows our intelligence notion to serve as a stepping stone to obtain results which, by themselves, are not stated in terms of our intelligence notion (results of potential interest even to readers totally skeptical that our notion correctly captures intelligence). As an application, we argue that these results comprise evidence against the possibility of intelligence explosion (that is, the notion that sufficiently intelligent machines will eventually be capable of designing even more intelligent machines, which can then design even more intelligent machines, and so on).
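As a hedged restatement of the definition just described (the notation here is ours, not necessarily the paper's), the intelligence level of a knowing agent $X$ can be written:

```latex
\|X\| \;=\; \sup\,\{\, |n| \;:\; X \text{ knows that } n \text{ is a code of a computable ordinal} \,\}
```

where $|n|$ denotes the computable ordinal coded by $n$. The comparison result then takes the form: if $Y$ knows suitable facts about $X$, then $\|Y\| > \|X\|$.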
Legg and Hutter, as well as subsequent authors, considered intelligent agents through the lens of interaction with reward-giving environments, attempting to assign numeric intelligence measures to such agents, with the guiding principle that a more intelligent agent should gain higher rewards from environments in some aggregate sense. In this paper, we consider a related question: rather than measure numeric intelligence of one Legg-Hutter agent, how can we compare the relative intelligence of two Legg-Hutter agents? We propose an elegant answer based on the following insight: we can view Legg-Hutter agents as candidates in an election, whose voters are environments, letting each environment vote (via its rewards) which agent (if either) is more intelligent. This leads to an abstract family of comparators simple enough that we can prove some structural theorems about them. It is an open question whether these structural theorems apply to more practical intelligence measures.
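To make the election metaphor concrete, here is a minimal toy sketch (our own simplification, not the paper's formal framework): agents are reduced to action policies, environments to reward functions, and each environment casts a vote for whichever agent earns more total reward.

```python
# Toy sketch (hypothetical, simplified): comparing two agents by letting
# each environment "vote" for whichever agent earns more total reward.
# Real Legg-Hutter environments are reward-giving probabilistic Turing
# machines; here an environment is just a function from an agent and a
# horizon to a total reward.

def elect(agent_a, agent_b, environments, horizon=10):
    """Return +1 if more environments prefer agent_a, -1 if agent_b, 0 on tie."""
    votes = 0
    for env in environments:
        reward_a = env(agent_a, horizon)
        reward_b = env(agent_b, horizon)
        if reward_a > reward_b:
            votes += 1
        elif reward_b > reward_a:
            votes -= 1
        # an environment abstains when both agents earn equal reward
    return (votes > 0) - (votes < 0)

# Example: agents are constant-action policies; each environment rewards
# one particular action.
always_0 = lambda obs: 0
always_1 = lambda obs: 1

def make_env(preferred_action):
    def env(agent, horizon):
        return sum(1 for t in range(horizon) if agent(t) == preferred_action)
    return env

envs = [make_env(1), make_env(1), make_env(0)]
print(elect(always_1, always_0, envs))  # 2 votes to 1 -> prints 1
```

Different ways of aggregating the votes (majority, weighting environments by simplicity, and so on) give different members of the abstract comparator family the abstract studies.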
After generalizing the Archimedean property of real numbers in such a way as to make it adaptable to non-numeric structures, we demonstrate that the real numbers cannot be used to accurately measure non-Archimedean structures. We argue that, since an agent with Artificial General Intelligence (AGI) should have no problem engaging in tasks that inherently involve non-Archimedean rewards, and since traditional reinforcement learning rewards are real numbers, therefore traditional reinforcement learning probably will not lead to AGI. We indicate two possible ways traditional reinforcement learning could be altered to remove this roadblock.
We propose that, for the purpose of studying theoretical properties of the knowledge of an agent with Artificial General Intelligence (that is, the knowledge of an AGI), a pragmatic way to define such an agent’s knowledge (restricted to the language of Epistemic Arithmetic, or EA) is as follows. We declare an AGI to know an EA-statement φ if and only if that AGI would include φ in the resulting enumeration if that AGI were commanded: “Enumerate all the EA-sentences which you know.” This definition is non-circular because an AGI, being capable of practical English communication, is capable of understanding the everyday English word “know” independently of how any philosopher formally defines knowledge; we elaborate further on the non-circularity of this circular-looking definition. This elegantly solves the problem that different AGIs may have different internal knowledge definitions and yet we want to study knowledge of AGIs in general, without having to study different AGIs separately just because they have separate internal knowledge definitions. Finally, we suggest how this definition of AGI knowledge can be used as a bridge which could allow the AGI research community to import certain abstract results about mechanical knowing agents from mathematical logic.
Can an agent's intelligence level be negative? We extend the Legg-Hutter agent-environment framework to include punishments and argue for an affirmative answer to that question. We show that if the background encodings and Universal Turing Machine (UTM) admit certain Kolmogorov complexity symmetries, then the resulting Legg-Hutter intelligence measure is symmetric about the origin. In particular, this implies reward-ignoring agents have Legg-Hutter intelligence 0 according to such UTMs.
A variation of Fitch’s paradox is given, where no special rules of inference are assumed, only axioms. These axioms follow from the familiar assumptions which involve rules of inference. We show (by constructing a model) that by allowing that possibly the knower doesn’t know his own soundness (while still requiring he be sound), Fitch’s paradox is avoided. Provided one is willing to admit that sound knowers may be ignorant of their own soundness, this might offer a way out of the paradox.
In 2011, Hibbard suggested an intelligence measure for agents who compete in an adversarial sequence prediction game. We argue that Hibbard’s idea should actually be considered as two separate ideas: first, that the intelligence of such agents can be measured based on the growth rates of the runtimes of the competitors that they defeat; and second, one specific (somewhat arbitrary) method for measuring said growth rates. Whereas Hibbard’s intelligence measure is based on the latter growth-rate-measuring method, we survey other methods for measuring function growth rates, and exhibit the resulting Hibbard-like intelligence measures and taxonomies. Of particular interest, we obtain intelligence taxonomies based on Big-O and Big-Theta notation systems, which taxonomies are novel in that they challenge conventional notions of what an intelligence measure should look like. We discuss how intelligence measurement of sequence predictors can indirectly serve as intelligence measurement for agents with Artificial General Intelligence (AGI).
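As a toy illustration of how a Big-O-based taxonomy differs from a single numeric measure (this model is our own simplification, representing each defeated predictor's polynomial runtime just by its degree), an agent can be assigned the *set* of growth classes of the predictors it defeats, and agents compared by set inclusion, yielding a partial order rather than a number line:

```python
# Hypothetical toy model of a Big-O-style intelligence taxonomy.
# Assumption (ours): each defeated predictor runs in polynomial time,
# so its Big-O class is captured by a single integer degree.

def taxonomy_rank(defeated_runtime_degrees):
    """Map an agent to the set of Big-O classes of predictors it defeats."""
    return {degree for degree in defeated_runtime_degrees}  # O(n^degree)

def at_least_as_intelligent(agent_a_degrees, agent_b_degrees):
    """Agent A ranks at least as high as B when A defeats every growth
    class that B defeats (set inclusion, hence only a partial order)."""
    return taxonomy_rank(agent_b_degrees) <= taxonomy_rank(agent_a_degrees)

# An agent defeating linear and quadratic predictors outranks one
# defeating only linear predictors...
print(at_least_as_intelligent({0, 1, 2}, {0, 1}))  # prints True
# ...but agents defeating incomparable sets of growth classes are not
# ranked against each other at all.
print(at_least_as_intelligent({0, 2}, {1}))        # prints False
```

The partial-order outcome is precisely what makes such taxonomies challenge the expectation that an intelligence measure should output a single number.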
Do we live in a computer simulation? I will present an argument that the results of a certain experiment constitute empirical evidence that we do not live in one particular type of simulation. The type of simulation ruled out is very specific. Perhaps that is the price one must pay to make any kind of Popperian progress.
We argue that C. Darwin and more recently W. Hennig worked at times under the simplifying assumption of an eternal biosphere. So motivated, we explicitly consider the consequences which follow mathematically from this assumption, and the infinite graphs it leads to. This assumption admits certain clusters of organisms which have some ideal theoretical properties of species, shedding some light on the species problem. We prove a dualization of a law of T.A. Knight and C. Darwin, and sketch a decomposition result involving the internodons of D. Kornet, J. Metz and H. Schellinx. A further goal of this paper is to respond to B. Sturmfels’ question, “Can biology lead to new theorems?”.
The Voluntary Simplicity Movement can be understood broadly as a diverse social movement made up of people who are resisting high consumption lifestyles and who are seeking, in various ways, a lower consumption but higher quality of life alternative. The central argument of this paper is that the Voluntary Simplicity Movement or something like it will almost certainly need to expand, organise, radicalise and politicise, if anything resembling a degrowth society is to emerge in law through democratic processes. In a sentence, that is the 'grass-roots' or 'bottom up' theory of legal and political transformation that will be expounded and defended in this paper. The essential reasoning here is that legal, political and economic structures will never reflect a post-growth ethics of macro-economic sufficiency until a post-consumerist ethics of micro-economic sufficiency is embraced and mainstreamed at the cultural level.
Reinhardt’s conjecture, a formalization of the statement that a truthful knowing machine can know its own truthfulness and mechanicalness, was proved by Carlson using sophisticated structural results about the ordinals and transfinite induction just beyond the first epsilon number. We prove a weaker version of the conjecture, by elementary methods and transfinite induction up to a smaller ordinal.
I present an argument that for any computer-simulated civilization we design, the mathematical knowledge recorded by that civilization has one of two limitations: either it is untrustworthy, or it is weaker than our own mathematical knowledge. This is paradoxical because it seems that nothing prevents us from building in all sorts of advantages for the inhabitants of said simulation.
Elementary patterns of resemblance notate ordinals up to the ordinal of Pi^1_1-CA_0. We provide ordinal multiplication and exponentiation algorithms using these notations.
We study the structure of families of theories in the language of arithmetic extended to allow these families to refer to one another and to themselves. If a theory contains schemata expressing its own truth and expressing a specific Turing index for itself, and contains some other mild axioms, then that theory is untrue. We exhibit some families of true self-referential theories that barely avoid this forbidden pattern.
In their thought-provoking paper, Legg and Hutter consider a certain abstraction of an intelligent agent, and define a universal intelligence measure, which assigns every such agent a numerical intelligence rating. We will briefly summarize Legg and Hutter’s paper, and then give a tongue-in-cheek argument that if one’s goal is to become more intelligent by cultivating music appreciation, then it is better to use classical music (such as Bach, Mozart, and Beethoven) than to use more recent pop music. The same argument could be adapted to other media: books beat films, card games beat first-person shooters, parables beat dissertations, etc. We leave it to the reader to decide whether this argument tells us something about classical music, something about Legg-Hutter intelligence, or something about both.
Can you find an xy-equation that, when graphed, writes itself on the plane? This idea became internet-famous when a Wikipedia article on Tupper’s self-referential formula went viral in 2012. Under scrutiny, the question has two flaws: it is meaningless (it depends on fonts) and it is trivial. We fix these flaws by formalizing the problem.
In his dissertation, Wadge defined a notion of guessability on subsets of the Baire space and gave two characterizations of guessable sets. A set is guessable if and only if it is in the second ambiguous class, if and only if it is eventually annihilated by a certain remainder. We simplify this remainder and give a new proof of the latter equivalence. We then introduce a notion of guessing with an ordinal limit on how often one can change one’s mind. We show that for every ordinal $\alpha$, a guessable set is annihilated by $\alpha$ applications of the simplified remainder if and only if it is guessable with fewer than $\alpha$ mind changes. We use guessability with fewer than $\alpha$ mind changes to give a semi-characterization of the Hausdorff difference hierarchy, and indicate how Wadge’s notion of guessability can be generalized to higher-order guessability, providing characterizations of ${\mathbf{\Delta}}^{0}_{\alpha}$ for all successor ordinals $\alpha > 1$.
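As a concrete toy instance of guessing with a bounded number of mind changes (our own example, not one from the paper): the open set of binary streams containing at least one 1 can be guessed with at most one mind change, by guessing "no" until a 1 appears and "yes" forever after.

```python
# Toy illustration of guessing membership of the open set
# "the stream contains at least one 1" from finite prefixes.
# The guesser starts at "no" and flips to "yes" at the first 1,
# so it changes its mind at most once on any stream.

def guess_sequence(prefix):
    """Return the guess made after each bit of the prefix."""
    guesses, seen_one = [], False
    for bit in prefix:
        seen_one = seen_one or bit == 1
        guesses.append(seen_one)
    return guesses

def mind_changes(prefix):
    """Count how often consecutive guesses disagree."""
    g = guess_sequence(prefix)
    return sum(1 for i in range(1, len(g)) if g[i] != g[i - 1])

print(mind_changes([0, 0, 1, 0, 1]))  # prints 1: one flip, at the first 1
print(mind_changes([0, 0, 0]))        # prints 0: the guess never changes
```

In the paper's terms, bounding the ordinal number of mind changes in this way is what ties guessability to the levels of the Hausdorff difference hierarchy.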
A biologically unavoidable sequence is an infinite gender sequence which occurs in every gendered, infinite genealogical network satisfying certain tame conditions. We show that every eventually periodic sequence is biologically unavoidable (this generalizes König's Lemma), and we exhibit some biologically avoidable sequences. Finally we give an application of unavoidable sequences to cellular automata.