Abstract
An unknown process is generating a sequence of symbols drawn from an alphabet A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being output by the unknown process is provided by its algorithmic probability (its probability of being output by a certain species of probabilistic Turing machine). However, algorithmic probability is a “semimeasure”: the sum, over all x ∈ A, of the conditional algorithmic probabilities of the next symbol being x may be less than 1. Prevailing wisdom has it that algorithmic probability must be normalized, to eradicate this semimeasure property, before it can yield acceptable probability estimates. This paper argues, to the contrary, that the semimeasure property contributes substantially to the power and scope of an algorithmic-probability-based theory of induction, and that normalization is unnecessary.
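The semimeasure property can be illustrated with a toy example. The sketch below is not Solomonoff’s actual construction; it is a hypothetical process over the alphabet A = {0, 1} in which, at each step, the machine may emit a symbol or may diverge and emit nothing. Because some probability mass is lost to divergence, the conditional probabilities of the next symbol sum to less than 1.

```python
from fractions import Fraction

# Hypothetical toy semimeasure over binary strings: at each step the
# process emits "0" with probability 1/2, emits "1" with probability 1/4,
# and diverges (emits no further symbol) with probability 1/4.
EMIT = {"0": Fraction(1, 2), "1": Fraction(1, 4)}  # 1/4 of the mass is lost

def M(x: str) -> Fraction:
    """M(x) = probability that the process emits x as an initial segment."""
    p = Fraction(1)
    for sym in x:
        p *= EMIT[sym]
    return p

prefix = "01"
# Conditional probability of each possible next symbol, given the prefix.
conditionals = {a: M(prefix + a) / M(prefix) for a in "01"}
total = sum(conditionals.values())
print(conditionals)  # {'0': Fraction(1, 2), '1': Fraction(1, 4)}
print(total)         # 3/4 < 1: the semimeasure property
```

Here the “missing” 1/4 of conditional probability corresponds to the possibility that the process never produces another symbol, which is exactly the feature the paper argues should be retained rather than normalized away.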