Abstract
Most standard results on structure identification in first-order theories depend upon the correctness and completeness (in the limit) of the data provided to the learner. These assumptions are essential for the reliability of inductive methods and for their limiting success (convergence to the truth). This paper investigates inductive inference from (possibly) incorrect and incomplete data. It is shown that such methods can be reliable not in the sense of truth approximation, but in the sense that they converge to "empirically adequate" theories, i.e., theories that are consistent with all data (past and future) and complete with respect to a given complexity class of L-sentences. Adequate theories of bounded complexity can be inferred uniformly and effectively by polynomial-time learning algorithms; adequate theories of unbounded complexity can be inferred pointwise by less efficient methods.