Learning Plans without a priori Knowledge
Abstract
This paper is concerned with the autonomous learning of plans in probabilistic domains, without a priori domain-specific knowledge. In contrast to existing reinforcement learning algorithms, which generate only reactive plans, and existing probabilistic planning algorithms, which require a substantial amount of a priori knowledge in order to plan, a two-stage bottom-up process is devised: first, reinforcement learning/dynamic programming is applied, without the use of a priori domain-specific knowledge, to acquire a reactive plan; then, explicit plans are extracted from the reactive plan. Several options for plan extraction are examined, each based on a beam search that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning/dynamic programming. Some completeness and soundness results are given. Examples in several domains are discussed that together demonstrate the workings of the proposed model.
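To make the two-stage idea concrete, the following minimal Python sketch illustrates it on a toy deterministic grid world. This is not the paper's implementation: the domain, rewards, discount factor, beam width, and names such as step and extract_plan are all illustrative assumptions. Stage 1 computes a value function by value iteration (a dynamic-programming stand-in for reinforcement learning); stage 2 extracts an explicit plan by a beam search that projects action sequences forward and ranks partial plans by the learned value of their projected end states.

    # Minimal sketch of the two-stage process; all domain details are assumptions.
    from itertools import product

    # Toy 4x4 grid world: states are (row, col); reaching GOAL yields reward 1.
    SIZE, GOAL, GAMMA = 4, (3, 3), 0.9
    ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    STATES = list(product(range(SIZE), range(SIZE)))

    def step(state, action):
        """Deterministic transition; moves off the grid leave the state unchanged."""
        r, c = state
        dr, dc = ACTIONS[action]
        return (max(0, min(SIZE - 1, r + dr)), max(0, min(SIZE - 1, c + dc)))

    # Stage 1: value iteration (dynamic programming) yields the value function V.
    V = {s: 0.0 for s in STATES}
    for _ in range(100):
        for s in STATES:
            if s == GOAL:
                continue
            V[s] = max((1.0 if step(s, a) == GOAL else 0.0) + GAMMA * V[step(s, a)]
                       for a in ACTIONS)

    # Stage 2: beam search performing restricted temporal projection, guided by V.
    def extract_plan(start, beam_width=3, max_len=10):
        if start == GOAL:
            return []
        beam = [([], start)]                 # (action sequence, projected state)
        for _ in range(max_len):
            candidates = []
            for plan, s in beam:
                for a in ACTIONS:
                    nxt = step(s, a)
                    if nxt == GOAL:
                        return plan + [a]    # first complete plan found
                    candidates.append((plan + [a], nxt))
            # keep only the partial plans whose projected end states score highest
            candidates.sort(key=lambda ps: V[ps[1]], reverse=True)
            beam = candidates[:beam_width]
        return None                          # no plan found within max_len steps

    print(extract_plan((0, 0)))              # e.g. a 6-step plan of downs and rights

In this sketch the beam search never enumerates all action sequences; the value function from stage 1 prunes the projection to a small beam, which is the sense in which the extraction is "restricted" and "guided". In a probabilistic domain, step would return a distribution over next states and the ranking would use expected values instead.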