Learning Plans without a priori Knowledge

Abstract

This paper is concerned with the autonomous learning of plans in probabilistic domains without a priori domain-specific knowledge. In contrast to existing reinforcement learning algorithms, which generate only reactive plans, and existing probabilistic planning algorithms, which require a substantial amount of a priori knowledge in order to plan, a two-stage bottom-up process is devised: first, reinforcement learning/dynamic programming is applied, without the use of a priori domain-specific knowledge, to acquire a reactive plan; then, explicit plans are extracted from the reactive plan. Several options for plan extraction are examined, each based on a beam search that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning/dynamic programming. Some completeness and soundness results are given. Examples in several domains are discussed that together demonstrate the workings of the proposed model.
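
To make the two-stage pipeline concrete, here is a minimal sketch, assuming a toy five-state probabilistic domain and using value iteration as a stand-in for the reinforcement-learning/dynamic-programming stage. The beam search then projects each candidate plan's belief state forward in time and ranks candidates by the learned value function. This is not the paper's actual algorithm: the domain, the action set, and parameters such as GAMMA, BEAM_WIDTH, and HORIZON are hypothetical, introduced purely for illustration.

```python
# A minimal sketch of the two-stage idea described in the abstract, NOT the
# paper's actual algorithm. The toy domain, the choice of value iteration
# as the dynamic-programming stage, and parameters such as GAMMA, BEAM_WIDTH,
# and HORIZON are all assumptions made for illustration.

GAMMA = 0.9        # discount factor (assumed)
BEAM_WIDTH = 2     # beam size for plan extraction (assumed)
HORIZON = 4        # maximum plan length (assumed)

# Hypothetical probabilistic domain: states 0..4, absorbing goal state 4.
# transitions[state][action] is a list of (next_state, probability) pairs.
transitions = {
    0: {"a": [(1, 0.8), (0, 0.2)], "b": [(2, 1.0)]},
    1: {"a": [(3, 0.9), (1, 0.1)], "b": [(0, 1.0)]},
    2: {"a": [(3, 0.5), (2, 0.5)], "b": [(4, 0.6), (2, 0.4)]},
    3: {"a": [(4, 1.0)], "b": [(1, 1.0)]},
    4: {},  # absorbing goal, no actions
}
reward = {4: 1.0}  # reward for being at the goal; zero elsewhere
ACTIONS = ("a", "b")

def value_iteration(eps=1e-6):
    """Stage 1: dynamic programming to obtain the value function."""
    V = {s: reward.get(s, 0.0) for s in transitions}
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            if not acts:
                continue  # absorbing state keeps its reward as its value
            v_new = max(
                sum(p * GAMMA * V[s2] for s2, p in outcomes)
                for outcomes in acts.values()
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < eps:
            return V

def extract_plan(start, V):
    """Stage 2: beam search guided by V.

    Each beam entry is (plan, belief); the belief is the distribution over
    states obtained by temporally projecting the plan's actions forward.
    """
    beam = [((), {start: 1.0})]
    for _ in range(HORIZON):
        candidates = []
        for plan, belief in beam:
            for a in ACTIONS:
                projected = {}
                for s, p in belief.items():
                    # Absorbing states simply stay put under any action.
                    for s2, p2 in transitions[s].get(a, [(s, 1.0)]):
                        projected[s2] = projected.get(s2, 0.0) + p * p2
                # Score each candidate by the expected learned value of its
                # projected belief state.
                score = sum(p * V[s] for s, p in projected.items())
                candidates.append((score, plan + (a,), projected))
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = [(plan, belief) for _, plan, belief in candidates[:BEAM_WIDTH]]
    return beam[0][0]

if __name__ == "__main__":
    V = value_iteration()
    print("Learned values:", {s: round(v, 3) for s, v in V.items()})
    print("Extracted plan from state 0:", extract_plan(0, V))
```

Capping the search at BEAM_WIDTH candidates per step is one plausible reading of performing temporal projection "in a restricted fashion": full projection over all action sequences would grow exponentially with the horizon, while a fixed-width beam keeps extraction tractable.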
