Beyond simple rule extraction: The extraction of planning knowledge from reinforcement learners
Abstract
This paper discusses learning in hybrid models that goes beyond simple rule extraction from backpropagation networks. Although simple rule extraction has received much research attention, further developing hybrid learning models that include both symbolic and subsymbolic knowledge and that learn autonomously requires studying the autonomous learning of both subsymbolic and symbolic knowledge in integrated architectures. This paper describes knowledge extraction from neural reinforcement learning, covering two approaches to extracting plan knowledge: the extraction of explicit, symbolic rules from neural reinforcement learners, and the extraction of complete plans. This work points toward a general framework for achieving the subsymbolic-to-symbolic transition in integrated autonomous learning.