OSCAR: A cognitive architecture for intelligent agents
Abstract
The “grand problem” of AI has always been to build artificial agents with human-like intelligence. That is the stuff of science fiction, but it is also the ultimate aspiration of AI. In retrospect we can appreciate what a difficult problem this is, and since its inception AI has focused instead on small, manageable problems, in the hope that progress there will have useful implications for the grand problem. Now there is a resurgence of interest in tackling the grand problem head-on. Perhaps AI has made enough progress on the small problems that we can fruitfully address the big one. The objective is to build agents of human-level intelligence capable of operating in environments of real-world complexity. I will refer to these as GIAs (“generally intelligent agents”). OSCAR is a cognitive architecture for GIAs, implemented in LISP.1 OSCAR draws heavily on my work in philosophy concerning both epistemology (Pollock 1974, 1986, 1990, 1995, 1998, 2008, 2008b; Pollock and Cruz 1999; Pollock and Oved 2005) and rational decision making (Pollock 2005, 2006, 2006a).