Abstract
Rational agents are important objects of study in several research communities, including economics, philosophy, cognitive science, and, most recently, computer science and artificial intelligence. Crudely, a rational agent is an entity that is capable of acting on its environment, and that chooses to act in such a way as to further its own best interests. There has recently been much interest in the use of mathematical logic for developing formal theories of such agents. Such theories view agents as practical reasoning systems, deciding moment by moment which action to perform next, given the beliefs they have about the world and their desires with respect to how they would like the world to be. In this article, we survey the state of the art in developing logical theories of rational agency. Following a discussion of the dimensions along which such theories can vary, we briefly survey the logical tools available for constructing them. We then review and critically assess three of the best-known theories of rational agency: Cohen and Levesque's intention logic, Rao and Georgeff's BDI logics, and the KARO framework of Meyer et al. We then discuss the various roles that such logics can play in helping us to engineer rational agents, and conclude with a discussion of open problems.