Abstract
I sketch my general model of the roles of intentions in the planning of agents like us: agents with substantial resource limitations and with important needs for coordination. I then focus on the stability of prior intentions: their rational resistance to reconsideration. I emphasize the importance of cases in which one's nonreconsideration of a prior intention is nondeliberative and is grounded in relevant habits of reconsideration. Concerning such cases I argue for a limited form of two-tier consequentialism, one that is restricted in ways that aim at blocking an analogue of Smart's concerns about rule-worship. I contrast this with the unrestricted two-tier consequentialism suggested by McClennen. I argue that my restricted approach is superior for a theory of the practical rationality of reflective, planning agents like us. But I also conjecture that an unrestricted two-tier consequentialism may be more appropriate for the AI project of specifying a high-level architecture for a resource-bounded planner.