Abstract
This article delineates two dimensions along which computational models of face processing may vary, and briefly reviews three such models: the Dailey and Cottrell model, the O'Reilly and Munakata model, and the Riesenhuber and Poggio model. It focuses primarily on one of these models and shows how it has been used to reveal potential mechanisms underlying the neural processing of faces and objects: the development of a specialized face processor, how that processor could be recruited for other domains, hemispheric lateralization of face processing, facial expression processing, and the development of face discrimination. It then turns to the Riesenhuber and Poggio model to describe the elegant way it has been used to predict functional magnetic resonance imaging data on face processing. The overall strategy of these modeling efforts is to sample problems that are constrained by neurophysiological and behavioral data, and to stress the ways in which models can generate novel hypotheses about how humans process faces.