Research in face recognition has tended to focus on discriminating between individuals, or “telling people apart.” It has recently become clear that it is also necessary to understand how images of the same person can vary, or “telling people together.” Learning a new face, and tracking its representation as it changes from unfamiliar to familiar, involves an abstraction of the variability in different images of that person's face. Here, we present an application of principal components analysis (PCA) computed across different photos of the same person. We demonstrate that people vary in systematic ways, and that this variability is idiosyncratic: the dimensions of variability in one face do not generalize well to another. Learning a new face therefore entails learning how that face varies. We present evidence for this proposal and suggest that it provides an explanation for various effects in face recognition. We conclude by making a number of testable predictions derived from this framework.
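To make the approach concrete, here is a minimal sketch of within-person PCA: principal components computed over many photos of a single identity, so that each component captures one way that particular face varies. This is an illustration under stated assumptions, not the authors' analysis pipeline; the array of aligned, same-sized grayscale photos is hypothetical.

```python
import numpy as np

def within_person_pca(photos: np.ndarray, n_components: int = 5):
    """photos: array of shape (n_images, height, width), many photos of ONE person."""
    n, h, w = photos.shape
    X = photos.reshape(n, h * w).astype(np.float64)
    mean_face = X.mean(axis=0)            # this person's average image
    Xc = X - mean_face                    # variation around that average
    # SVD of the centred data yields the principal components directly.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]        # dimensions along which THIS face varies
    explained = (S ** 2) / (S ** 2).sum()
    return mean_face, components, explained[:n_components]

# Illustrative usage with random stand-ins for 40 aligned 64x64 photos.
rng = np.random.default_rng(0)
photos = rng.normal(size=(40, 64, 64))
mean_face, comps, var = within_person_pca(photos)

# Idiosyncrasy, on this view, could be probed by projecting person B's centred
# photos onto person A's components: poor reconstruction of B would indicate
# that A's dimensions of variability do not transfer.
```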
This article describes some differences between familiar and unfamiliar face processing. It presents the evidence that unfamiliar face recognition is poor. Since this poor performance has both practical and theoretical implications, it is important to establish the facts. The article analyses why people appear to have little insight into their own poor performance with unfamiliar faces, and why some sectors of society seem so keen to use faces as a means of proving identity. It reviews some historical research comparing familiar and unfamiliar face processing. It presents evidence that, even when the memory load of normal eyewitness situations is eliminated, people are surprisingly bad at matching two images of the same unfamiliar person. Finally, it suggests that the modern tendency to conflate familiar and unfamiliar face processing, and to theorize about “face recognition” in general, lies at the heart of practical failures in this field.
It is well established that retrieval of names is harder than retrieval of other identity-specific information. This paper offers a review of the more influential accounts put forward to explain why names are so difficult to retrieve. A series of five experiments tests a number of these accounts. Experiments One to Three examine the claims that names are hard to recall because they are typically meaningless, or unique. Participants are shown photographs of unfamiliar or familiar people and given three pieces of information about each: a name, a unique piece of information, and a shared piece of information. Learning follows an incidental procedure, and participants are given a surprise recall test. In each experiment, shared information is recalled most often, followed by unique information, followed by names. Experiment Four tests both the 'uniqueness' account and an account based on the specificity of the naming response. Participants are presented with famous faces and asked to categorise them by semantic group. Results indicate that less time is needed to perform this task when the group is a subset of a larger semantic category. A final experiment examines the claim that names might take longer to access because they are retrieved less often than other classes of information. Latencies show that participants remain more efficient when categorising faces by occupation than by name, even when they have received extra practice at naming the faces. We conclude that the explanation best able to account for the data is that names are stored separately from other semantic information and can only be accessed after other identity-specific information has been retrieved. However, we also argue that the demands we make of these explanations make it likely that no single theory will account for all existing data.
Distributed representations can be distributed in many different ways. The choice of representation for a particular model rests on considerations unique to the area of study, so general statements about the effectiveness of distributed models are of little value. The popularity of these models is discussed, particularly with respect to reporting conventions.
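As a toy illustration of the first point (our construction, not drawn from the article), the sketch below encodes the same items under two different random linear schemes. Both codes are fully distributed and both roughly preserve the items' similarity structure, yet they share no dimensions, so unit-level claims about one scheme say nothing about the other.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(10, 50))     # 10 items described by 50 input features

# Two unrelated distributed encodings of the same items: in each scheme every
# output unit carries a little information about every item.
W1 = rng.normal(size=(50, 20))
W2 = rng.normal(size=(50, 20))
codes1 = items @ W1
codes2 = items @ W2

def similarity(codes: np.ndarray) -> np.ndarray:
    """Item-by-item correlation structure under a given encoding."""
    return np.corrcoef(codes)

# The two schemes agree (imperfectly) on the similarity structure, while
# disagreeing entirely on how that structure is spread across units.
agreement = np.corrcoef(similarity(codes1).ravel(), similarity(codes2).ravel())[0, 1]
print(f"similarity-structure agreement: {agreement:.2f}")
```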