This project represents a new way to look at the problem of human face recognition. Despite a large amount of research on this topic, we still do not understand the most fundamental aspect of face processing: how can we identify the people we see? This is a key problem in human perception, but it also has practical implications in forensic and security settings. This project has its roots in a simple observation: pictures of the same face can look very different indeed. In the standard approach to face recognition, this commonplace fact is treated as an inconvenience. Differences between pictures of the same person are regarded as noise, and either ignored, or eliminated by systematically controlling the images used for research. This research programme takes exactly the converse approach. Instead of trying to control away this variability, we wish to study it explicitly.
Under this approach, the focus is not how to tell people apart, but instead how to tell people together: how to bring superficially different images together into a coherent representation. Early work suggests that a very important component of familiar face recognition is the ability to generalize over superficial image differences, differences which tend to fool unfamiliar viewers as well as automatic computer-based systems. The current failure to address this variability may account for the slow progress in face identification, progress which has fallen behind our understanding of other aspects of face processing, such as social perception. By studying this missing component of face recognition, the project will develop a novel theoretical model with the potential to make a significant contribution.