In recent years there has been significant progress in the automatic interpretation of
images of human faces. The applications of such techniques include: Access Control,
Behaviour Monitoring, Expression Recognition, Image Enhancement, Synthesis, Database
Indexing, and many others. Existing techniques tend to address each of these areas
individually, producing highly specific solutions to well-constrained problems. The aim of
the Faces Project in the Wolfson Unit is to develop a generic approach to all these
applications, providing a single, unified system capable of performing any face
interpretation task in a wide range of circumstances, from high-resolution stills to
low-resolution and noisy video images.
The difficulty in understanding faces comes from the large degree of variability possible
in images of any face. Our system addresses the interpretation problem by `learning' about
this variability from large sets of training data. In particular, the system must learn
about variation in:
- Identity
- Expression
- Viewpoint
- Lighting
Statistical techniques have allowed us to build a
photo-realistic model of faces, incorporating all this variation. (See the middle image at
the top of this page.) This model not only handles all these types of variation, but also separates them into their distinct sources, revealing the Identity, Expression, Viewpoint and Lighting of any face image.
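The core of such a statistical model can be sketched with principal component analysis: each training face is flattened into a vector, and the main modes of variation are extracted from the centred training set. The sketch below uses random placeholder data rather than real face images, and the model size and helper names are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

# Minimal PCA sketch of a statistical appearance model.
# The training matrix is random placeholder data, not real face images.
rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 64 * 64))   # 100 hypothetical 64x64 face vectors

mean_face = faces.mean(axis=0)
centred = faces - mean_face

# Principal components (modes of variation) via SVD of the centred data.
_, singular_values, modes = np.linalg.svd(centred, full_matrices=False)

def project(face, n_modes=20):
    """Encode a face as parameters along the first n_modes modes."""
    return modes[:n_modes] @ (face - mean_face)

def reconstruct(params):
    """Synthesise a face vector from model parameters."""
    return mean_face + modes[:len(params)].T @ params

params = project(faces[0])
approx = reconstruct(params)
```

Truncating to a small number of modes is what makes the model compact: a face is summarised by a few parameters rather than thousands of pixel values.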
We have recently developed a novel method of locating deformable objects (such as faces) in images. These models, known as Active Appearance Models, are the subject of ongoing research. We have applied this approach to face images and shown that, using the model parameters for classification, we can obtain good results for person identification and expression recognition on a very difficult training and test set of still images.
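Once a model has been fitted to an image, its parameter vector can be classified directly. The sketch below uses a simple nearest-neighbour rule on synthetic parameter vectors to stand in for person identification; the gallery, dimensions, and function names are illustrative assumptions, not the classifiers actually evaluated in this work.

```python
import numpy as np

# Sketch: identify a face by comparing its fitted model parameters
# against a gallery of known people's parameter vectors.
rng = np.random.default_rng(1)
n_people, n_params = 5, 20
gallery = rng.normal(size=(n_people, n_params))  # one vector per known person

def identify(params):
    """Return the index of the gallery entry closest to the fitted parameters."""
    distances = np.linalg.norm(gallery - params, axis=1)
    return int(np.argmin(distances))

# A probe close to person 3's parameters should be identified as person 3.
probe = gallery[3] + 0.01 * rng.normal(size=n_params)
print(identify(probe))  # 3
```

Because the model parameters already summarise the face compactly, even such a simple classifier can be effective; more sophisticated classifiers operate on the same parameter space.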
We have also demonstrated how this method can be
used in the interpretation of video sequences. The aim is to improve recognition
performance by integrating evidence over many frames. A face appearance model can be
partitioned to give sets of parameters that independently vary identity, expression, pose
and lighting. We exploit this idea to obtain an estimate of identity which is independent
of other sources of variability and can be straightforwardly filtered to produce an
optimal estimate of identity. This leads to a stable estimate of ID, even in the presence
of considerable noise. This approach can be used to produce high-resolution visualisation
of poor quality sequences.
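The idea of integrating identity evidence over many frames can be illustrated with a recursive mean of per-frame identity parameter estimates: each frame's noisy estimate nudges the running estimate, which converges to a stable value. The values below are synthetic placeholders, and this simple filter is only a stand-in for the filtering actually used.

```python
import numpy as np

# Sketch: per-frame identity estimates are noisy, but a running
# (recursive) mean over the sequence converges to a stable estimate.
rng = np.random.default_rng(2)
true_identity = np.array([1.0, -2.0, 0.5])  # hypothetical identity parameters

estimate = np.zeros(3)
for frame_index in range(1, 201):
    # Noisy per-frame fit of the identity parameters.
    observation = true_identity + rng.normal(scale=0.5, size=3)
    # Recursive mean: equivalent to averaging all observations so far.
    estimate += (observation - estimate) / frame_index
```

Because identity is separated from expression, pose and lighting in the model, the filtered quantity is genuinely constant over the sequence, which is what makes this averaging valid.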
The `Faces Group' aims to bring together several of our recent advances in a real-time demonstrator. This state-of-the-art system will interpret video sequences of faces and even recognise faces where the picture quality is too low for a human observer to do so. There remain many interesting areas of investigation for new research:
- Models of Dynamic Behaviour
- Biometric Data Fusion (audio and video)
- Automatic Model Building
- Fast/Reliable Initial Detection
- Non-Linear/Multi-Part Models
- Synthetic Face Generation
Funding for this project is provided by the EPSRC and British Telecom PLC.
The Faces Group welcomes inquiries from any interested students. To speak to people
working on Faces projects, contact:
Tim Cootes: email@example.com
Louise Butcher: Louise.Butcher@man.ac.uk
Copyright © 1999 - 2002 The University of Manchester.
Page last updated: 27 November, 2002