Video-based face recognition

In video-based face recognition, the goal is to represent and recognize faces in terms of both their spatial structure and their facial dynamics (spatio-temporal representations). This is motivated by neuropsychological studies indicating that the Human Visual System (HVS) uses both fixed features and dynamic personal characteristics to recognize faces, and that facial dynamics support face recognition especially under degraded viewing conditions such as poor illumination, low image resolution, and recognition at a distance. However, most video-based systems do not follow this goal: they ignore facial dynamics and only exploit the abundance of structural information across frames. For instance, some methods select the best frame and then apply still-image techniques, while others perform still-image recognition on several frames and then combine the recognition results. In this research work, we aim to investigate the combination of spatial and temporal information and propose a new approach that incorporates both sources of information.

Computer vision using LBP


Earlier, our group proposed a simple and efficient texture operator called Local Binary Patterns (LBP). The LBP approach codifies local micro-patterns such as spots, edges, and corners, and collects their occurrences into a histogram. Recently, the LBP representation has been successfully used in different facial image analysis tasks such as face detection, face recognition, facial feature extraction, facial expression recognition, facial keypoint representation, gender classification, and face authentication.
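As a minimal sketch of the idea (a toy re-implementation, not the published operator; the function name, the clockwise bit order, and the basic 8-neighbour variant are illustrative choices), the LBP code image and its 256-bin histogram can be computed as follows:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP codes and their 256-bin histogram.

    img: 2-D array of grey values. Illustrative only; practical systems
    typically use the uniform-pattern and multi-scale variants.
    """
    c = img[1:-1, 1:-1]  # centre pixels (interior of the image)
    # The 8 neighbours of each centre pixel, enumerated clockwise
    # from the top-left corner of the 3x3 neighbourhood.
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        # Threshold each neighbour against the centre and set one bit.
        codes |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return codes, hist

# A flat patch: every neighbour equals its centre, so all 8 bits are set.
flat = np.full((5, 5), 7, dtype=np.uint8)
codes, hist = lbp_histogram(flat)
print(codes[0, 0])   # 255
print(hist[255])     # 9 (the 3x3 interior of a 5x5 image)
```

The histogram, rather than the code image itself, is what serves as the texture descriptor in the tasks listed above.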

Mobile vision


Recently, I have been looking specifically at the problem of face detection and recognition from low-resolution images. Such conditions are often encountered in video surveillance and in photographs taken by mobile devices, for instance. Several scenarios illustrate the usefulness of face detection in mobile phones. For instance, as digital cameras in mobile devices become increasingly popular, managing large image databases is becoming a major problem for users. A significant part of these photographs is likely to include human faces, so there is a need for new technologies that facilitate the browsing and retrieval of images containing faces on mobile devices. Another potential application is the use of face detection as the first step of a face verification process for locking/unlocking the phone: instead of asking the user to enter his/her PIN code, the system can automatically take a photo of the user and compare it with a previously stored authorized face. Note that the application of face detection technology in mobile phones is not limited to these examples.

Color image analysis


Grayscale-based methods in computer vision (and specifically in face analysis) do not exploit the advantages of the color cue, namely computational efficiency and robustness against some geometric changes, such as rotation and scaling, when the scene is observed under a uniform illumination field. Color-based methods, on the other hand, rely strongly on the color cue, which is sensitive to illumination changes (especially changes in the chromaticity of the illuminant, which are difficult to cancel out). My aim is to combine the advantages of both categories of methods, inheriting speed from the color-based methods and accuracy from the grayscale-based ones.
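The illumination trade-off can be sketched with intensity-normalized rg chromaticities (a standard normalization, used here purely as an illustration; the function name is mine): dividing by the brightness cancels a uniform scaling of the illuminant, but not a change in its chromaticity, which is exactly the weakness noted above.

```python
import numpy as np

def rg_chromaticity(rgb):
    """Intensity-normalised chromaticities r = R/(R+G+B), g = G/(R+G+B).

    Division by brightness cancels a uniform intensity scaling of the
    illuminant, but not a shift in its chromaticity.
    """
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0          # avoid division by zero on black pixels
    return rgb[..., :2] / s  # keep (r, g); b = 1 - r - g is redundant

pixel = np.array([[[100, 50, 50]]])
dimmed = pixel * 0.5  # the same scene under half the illumination intensity
print(rg_chromaticity(pixel)[0, 0])  # r = 0.5, g = 0.25
# The chromaticities are unchanged by the intensity scaling:
assert np.allclose(rg_chromaticity(pixel), rg_chromaticity(dimmed))
```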

Manifold learning


Generally, face images are treated as "single" or "isolated" patterns in the high-dimensional image space. However, they lie on a smooth manifold of much lower dimension. I am studying different learning algorithms and new schemes to analyze the face manifold and exploit it for further analysis: detection, recognition, visualization, etc. In this context, in earlier work I proposed a new scheme to extract face models from face sequences for building appearance-based face recognition systems.

Smart environments


The goal is to investigate the capabilities of machine vision in proactive computing and to develop the solutions needed for building emerging applications. For instance, knowledge of whether a person is sitting, standing, or walking is likely to matter, as well as how the person is using their hands, which way they are facing, and whether they are talking. Based on this kind of information, the environment may adapt to the activities and provide the desired responses to relevant events.

Soft biometrics

Soft biometrics are human characteristics such as gender, eye color, ethnicity, age, height, the length of arms and legs, body proportions, gait, and gestures. These soft biometric traits are usually easier to capture from a distance and do not require cooperation from the subject. They provide some information about an individual but lack the distinctiveness and permanence needed to reliably differentiate any two individuals. Height, for example, can be estimated from a sequence of real-time images captured as the user moves into the camera's view, while gender, ethnicity, and age can be derived from a facial image or video of the user.

Image and video descriptors

Feature (or descriptor) extraction from images and videos is a crucial task in almost all computer vision systems. It consists of extracting characteristics that describe the important information in the images and videos. Global (or holistic) methods such as Principal Component Analysis (PCA) have been widely studied and applied, but lately local descriptors (such as LBP, SIFT, and Gabor features) have gained more attention due to their robustness to challenges such as pose and illumination changes.
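The robustness claim can be illustrated with a toy SIFT-flavoured descriptor (illustrative only, not actual SIFT; the function name and parameters are my own): a histogram of gradient orientations is unchanged by an additive illumination offset, whereas the raw pixel values are not.

```python
import numpy as np

def orientation_histogram(img, bins=8):
    """Toy local descriptor: gradient-orientation histogram weighted by
    gradient magnitude, normalised to sum to 1. Illustrative only."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)  # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total > 0 else hist

rng = np.random.default_rng(1)
patch = rng.random((16, 16)) * 100
brighter = patch + 40  # additive illumination change
# Differentiation removes the additive offset, so the descriptor is stable...
assert np.allclose(orientation_histogram(patch), orientation_histogram(brighter))
# ...whereas the raw pixel values obviously differ.
assert not np.allclose(patch, brighter)
```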

Human-computer interaction

(c) Copyright 2010 Abdenour Hadid.