The 3D Active Shape Model Construction Using 3D Morphable Model to Generate Data


Literature Review about Automatic 2D Landmark Location

Many algorithms have been proposed for facial landmark location in 2D images. As suggested by Hamouz et al. (Hamouz 2005) [30], they can be classified into two categories: image-based and structure-based methods. In image-based methods, faces are treated as vectors in a large space, and these vectors are then transformed. The most popular transformations are Principal Components Analysis (PCA), Gabor Wavelets (Fasel 2002, Vukadinovic 2005) [24, 55, 70], Independent Components Analysis (Antonini 2003) [3], Discrete Cosine Transform (Salah 2006) [55], and Gaussian Derivative Filters (Arca 2006, Gourier 2004) [4, 26]. Through these transforms, the variability of facial features is captured, and machine learning approaches such as boosted cascade detectors (Viola 2001) [69], Support Vector Machines (Chunhua 2008) [21] and Multi-Layer Perceptrons are used to learn the appearance of each landmark. Examples of such methods are proposed by Viola and Jones [69], Jesorsky et al. [34], and Hamouz et al. [30]. Structure-based methods use prior knowledge about facial landmark positions and constrain the landmark search with heuristic rules involving angles, distances, and areas. The face is represented by a complete appearance model consisting of points and arcs connecting these points (Shakunaga 1998) [58]. A feature vector is associated with each point of this model. Typical methods include Active Shape Models (ASM) (Cootes 1995, Ordas 2003) [19, 47], Active Appearance Models (AAM) (Cootes 2004) [20], and Elastic Bunch Graph Matching (Wiskott 1997, Monzo 2008) [45, 73].
These methods are well suited for precise localization (Milborrow 2008) [44]. Among structure-based models, one outstanding approach is the Active Shape Model (ASM) [19], because of its simplicity and robustness.

Reminder about the Original Active Shape Model (ASM)

The original Active Shape Model (ASM) was introduced by Cootes et al. in 1995 [19]. It is a model-based approach in which a priori information about the class of objects is encoded into a template. Such a template is user-defined, which allows ASM to be applied to any class of objects that can be represented with a fixed topology, such as faces.
The face template can be considered as a collection of contours, each contour being defined as the concatenation of certain key points, known in the shape analysis literature as landmarks (see Figure 2.1). The deformation of the landmarks allowed by the model template is learnt from a training database. As a result, an important property of ASM is that they are generative models: once trained, ASM can reproduce samples observed in the training database and, additionally, generate new instances of an object not present in the database but consistent with the statistics learnt from it.

The original ASM introduced by Cootes et al. in 1995 [19] relies on two statistical models that exploit prior knowledge about the global shape and the local texture during the segmentation process: the Point Distribution Model (PDM) and the Local Texture Model (LTM). The PDM represents the mean geometry of a shape and its statistical variations estimated from the training set of shapes, while the LTM describes the texture variations at each landmark position of the PDM.
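To make the PDM construction concrete, the following sketch builds a mean shape and its principal modes of variation from a set of already aligned training shapes, following the standard ASM formulation x = x_mean + P b. This is a minimal sketch rather than the thesis implementation: the flattened (N, 2K) array layout, the 98% variance threshold and the ±3√λ limits on the shape parameters b are illustrative assumptions, and Procrustes alignment of the shapes is assumed to have been performed beforehand.

```python
import numpy as np

def build_point_distribution_model(shapes, variance_kept=0.98):
    """Build a PDM from aligned training shapes.

    `shapes` is assumed to be an (N, 2K) array: N training faces, each
    flattened as [x1, y1, ..., xK, yK] and already aligned (e.g. by
    Procrustes analysis); the alignment step is omitted here.
    """
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape

    # PCA on the aligned shapes: eigen-decomposition of the covariance matrix
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Keep enough modes to explain e.g. 98% of the shape variance (assumption)
    cum = np.cumsum(eigvals) / eigvals.sum()
    t = int(np.searchsorted(cum, variance_kept)) + 1

    P = eigvecs[:, :t]                           # shape eigenvectors
    b_limits = 3.0 * np.sqrt(eigvals[:t])        # usual +/- 3 sqrt(lambda) bounds
    return mean_shape, P, b_limits

def generate_shape(mean_shape, P, b):
    """Generate a shape instance x = mean_shape + P @ b (generative property)."""
    return mean_shape + P @ b
```

Clamping each component of b to its ±3√λ limit before calling generate_shape keeps synthesized shapes within the plausible range learnt from the training set.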

Combined Active Shape Model (C-ASM) Based on Facial Internal Region Model and Facial Contour Model

One novelty of our work is the application of different feature descriptors to different landmarks on the face. As shown above, using the SIFT feature descriptor we can find correspondences between landmarks in two images with small pose variability, even when the landmarks used to train the ASM are in 2D. The points in the face region that we denote as "internal" (such as the eye corners) can be considered as the perspective projection of the 3D face onto the image plane. The contour points are different: they depend much more on the 3D viewpoint. In that case the SIFT descriptor does not work when the acquisition angle of the test images differs from that of the training images, especially when those points are occluded by minor head rotations (see Figure 2.6). In our proposed approach, two models are therefore used to represent the human face. One model represents the landmarks of what we call the "internal region", including the landmarks on the eyes, nose, eyebrows and mouth. Those points can be considered invariant in 3D position under perspective projection, so we use the SIFT descriptor for this model and name it the facial internal region model. The other model covers only the contour points of the face. For those points, SIFT representations would result in wrong matches; the gradient of the profile is better suited to them, so we describe them with a Grey-Level Profile and name this model the facial contour model.
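As an illustration of this two-model split, the sketch below chooses a descriptor according to the landmark's region: a SIFT descriptor (computed here with OpenCV) for internal-region points, and the normalized derivative of the grey-level profile sampled along the contour normal for contour points, as in the original ASM local texture model. The landmark index ranges, the profile half-length and the use of OpenCV's SIFT are assumptions made for illustration, not the exact configuration of the proposed C-ASM.

```python
import cv2
import numpy as np

# Hypothetical split of landmark indices into "internal" (eyes, nose,
# eyebrows, mouth) and "contour" points; the real split depends on the
# annotation scheme used to train the model.
INTERNAL_IDX = range(0, 48)
CONTOUR_IDX = range(48, 68)

def sift_descriptor(gray, point, patch_size=16.0):
    """128-D SIFT descriptor computed at a fixed landmark location.

    Assumes the landmark is far enough from the image border for SIFT
    to return a descriptor.
    """
    sift = cv2.SIFT_create()
    kp = [cv2.KeyPoint(float(point[0]), float(point[1]), patch_size)]
    _, desc = sift.compute(gray, kp)
    return desc[0]

def grey_level_profile(gray, point, normal, half_len=7):
    """Normalized first derivative of the grey-level profile sampled
    along the contour normal (original ASM local texture model)."""
    ts = np.arange(-half_len, half_len + 1)
    xs = np.clip((point[0] + ts * normal[0]).astype(int), 0, gray.shape[1] - 1)
    ys = np.clip((point[1] + ts * normal[1]).astype(int), 0, gray.shape[0] - 1)
    profile = gray[ys, xs].astype(float)
    g = np.diff(profile)                      # gradient along the normal
    return g / (np.abs(g).sum() + 1e-8)       # normalization for lighting invariance

def landmark_descriptor(gray, idx, point, normal):
    """Pick the descriptor type according to the landmark's region."""
    if idx in INTERNAL_IDX:
        return sift_descriptor(gray, point)
    return grey_level_profile(gray, point, normal)
```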


Table of contents:

1 Introduction 
1.1 Thesis Outline
2 Automatic 2D Facial Landmark Location with a Combined Active Shape Model and Its Application for 2D Face Recognition 
2.1 Introduction
2.2 Literature Review about Automatic 2D Landmark Location
2.3 Reminder about the Original Active Shape Model (ASM)
2.3.1 Point Distribution Model
2.3.2 Local Texture Model
2.3.3 Matching algorithm
2.4 A Combined Active Shape Model for Landmark Location
2.4.1 Using SIFT Feature Descriptor as Local Texture Model
2.4.2 Combined Active Shape Model (C-ASM) Based on Facial Internal Region Model and Facial Contour Model
2.5 Experiments for C-ASM Landmark Location Precision Evaluation
2.5.1 Experimental Protocol for Landmark Location Precision
2.5.2 Experimental Results for Landmark Location Precision
2.5.3 Experimental Discussion
2.6 Application for 2D Face Recognition
2.6.1 Fully Automatic Face Recognition with Global Features
2.6.2 Face Recognition Databases
2.6.3 Face Verification Experimental Results
2.7 Conclusions
3 Automatic 2D Facial Landmark Location using 3D Active Shape Model 
3.1 Introduction
3.2 Literature Review about Facial Landmark Location across Pose
3.3 3D Active Shape Model Construction
3.3.1 3D Point Distribution Model
3.3.2 3D view-based Local Texture Model
3.4 2D Landmark Location: Fitting the 3D Active Shape Model to 2D Images
3.4.1 Framework of Matching Algorithm
3.4.2 Shape and Pose Parameters Optimization
3.5 How to Synthesize Training Data from 3D Morphable Model
3.5.1 Reminder about 3D Morphable Model
3.5.2 The 3D Active Shape Model Construction Using 3D Morphable Model to Generate Data
3.6 Databases
3.6.1 Training Database for the 3D-ASM
3.6.2 Evaluation Databases
3.7 Experimental Setup and Results
3.7.1 Evaluation Using the Real Scans
3.7.2 Evaluation Using Randomly Generated 3D Faces
3.8 Discussion
4 Automatic 3D Face Reconstruction from 2D Images 
4.1 Introduction
4.2 Literature Review Related to 3D Face Reconstruction and Its Evaluation
4.2.1 3D Face Reconstruction
4.2.2 Automatic 2D Facial Landmark Location for 3D Face Reconstruction
4.2.3 Evaluation of the Quality of the 3D Face Reconstruction
4.3 Automatic 3D Face Reconstruction from Nonfrontal Face Images
4.3.1 Automatic 2D Face Landmark Location with 3D-ASM
4.3.2 3DMM Initialization Using Detected 2D Landmarks
4.3.3 3D Face Reconstruction by Model Fitting
4.4 Evaluation Method for 3D Face Reconstruction
4.5 Database and Experimental Results
4.5.1 Databases
4.5.2 Experimental Results
4.6 Influence of View Point Change to the 3D Face Reconstruction Results
4.7 Influence of Image Quality to the 3D Face Reconstruction Results
4.8 Influence of Texture Mapping Strategies to the 3D Face Reconstruction Results
4.9 Conclusions
5 2D Face Recognition across Pose using 3D Morphable Model 
5.1 Introduction
5.2 Brief Literature Review about Face Recognition across Pose Problem
5.3 Background of Automatic 2D Face Reconstruction Across Pose
5.3.1 Experimental Protocol
5.4 ICP Distance of 3D Surfaces Based Measure
5.4.1 The ICP Distance Measure
5.4.2 Experimental Results of the ICP Distance Measure on a Subset of PIE Database
5.5 Face Identification with 3D Shape and Texture Parameters
5.5.1 Experimental Results of Face Identification with 3D Shape and Texture Parameters on a Subset of the PIE Database
5.6 Viewpoint Normalization Approach
5.6.1 Texture Extracted from Images or Synthesized Texture from 3DMM
5.6.2 Experimental Result
5.7 Conclusions
6 Conclusions and Future Work 
6.1 Achievements and Conclusion
6.2 Future work
Bibliography
