Material Detail
Let the Shape Speak - Discriminative Face Alignment using Conjugate Priors
This video was recorded at the British Machine Vision Conference (BMVC), Surrey 2012.

This work presents a novel Bayesian formulation for aligning faces in unseen images. Our approach is closely related to Constrained Local Models (CLM) and Active Shape Models (ASM), where an ensemble of local feature detectors is constrained to lie within the subspace spanned by a Point Distribution Model (PDM). Fitting a model to an image typically involves two steps: a local search using a detector, which obtains response maps for each landmark (the likelihood term), and a global optimization that finds the PDM parameters that jointly maximize all the detection responses. The global optimization can be cast as a Bayesian inference problem, where the posterior distribution of the PDM parameters (including pose) is inferred in a maximum a posteriori (MAP) sense. Faces are nonrigid structures that undergo continuous dynamic transitions, so it is crucial to account for the underlying dynamics of the shape. We present a novel Bayesian global optimization strategy in which the prior encodes the dynamic transitions of the PDM parameters. Using recursive Bayesian estimation, we model the prior distribution of the data as Gaussian, with the mean and covariance assumed unknown and treated as random variables. This means we estimate not only the mean and the covariance but also the probability distribution of the mean and the covariance (using conjugate priors). Extensive evaluations were performed on several standard datasets (IMM, BioID, XM2VTS and FGNET Talking Face) against state-of-the-art methods using the same local detectors. Finally, qualitative results from the challenging Labeled Faces in the Wild (LFW) dataset are also shown.
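The abstract says the mean and covariance of the Gaussian prior over PDM parameters are themselves treated as random variables and estimated with conjugate priors. The standard conjugate choice for a Gaussian with unknown mean and covariance is the Normal-Inverse-Wishart distribution; the sketch below shows a minimal recursive update under that assumption only. It is not code from the paper, and the function names (`niw_update`, `expected_gaussian`) and hyperparameter values are illustrative.

```python
import numpy as np

def niw_update(mu0, kappa0, nu0, psi0, X):
    """Conjugate Normal-Inverse-Wishart update for a Gaussian with unknown
    mean and covariance, given a batch of observed parameter vectors X
    (one row per frame).  Returns the updated NIW hyperparameters."""
    n, d = X.shape
    xbar = X.mean(axis=0)
    diff = X - xbar
    scatter = diff.T @ diff                     # sum of squared deviations about the sample mean

    kappa_n = kappa0 + n
    nu_n = nu0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    dm = (xbar - mu0).reshape(-1, 1)
    psi_n = psi0 + scatter + (kappa0 * n / kappa_n) * (dm @ dm.T)
    return mu_n, kappa_n, nu_n, psi_n

def expected_gaussian(mu_n, nu_n, psi_n, d):
    """Posterior-expected mean and covariance of the Gaussian that would
    serve as the shape prior in a subsequent MAP fitting step."""
    cov = psi_n / (nu_n - d - 1)                # E[Sigma] under the inverse-Wishart posterior
    return mu_n, cov

# Toy usage: d-dimensional PDM parameter vectors from a few previous frames.
d = 4
rng = np.random.default_rng(0)
X = rng.normal(size=(10, d))                    # placeholder tracked parameters
mu0, kappa0, nu0, psi0 = np.zeros(d), 1.0, d + 2.0, np.eye(d)
mu_n, kappa_n, nu_n, psi_n = niw_update(mu0, kappa0, nu0, psi0, X)
prior_mean, prior_cov = expected_gaussian(mu_n, nu_n, psi_n, d)
```

In a tracking setting the posterior hyperparameters from one frame can be fed back as the prior for the next, so the Gaussian shape prior (here `prior_mean`, `prior_cov`) adapts to the observed dynamics; how the paper combines this prior with the detector response maps in the MAP step is not reproduced here.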