Deep Learning paper notes -- Recover Canonical-View Faces in the Wild with Deep Neural Networks


Paper source: CVPR 2014

Zhenyao Zhu, Ping Luo, Xiaogang Wang, Xiaoou Tang

(The Chinese University of Hong Kong group is really strong; they publish a lot at CVPR.)

Main content:

The paper proposes using deep learning (a CNN) to reconstruct a canonical-view (frontal) face image, and then using the reconstructed frontal face for face verification, which of course achieves higher accuracy than verifying on the original faces. Learning the transformation from a face in an arbitrary pose to its canonical view is cast as a regression problem, so in principle it does not have to be solved with a DL method.

Existing methods fall into two categories: 1. build a 3D face model; 2. work directly in the 2D image space (the approach used in this paper).

Main steps:

1. Selection of the frontal (canonical) face; 2. Face reconstruction.

1. Criteria for selecting the frontal face: (1) left-right symmetry; (2) the rank of the image matrix; (3) a combination of (1) and (2), which is the criterion used in the paper.

So the measurement formula used is as follows:

Here Y_i is a face image; P and Q are constant parameter matrices; the first term of the measurement captures left-right symmetry, the second the rank of the image; and λ is the trade-off between the two criteria.
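The formula itself is presumably of the form below; the nuclear-norm term is my reconstruction, since the note only says "rank", so take the exact form as an assumption:

$$ M(Y_i) = \| Y_i P - Y_i Q \|_F^2 \;-\; \lambda \, \| Y_i \|_* $$

The Frobenius-norm term is small when the face is left-right symmetric, while the nuclear norm \( \| Y_i \|_* \) (the sum of singular values, a convex surrogate of matrix rank) is large for sharp, detailed images; the image of an identity with the smallest M(Y_i) is taken as its canonical view.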

In the paper, the authors simply take the image with the smallest M value. (This may be a weakness, or at least leave room for improvement; the authors themselves note that, for instance, a linear combination of images could be used to compute the frontal face instead.)
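A minimal numpy sketch of this selection rule, assuming grayscale images and interpreting the symmetry term as the distance between the left half and the mirrored right half (the exact P, Q matrices and the value of λ in the paper may differ):

```python
import numpy as np

def frontal_measure(y: np.ndarray, lam: float = 1.0) -> float:
    """Hypothetical frontal-face measurement: a left-right symmetry term
    minus a nuclear-norm (rank surrogate) term.  Smaller means "more frontal"."""
    _, w = y.shape
    left = y[:, : w // 2]
    right = y[:, w - w // 2:]
    # Symmetry: squared Frobenius distance between the left half and the
    # horizontally mirrored right half of the image.
    sym = float(np.sum((left - right[:, ::-1]) ** 2))
    # Nuclear norm (sum of singular values), a convex surrogate of matrix rank;
    # a sharp, detailed (frontal) face tends to have a larger nuclear norm.
    nuc = float(np.sum(np.linalg.svd(y, compute_uv=False)))
    return sym - lam * nuc

def select_canonical(images: list, lam: float = 1.0) -> int:
    """Return the index of the image (of one identity) with the smallest measurement."""
    scores = [frontal_measure(y, lam) for y in images]
    return int(np.argmin(scores))
```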

2. Face Reconstruction:

Once the frontal face has been selected, a deep network can be trained for reconstruction with the following objective:

Here W denotes the parameters of the deep neural network and Y_i is the selected frontal face.
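The objective is presumably a least-squares reconstruction loss; f and X_i below are my notation (not in the note) for the network and an input face of the same identity in an arbitrary pose:

$$ \min_W \sum_i \| f(X_i; W) - Y_i \|_F^2 $$

i.e., the network f(·; W) regresses the pixels of the canonical view Y_i from the non-frontal input X_i.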

The structure of the network is as follows:

It contains three convolution layers; the first two are followed by max pooling, and the last layer is fully connected and does not share weights. The overall structure is not a big departure from the classic CNN.
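A rough PyTorch sketch of such a network; the input/output resolution (64x64 grayscale), filter counts, kernel sizes, and activation are all assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class FaceReconstructionNet(nn.Module):
    """Three conv layers, max pooling after the first two, and a fully
    connected output layer that regresses the pixels of the canonical-view
    face.  All sizes here are assumed, not taken from the paper."""

    def __init__(self, image_size: int = 64):
        super().__init__()
        self.image_size = image_size
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.Sigmoid(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.Sigmoid(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 32, kernel_size=5, padding=2), nn.Sigmoid(),
        )
        # Fully connected output layer: maps the last feature map to the
        # reconstructed frontal face (flattened pixels).
        self.fc = nn.Linear(32 * (image_size // 4) ** 2, image_size * image_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        y = self.fc(h.flatten(start_dim=1))
        return y.view(-1, 1, self.image_size, self.image_size)

# Training would use the least-squares objective above, e.g.:
# loss = ((net(x_batch) - y_frontal_batch) ** 2).sum()
```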

The process of verification:

1. For each training image pair, reconstruct the frontal faces, detect 5 facial landmarks, and extract patches around these landmarks.

2. Train a network on each patch pair; the resulting per-patch features are then concatenated (cascaded) to form the final feature vector.

3. Reduce the feature dimensionality with PCA and classify with an SVM (a binary same/different-person classification problem); a sketch of this step follows below.
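A minimal scikit-learn sketch of step 3, assuming the cascaded patch features for each image pair have already been computed; the variable names, the 200 PCA components, and the linear SVM kernel are my assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def pair_descriptor(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """One simple way to turn the cascaded patch features of the two
    reconstructed frontal faces into a single descriptor for the pair."""
    return np.concatenate([feat_a, feat_b])

def train_verifier(X_pairs: np.ndarray, y_pairs: np.ndarray, n_components: int = 200):
    """X_pairs: (n_pairs, d) descriptors built e.g. with pair_descriptor;
    y_pairs: 1 = same person, 0 = different person."""
    pca = PCA(n_components=n_components).fit(X_pairs)       # dimensionality reduction
    svm = SVC(kernel="linear").fit(pca.transform(X_pairs), y_pairs)
    return pca, svm

def verify(pca: PCA, svm: SVC, descriptor: np.ndarray) -> bool:
    """Binary decision for one test pair: same identity or not."""
    return bool(svm.predict(pca.transform(descriptor.reshape(1, -1)))[0] == 1)
```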

Experiment details: instead of training the DL model on images from LFW, the authors train on a different face dataset, CelebFaces, and obtain 96.45% accuracy on LFW.

Finally, the paper shows examples of their reconstruction results.
