The practical meaning of eigenvalues and eigenvectors

After the face detection step described earlier, the detected face is extracted and saved for training or recognition. The code to extract the face region is as follows:

void getImageRect(IplImage* orgImage, CvRect rectInImage, IplImage* imgRect, double scale)
{
    // Extract the sub-image rectInImage from orgImage into imgRect
    IplImage* result = imgRect;
    CvRect size;
    size.x = rectInImage.x * scale;
    size.y = rectInImage.y * scale;
    size.width = rectInImage.width * scale;
    size.height = rectInImage.height * scale;

    // result = cvCreateImage(cvSize(size.width, size.height), orgImage->depth, orgImage->nChannels);
    // Extract the sub-image by setting a region of interest on the source
    cvSetImageROI(orgImage, size);
    cvCopy(orgImage, result, NULL);
    cvResetImageROI(orgImage);
}
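For example, a caller could pass in the rectangle returned by the face detector (a minimal sketch; faceRect and frame are assumed to come from the earlier detection step, not shown here):

    // Hypothetical caller: copy the detected face region into its own image.
    // faceRect and frame are assumed to come from the detection code above.
    IplImage* faceImg = cvCreateImage(cvSize(faceRect.width, faceRect.height),
                                      frame->depth, frame->nChannels);
    getImageRect(frame, faceRect, faceImg, 1.0);  // scale 1.0 keeps detector coordinates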


Face preprocessing

Now that you have a face image, you can use it for face recognition. However, if you try to do face recognition directly on an unprocessed image, you will probably lose at least 10% accuracy.

In a face recognition system, it is very important to apply various preprocessing techniques to standardize the images before they are identified. Most face recognition algorithms are sensitive to lighting conditions: if the system was trained in a dark room, it may fail to recognize people in a bright room, and so on. This problem is referred to as lighting dependence. There are many other examples: the face should be in a very fixed position within the image (for example, the eyes at the same pixel coordinates), at a fixed size and rotation angle, and differences in hair and accessories, expression (smiling, angry, etc.), and light direction (from the left, from above, etc.) all matter. This is why it is so important to use a good image preprocessing filter before face recognition. You should also do other things, such as removing the extra pixels around the face (for example, with an elliptical mask that shows only the inner face region, not the hair or the picture background, since they change more than the face region does).

For simplicity, the face recognition system I am showing you is the eigenface method using grayscale images. So I will show you how to convert a color image to grayscale, and then use histogram equalization as an automatic way to standardize the brightness and contrast of the face images. For better results you could use color face recognition (ideally with color histogram fitting in HSV or another color space instead of RGB), or apply more preprocessing stages such as edge enhancement, contour detection, motion detection, and so on.
You can see an example of the preprocessing stage:



This is the basic routine: it converts an image in RGB format (or an image that is already gray) to grayscale, resizes it to fixed dimensions, and then applies histogram equalization to obtain consistent brightness and contrast.
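A minimal sketch of that routine using the OpenCV 1.x C API (the preprocessFace name and the 100x100 output size are illustrative assumptions; use whatever size your training set uses):

    // Convert to grayscale, resize to a fixed size, and equalize the histogram.
    // Requires the OpenCV headers; the 100x100 size is an assumption.
    IplImage* preprocessFace(IplImage* input)
    {
        IplImage* gray;
        IplImage* resized   = cvCreateImage(cvSize(100, 100), IPL_DEPTH_8U, 1);
        IplImage* equalized = cvCreateImage(cvSize(100, 100), IPL_DEPTH_8U, 1);

        if (input->nChannels == 3) {
            gray = cvCreateImage(cvGetSize(input), IPL_DEPTH_8U, 1);
            cvCvtColor(input, gray, CV_BGR2GRAY);   // color -> grayscale
        } else {
            gray = cvCloneImage(input);             // already grayscale
        }

        cvResize(gray, resized, CV_INTER_LINEAR);   // fixed dimensions
        cvEqualizeHist(resized, equalized);         // standard brightness/contrast

        cvReleaseImage(&gray);
        cvReleaseImage(&resized);
        return equalized;                           // caller releases this image
    }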

PCA Principle

Now that you have a preprocessed face image, you can perform face recognition with eigenfaces (PCA). OpenCV comes with the "cvEigenDecomposite()" function, which performs the PCA projection, but you need a database of images (the training set) to teach the machine how to recognize each person.

So you should collect a set of preprocessed face images of each person you want to recognize. For example, to recognize someone from a class of 10 people, you could store 20 images of each person, for a total of 200 preprocessed face images of the same size (say, 100x100 pixels).

The theory of eigenfaces is explained in two Servo Magazine articles (Face Recognition with Eigenfaces), but I will still try to explain it here.

We use "PCA" to convert your 200 training images into a "feature face" set that represents the main difference between these training images. First it will generate an "average face picture" of these images by getting the average of each pixel. Then the feature face will be compared with the "average human face". The first feature face is the main facial difference, the second feature face is the second important facial difference, etc... Until you have about 50 feature faces representing the difference of most training set pictures.



In the example pictures above you can see the average face and the first and last eigenfaces. Notice that the average face shows the smooth facial structure of a generic person, the first eigenfaces show the most important facial differences, and the last eigenfaces (e.g., eigenface 119) are mostly image noise. You can see the first 32 eigenfaces below.



In a nutshell, the eigenface method (Principal Component Analysis) calculates the main differences between the images in the training set and uses these "differences" to represent each training image.
For example, one training image might be made up of the following:

(average face) + (13.5% of eigenface0) - (34.3% of eigenface1) + (4.7% of eigenface2) + ... + (0.0% of eigenface199).
Once calculated, this training image can be thought of as just these 200 ratios:

{13.5,-34.3, 4.7, ..., 0.0}.

Multiplying the eigenface images by their ratios and adding the average face image makes it perfectly possible to restore the training image from these 200 ratios. But since many of the later eigenfaces are image noise or contribute little to the image, the table of ratios can be reduced to just the main ones, such as the first 30, without much impact on image quality. So now you can represent all 200 training images using just 30 eigenface images, the average face image, and a table of 30 ratios per image.
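As a sketch of that reconstruction in the OpenCV 1.x C API (the names eigenVectArr, pAvgTrainImg, coeffs, and nEigens are assumptions; they correspond to the outputs of the training and projection functions described below):

    // Reconstruct a face as: average + sum_i( coeffs[i] * eigenface[i] ).
    // The eigenface images and the average face are floating-point images,
    // as produced by cvCalcEigenObjects() (described below).
    IplImage* proj = cvCreateImage(cvGetSize(pAvgTrainImg), IPL_DEPTH_32F, 1);
    cvCopy(pAvgTrainImg, proj, NULL);
    for (int i = 0; i < nEigens; i++)
        cvScaleAdd(eigenVectArr[i],                // eigenface i
                   cvScalar(coeffs[i], 0, 0, 0),   // its ratio
                   proj, proj);                    // accumulate into proj

OpenCV's C API also offers cvEigenProjection(), which performs this reconstruction in a single call.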

To recognize a person in another image, you apply the same PCA computation: using the same 200 eigenfaces, find the 200 ratios that represent the input image. Again you can keep only the first 30 ratios and ignore the rest, because they are secondary. Then, by searching the stored tables of ratios for the known people in the database, you find whose first 30 ratios are closest to the first 30 ratios of the input image. This is the basic method for finding which of the 200 training images is most similar to the input image.

Training Pictures

To create the face recognition database, you train from a text file that lists the image files and the person each file represents; the training results are then written to a "facedata.xml" file.
For example, you could enter these lines into a text file named "Trainingphoto.txt":
Joke1.jpg
Joke2.jpg
Joke3.jpg
Joke4.jpg
Lily1.jpg
Lily2.jpg
Lily3.jpg
Lily4.jpg
This tells the program that the first person's name is "Joke" and there are four preprocessed face images of Joke, and that the second person's name is "Lily", with four images of her. The program can load these images into an array of images using the "loadFaceImgArray()" function.
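A minimal sketch of what such a loader could look like (the one-filename-per-line format follows the example above; a real version would also record each person's name, which this sketch omits):

    // Load each image listed in the training file as a grayscale IplImage.
    // Requires <stdio.h> plus the OpenCV headers; returns the number loaded.
    int loadFaceImgArray(const char* filename, IplImage** faceImgArr, int maxFaces)
    {
        char imgFilename[512];
        int nFaces = 0;
        FILE* f = fopen(filename, "r");
        if (!f) return 0;
        while (nFaces < maxFaces && fscanf(f, "%511s", imgFilename) == 1) {
            faceImgArr[nFaces] = cvLoadImage(imgFilename, CV_LOAD_IMAGE_GRAYSCALE);
            if (faceImgArr[nFaces]) nFaces++;   // skip files that fail to load
        }
        fclose(f);
        return nFaces;
    }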

To create the database from these loaded images, you use OpenCV's "cvCalcEigenObjects()" and "cvEigenDecomposite()" functions.

The function that computes the eigenspace:

void cvCalcEigenObjects(int nObjects, void* input, void* output, int ioFlags,
                        int ioBufSize, void* userData, CvTermCriteria* calcLimit,
                        IplImage* avg, float* eigVals)

nObjects: the number of targets, i.e., the number of input training images.
input: the input training images.
output: the output eigen objects (eigenfaces), nEigens in total.
ioFlags, ioBufSize: 0 by default.
userData: pointer to the structure that contains all the necessary data for the callback functions.
calcLimit: the criteria for ending the iterative computation of the eigen objects. Depending on calcLimit's type, the computation finishes either when the first nEigens dominant eigen objects (i.e., the first nEigens eigenvalues) have been extracted, or when the ratio of the current eigenvalue to the largest eigenvalue drops below calcLimit's epsilon.
A typical assignment is: calcLimit = cvTermCriteria(CV_TERMCRIT_ITER, nEigens, 1);
Its type is defined as follows:
typedef struct CvTermCriteria
{
    int type;        // CV_TERMCRIT_ITER and/or CV_TERMCRIT_EPS
    int max_iter;    // maximum number of iterations
    double epsilon;  // required accuracy of the result
} CvTermCriteria;
avg: the averaged image of the training samples.
eigVals: pointer to a row vector of eigenvalues in descending order; may be 0 (NULL).
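Putting these parameters together, a training call might look like this (a sketch; faceImgArr and nTrainFaces are assumed to come from the loader above, and the other names are illustrative):

    // Allocate the eigenface images and run PCA over the training set.
    int nEigens = nTrainFaces - 1;        // at most nObjects - 1 useful eigenfaces
    CvSize faceSize = cvGetSize(faceImgArr[0]);
    IplImage** eigenVectArr = (IplImage**)cvAlloc(sizeof(IplImage*) * nEigens);
    for (int i = 0; i < nEigens; i++)
        eigenVectArr[i] = cvCreateImage(faceSize, IPL_DEPTH_32F, 1);
    IplImage* pAvgTrainImg = cvCreateImage(faceSize, IPL_DEPTH_32F, 1);
    float* eigenVals = (float*)cvAlloc(sizeof(float) * nEigens);
    CvTermCriteria calcLimit = cvTermCriteria(CV_TERMCRIT_ITER, nEigens, 1);

    cvCalcEigenObjects(nTrainFaces, (void*)faceImgArr, (void*)eigenVectArr,
                       CV_EIGOBJ_NO_CALLBACK, 0, 0, &calcLimit,
                       pAvgTrainImg, eigenVals);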

Finally, the resulting data is written to the "facedata.xml" file, which can be reloaded at any time to recognize the people it was trained on.

The function that projects an image into the eigenspace:

void cvEigenDecomposite(IplImage* obj, int nEigObjs, void* eigInput,
                        int ioFlags, void* userData, IplImage* avg, float* coeffs);


obj: the input image, either a training image or an image to recognize.
nEigObjs: the number of eigen objects (eigenfaces) in the eigenspace.
eigInput: the eigenfaces of the eigenspace.
ioFlags, userData: 0 by default.
avg: the averaged image of the eigenspace.
coeffs: the only output: the projection of the face onto the subspace, i.e., its eigen decomposition coefficients (the ratios).
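For example, projecting one preprocessed face onto the trained eigenspace might look like this (a sketch reusing the assumed names from the training sketch above):

    // Project a preprocessed face image onto the PCA subspace.
    float* coeffs = (float*)cvAlloc(sizeof(float) * nEigens);
    cvEigenDecomposite(faceImg, nEigens, (void*)eigenVectArr,
                       CV_EIGOBJ_NO_CALLBACK, 0, pAvgTrainImg, coeffs);
    // coeffs[0..nEigens-1] now holds the ratios that represent faceImg.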

The recognition process

1. Read the image to be tested.

2. Load the average face, the eigenfaces, and the eigenvalues (ratios) from the face recognition database file "facedata.xml", using the function "loadTrainingData()".

3. Using OpenCV's "cvEigenDecomposite()" function, project each input image onto the PCA subspace to see which combination of eigenface ratios best represents this image.

4. Now that the eigenvalues (eigenface ratios) represent the input image, the program needs to look through the original training images and find the one with the most similar ratios. The mathematics is done in the "findNearestNeighbor()" function using the Euclidean distance: it simply checks how similar the input image is to each training image and finds the most similar one, i.e., the image closest to the input image in Euclidean space (see the sketch after this list). As mentioned in the Servo Magazine article, you may get more accurate results with the Mahalanobis distance (define USE_MAHALANOBIS_DISTANCE in the code).

5. The distance between the input image and the most similar training image is used to determine the confidence level, as a guide to whether someone was actually recognized. A confidence of 1.0 means they are identical, while 0.0 or a negative confidence means they are very different. But note that the confidence formula I used in the code is only a very basic confidence measure and is not very reliable; I just think most people would like to see a rough confidence value. You may find that it gives wrong values for your images, so you can disable it if you want (for example, set the confidence to a constant 1.0).

Once you know which training image is most similar to the input image, and assuming the confidence value is not too low (it should be at least 0.6 or higher), the program has determined who that person is; in other words, it has recognized that person.
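A sketch of that nearest-neighbor search with a basic confidence measure (the projectedTrainFaceMat layout and the exact confidence formula are assumptions in the spirit of the description above):

    // Return the index of the training face whose ratios are closest (in
    // Euclidean distance) to the input coefficients; also output a rough
    // confidence. Requires <math.h> and <float.h>.
    // projectedTrainFaceMat[i][j] is assumed to hold ratio j of training face i.
    int findNearestNeighbor(float* coeffs, float** projectedTrainFaceMat,
                            int nTrainFaces, int nEigens, float* pConfidence)
    {
        double leastDistSq = DBL_MAX;
        int iNearest = 0;
        for (int iTrain = 0; iTrain < nTrainFaces; iTrain++) {
            double distSq = 0;
            for (int i = 0; i < nEigens; i++) {
                float d = coeffs[i] - projectedTrainFaceMat[iTrain][i];
                distSq += d * d;              // squared Euclidean distance
            }
            if (distSq < leastDistSq) {
                leastDistSq = distSq;
                iNearest = iTrain;
            }
        }
        // Rough confidence: 1.0 for an identical face, falling towards 0.0 or
        // below as the distance grows; the 255 scale assumes 8-bit pixels.
        *pConfidence = 1.0f -
            (float)(sqrt(leastDistSq / (double)(nTrainFaces * nEigens)) / 255.0);
        return iNearest;
    }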
