Original: http://blog.csdn.net/dujian996099665/article/details/8886576
1. The original LBP algorithm
The basic idea of LBP is to summarize each pixel of an image by comparing it with its local neighborhood. Take a pixel as the center and threshold its neighbors against it: a neighbor whose intensity is greater than or equal to that of the center pixel is marked 1, otherwise 0. Each pixel is then described by a binary number such as 11001111. With 8 surrounding pixels you end up with 2^8 = 256 possible combinations, called local binary patterns, or sometimes LBP codes. The first LBP operator described in the literature actually used a fixed 3*3 neighborhood.
More formally, the LBP operator can be written as

    LBP(x_c, y_c) = \sum_{p=0}^{P-1} 2^p \, s(i_p - i_c)

where (x_c, y_c) is the center pixel with intensity i_c, i_p is the intensity of the p-th neighboring pixel, and s is the sign function:

    s(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases}
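For concreteness, here is a small worked example; the intensities are arbitrary values chosen only for illustration. Let the center pixel have intensity i_c = 90 and let its eight neighbors, in the order p = 0, ..., 7, have intensities 80, 95, 100, 70, 120, 60, 90, 110. Thresholding each neighbor against the center gives the bits 0, 1, 1, 0, 1, 0, 1, 1, so

    LBP = \sum_{p=0}^{7} 2^p \, s(i_p - i_c) = 2^1 + 2^2 + 2^4 + 2^6 + 2^7 = 214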
This description captures very fine-grained image detail; in fact, the authors were able to compete with state-of-the-art results in texture classification. Soon after the operator was published, it was noted that a fixed neighborhood fails to encode details that differ in scale. The operator was therefore extended in [AHP04] to use a variable neighborhood: an arbitrary number of neighbors is aligned on a circle of variable radius, which makes it possible to capture neighborhoods like the following:
For a given point (x_c, y_c), the position of its p-th neighbor (x_p, y_p) on the circle can be computed as

    x_p = x_c + R \cos\!\left(\frac{2\pi p}{P}\right), \qquad
    y_p = y_c - R \sin\!\left(\frac{2\pi p}{P}\right)

where R is the radius of the circle and P is the number of sample points.
This operator is an extension of the original LBP codes, so it is sometimes called extended LBP (also referred to as circular LBP). If a point's coordinates on the circle do not fall on integer image coordinates, the point is interpolated. Computer science has a bunch of clever interpolation schemes; the OpenCV implementation uses bilinear interpolation.
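As an illustration, here is a minimal sketch of how such a circular LBP with bilinear interpolation can be implemented with the OpenCV C++ API. The function name elbp and its parameters radius and neighbors are my own choices, not necessarily the code used later in this post; it assumes an 8-bit single-channel input.

#include <opencv2/opencv.hpp>
#include <cmath>

// Circular (extended) LBP: P sample points on a circle of radius R around
// each pixel, with bilinear interpolation for non-integer sample positions.
cv::Mat elbp(const cv::Mat& src, int radius, int neighbors)
{
    CV_Assert(src.type() == CV_8UC1);
    cv::Mat dst = cv::Mat::zeros(src.rows - 2 * radius, src.cols - 2 * radius, CV_32SC1);

    for (int n = 0; n < neighbors; ++n) {
        // Offset of the n-th sample point: x = R*cos(2*pi*n/P), y = -R*sin(2*pi*n/P)
        double x = radius * std::cos(2.0 * CV_PI * n / neighbors);
        double y = -radius * std::sin(2.0 * CV_PI * n / neighbors);
        // Integer positions around the sample point and the fractional parts
        int fx = static_cast<int>(std::floor(x)), cx = static_cast<int>(std::ceil(x));
        int fy = static_cast<int>(std::floor(y)), cy = static_cast<int>(std::ceil(y));
        double tx = x - fx, ty = y - fy;
        // Bilinear interpolation weights
        double w1 = (1 - tx) * (1 - ty), w2 = tx * (1 - ty);
        double w3 = (1 - tx) * ty,       w4 = tx * ty;

        for (int i = radius; i < src.rows - radius; ++i) {
            for (int j = radius; j < src.cols - radius; ++j) {
                // Interpolated intensity at the sample point
                double t = w1 * src.at<uchar>(i + fy, j + fx)
                         + w2 * src.at<uchar>(i + fy, j + cx)
                         + w3 * src.at<uchar>(i + cy, j + fx)
                         + w4 * src.at<uchar>(i + cy, j + cx);
                // Threshold against the center pixel and set bit n of the code
                if (t >= src.at<uchar>(i, j))
                    dst.at<int>(i - radius, j - radius) |= (1 << n);
            }
        }
    }
    return dst;
}

For radius = 1 and neighbors = 8 this reduces, up to the interpolation of the diagonal samples, to the original 3*3 operator.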
2. Implementation of the original LBP algorithm
Attached code:
// LBP.cpp : defines the entry point of the console application.
/***********************************************************
 * OpenCV 2.4.4 test routines
 * provided by Du Jianjian
 ***********************************************************/
#include "stdafx.h"
#include <opencv2/opencv.hpp>
#include <cv.h>
#include
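The listing above breaks off after the include directives, so the following is a minimal sketch of a basic 3*3 LBP routine plus a small test program in the same OpenCV 2.4-era style. The function name lbp3x3 and the test image file name are placeholders of mine, not the author's original code.

#include <opencv2/opencv.hpp>

// Compute the 3x3 LBP code of every interior pixel of an 8-bit grayscale image.
cv::Mat lbp3x3(const cv::Mat& src)
{
    CV_Assert(src.type() == CV_8UC1);
    cv::Mat dst = cv::Mat::zeros(src.rows - 2, src.cols - 2, CV_8UC1);

    for (int i = 1; i < src.rows - 1; ++i) {
        for (int j = 1; j < src.cols - 1; ++j) {
            uchar center = src.at<uchar>(i, j);
            unsigned char code = 0;
            // Threshold the 8 neighbors against the center, clockwise from top-left.
            code |= (src.at<uchar>(i - 1, j - 1) >= center) << 7;
            code |= (src.at<uchar>(i - 1, j    ) >= center) << 6;
            code |= (src.at<uchar>(i - 1, j + 1) >= center) << 5;
            code |= (src.at<uchar>(i,     j + 1) >= center) << 4;
            code |= (src.at<uchar>(i + 1, j + 1) >= center) << 3;
            code |= (src.at<uchar>(i + 1, j    ) >= center) << 2;
            code |= (src.at<uchar>(i + 1, j - 1) >= center) << 1;
            code |= (src.at<uchar>(i,     j - 1) >= center) << 0;
            dst.at<uchar>(i - 1, j - 1) = code;
        }
    }
    return dst;
}

int main()
{
    // "lena.jpg" is just a placeholder path for the test image used in the post.
    cv::Mat color = cv::imread("lena.jpg");
    if (color.empty())
        return -1;

    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);  // LBP needs a single-channel image

    cv::Mat lbp = lbp3x3(gray);
    cv::imshow("gray", gray);
    cv::imshow("LBP", lbp);
    cv::waitKey(0);
    return 0;
}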
3. Example results: LBP texture features
Original image: Lena.jpg
After conversion to grayscale:
LBP features extracted from the image:
LBP features extracted from a face image:
4. Precautions
1. The two functions can only process grayscale images, so before using them the original image must be converted to grayscale.
2. Workaround for the problem mentioned earlier where the extracted LBP texture image shows only 1/3 or 1/4 of the image area:
This happens because the input image is not a grayscale image. Convert the color (multi-channel) image into a single-channel image and pass that into the function; you will then get the LBP texture features of the whole image (see the sketch after this list).
3. How to load a grayscale image:
Set the second parameter of cvLoadImage() and imread() to CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR.
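A small sketch of both approaches, using the OpenCV 2.4-era flag names referred to above; the file name face.jpg is just a placeholder.

#include <opencv2/opencv.hpp>

int main()
{
    // Approach 1: load in color (imread's default) and convert to a
    // single-channel grayscale image before computing LBP features.
    cv::Mat color = cv::imread("face.jpg");
    if (color.empty())
        return -1;
    cv::Mat gray;
    cv::cvtColor(color, gray, cv::COLOR_BGR2GRAY);

    // Approach 2: keep the file's own depth and channel count at load time,
    // as suggested above; a grayscale file then stays single-channel.
    cv::Mat asStored = cv::imread("face.jpg",
                                  CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR);

    // Either gray or asStored (when single-channel) can be passed to the LBP functions.
    return 0;
}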
5. References
The facerec documentation in OpenCV 2.4.4
http://blog.csdn.net/guoming0000/article/details/8022197
etc.