Adaptive Skin Color Recognition



Skin color detection is an important topic in digital image processing. Many methods already exist for this problem, and many of them work well, but almost all have their own shortcomings; a perfect result is hard to achieve, since whether the detection succeeds depends on many factors.

My approach is skin color detection in the YUV and YIQ spaces with illumination-adaptive correction. The principle is simple; you can refer to the following material:

http://wenku.baidu.com/link?url=m01RY0xYaraGnOmWVSSthhuGZq-yuC_JuvCq9JknxLRaTpLWV9X_KhrF2f4XmnkHHgY8HB0ADy-YKFcoijBxj3KyWU-9YnjqcYlEcYoJdlC
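In brief, the components used throughout the code below are the standard chrominance channels computed directly from the (gamma-corrected) RGB values; these coefficients are exactly the ones that appear in the code:

U = -0.147*R - 0.289*G + 0.436*B
V =  0.615*R - 0.515*G - 0.100*B    (YUV space)
I =  0.596*R - 0.274*G - 0.322*B    (YIQ space)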

However, there is a problem in that document. When calculating the UV phase angle, it defines:

Angle = arctan(|V| / |U|)

This does not seem correct on its own, but I searched for other sources and found nothing conclusive. According to my own reasoning, the calculation should depend on the quadrant of (U, V):

1. V > 0 && U > 0  // first quadrant

   Angle = arctan(V/U) * 180/Pi;

2. V > 0 && U < 0  // second quadrant

   Angle = 180 - arctan(|V|/|U|) * 180/Pi;

3. V < 0 && U < 0  // third quadrant

   Angle = 180 + arctan(|V|/|U|) * 180/Pi;

4. V < 0 && U > 0  // fourth quadrant

   Angle = 360 - arctan(|V|/|U|) * 180/Pi;

As far as the mathematics I know goes, there should be no error here, but this is my own reasoning; if I have made a mistake, please point it out!
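For what it is worth, the same quadrant table can be written more compactly with atan2. The helper below is my own sketch, not part of the original code; for U and V not both zero it produces the same angle in [0, 360):

// Sketch only: atan2-based equivalent of the quadrant table above.
#include <cmath>

float UVPhaseAngle(float U, float V)
{
    if (U == 0.0f && V == 0.0f)
        return 0.0f;                                    // phase undefined; use 0 by convention
    float deg = (float)(atan2(V, U) * 180.0 / 3.1416);  // atan2 result is in (-Pi, Pi]
    if (deg < 0.0f)
        deg += 360.0f;                                  // wrap into [0, 360)
    return deg;
}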


Another problem is the fixed thresholds. The reference assumes that the Angle value should lie in [105, 150] and the I value in [20, 80], but this is not ideal: there are many missed and false detections. Some errors are inevitable, but the result can still be improved. My solution is to use OpenCV's face detection to locate a face, take a specific region of it (the part below the eyes and above the mouth), compute the average Angle value and average I value over that region, and adapt the thresholds around those averages (a sketch of this step follows).

With this adaptation, the recognition result is much better.
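Concretely, the adaptive step can be sketched as follows. This is only an illustration of the idea, not code from the original post: it assumes the SkinIdentify class given below, that the face patch (below the eyes, above the mouth) has already been copied into separate R, G, B planes, and the +/- margins are arbitrary values of my own.

// Hypothetical helper: derive adaptive thresholds from a detected face patch.
#include "SkinIdentify.h"

void AdaptThresholds(SkinIdentify &skin,
                     byte *pFaceR, byte *pFaceG, byte *pFaceB,   // face patch, one plane per channel
                     int patchHeight, int patchWidth,
                     int &AngleMin, int &AngleMax, int &IMin, int &IMax)
{
    float avgAngle = 0.0f, avgI = 0.0f;
    skin.CalAvgAI(pFaceR, pFaceG, pFaceB, patchHeight, patchWidth, avgAngle, avgI);
    AngleMin = (int)(avgAngle - 15.0f);   // margins are illustrative; tune per application
    AngleMax = (int)(avgAngle + 15.0f);
    IMin     = (int)(avgI - 20.0f);
    IMax     = (int)(avgI + 20.0f);
}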


The following is the core part of the algorithm.


// SkinIdentify.h


typedef unsigned char byte;

class SkinIdentify
{
public:
    SkinIdentify(void);
    virtual ~SkinIdentify(void);
    void Run(byte *pSrcR, byte *pSrcG, byte *pSrcB, int Height, int Width,
             byte *pDstRGBData, byte *pDstRGBData1, byte *pDstRGBData2,
             int AngleMin, int AngleMax, int IMin, int IMax,
             float *pfAngle, float *pfI);
    void RunAgain(int Height, int Width,
                  byte *pDstRGBData, byte *pDstRGBData1, byte *pDstRGBData2,
                  int AngleMin, int AngleMax, int IMin, int IMax,
                  float *pfAngle, float *pfI);
    void CalAvgAI(byte *pSrcR, byte *pSrcG, byte *pSrcB, int Height, int Width,
                  float &AvgA, float &AvgI);
private:
    void GammaAdjust(byte *pGray, float *pGamma, int Height, int Width);
};


// SkinIdentify.cpp


# Include "StdAfx. h" # include "SkinIdentify. h" # include <cmath> # define Pi 3.1416 SkinIdentify: SkinIdentify (void) {} SkinIdentify ::~ SkinIdentify (void) {} void SkinIdentify: Run (byte * pSrcR, byte * pSrcG, byte * pSrcB, int Height, int Width, byte * pDstRGBData, byte * pDstRGBData1, byte * pDstRGBData2, int AngleMin, int AngleMax, int IMin, int IMax, float * pfAngle, float * pfI) {// Gamma Correction float * pGammaR = new float [Height * Width]; float * pGammaG = new float [Height * Width]; float * pgammonoclonal = new float [Height * Width]; GammaAdjust (pSrcR, pGammaR, Height, Width); GammaAdjust (pSrcG, pGammaG, Height, Width); GammaAdjust (pSrcB, pgamab, height, Width); // determine the combination of YUV and YQI space float U, V, Angle, I; float * pR = pGammaR, * pG = pGammaG, * pB = pgamab; float Imin = IMin * 1.0, Imax = IMax * 1.0; float Amin = AngleMin * 1.0, Amax = AngleMax * 1.0; for (int I = 0; I <Height; I ++) {for (int j = 0; j <Width; j ++) {U = (-0.147) * pGammaR [j]-0.289 * pGammaG [j] + 0.436 * pgammonoclonal [j]; V = 0.615 * pGammaR [j]-0.515 * pGammaG [j]-0.100 * pgammonoclonal [j]; // calculate the phase Angle if (U = 0) Angle = 0; else Angle = atan (abs (V/U); if (V> 0 & U <0) Angle = 180-Angle * 180/Pi; else if (V <0 & U <0) Angle = 180 + Angle * 180/Pi; else if (V <0 & U> 0) angle = 360-Angle * 180/Pi; // calculate the I value. I = 0.596 * pGammaR [j]-0.274 * pGammaG [j]-0.322 * pgammonoclonal [j]; pfAngle [j] = Angle; // Save the Angle value so that you can directly call the RunAgain function pfI [j] = I when changing parameters; // same as above // Effect of YUV space if (Angle >=amin & Angle <= Amax) {pDstRGBData [j] = 1 ;} else {pDstRGBData [j] = 0 ;}// effect of the yqi space if (I >= Imin & I <= Imax) {pDstRGBData1 [j] = 1 ;} else {pDstRGBData1 [j] = 0 ;}// combined effect if (Angle >=amin & Angle <= Amax & I> = Imin & I <= Imax) {pDstRGBData2 [j] = 1;} else {pDstRGBData2 [j] = 0 ;}} pSrcR + = Width; pSrcG + = Width; pSrcB + = Width; pGammaR + = Width; pGammaG + = Width; pgamab + = Width; pfAngle + = Width; pfI + = Width; pDstRGBData + = Width; pDstRGBData1 + = Width; pDstRGBData2 + = Width ;} delete [] pR; // pGammaR = NULL; delete [] pG; // pGammaG = NULL; delete [] pB; // pgammonoclonal = NULL;} void SkinIdentify :: gammaAdjust (byte * pGray, float * pGamma, int Height, int Width) {float a = 0.5; float x0 = 80.0, x1 = 175.0, x, y, cosx, Gammax; for (int I = 0; I <Height; I ++) {for (int j = 0; j <Width; j ++) {x = (float) pGray [j]; if (x >=0.0 & x <= x0) {y = Pi * x/(2.0 * x0 );} else if (x> x0 & x <= x1) {y = Pi/2.0;} else {y = Pi-(Pi * (Limit 0-x )) /(2.0 * (255-x1);} cosx = cos (y); Gammax = 1.0 + a * cosx; pGamma [I * Width + j] = 255 * pow (x * 1.0)/255), 1.0/Gammax);} pGray + = Width ;}} void SkinIdentify :: runAgain (int Height, int Width, byte * pDstRGBData, byte * pDstRGBData1, byte * pDstRGBData2, int AngleMin, int AngleMax, int IMin, int IMax, float * pfAngle, float * pfI) {// to call this function, you must first call the Run function to obtain the phase and I values of the image. 
// The calculation method is almost the same as the Run function float Angle and I; float Imin = IMin * 1.0, Imax = IMax * 1.0; float Amin = AngleMin * 1.0, Amax = AngleMax * 1.0; for (int I = 0; I <Height; I ++) {for (int j = 0; j <Width; j ++) {Angle = pfAngle [j]; I = pfI [j]; if (Angle> = Amin & Angle <= Amax) {pDstRGBData [j] = 1;} else {pDstRGBData [j] = 0 ;} if (I> = Imin & I <= Imax) {pDstRGBData1 [j] = 1;} else {pDstRGBData1 [j] = 0 ;} if (Angle> = Amin & Angle <= Amax & I> = Imin & I <= Imax) {pDstRGBData2 [j] = 1 ;} else {pDstRGBData2 [j] = 0 ;}} pfAngle + = Width; pfI + = Width; pDstRGBData + = Width; pDstRGBData1 + = Width; pDstRGBData2 + = Width ;}} void SkinIdentify: CalAvgAI (byte * pSrcR, byte * pSrcG, byte * pSrcB, int Height, int Width, float & AvgA, float & AvgI) {// This function is used to calculate the average phase angle value and I value for the area locked by face recognition. // The calculation method is almost the same as the Run function. float * pGammaR = new float [Height * Width]; float * pGammaG = new float [Height * Width]; float * pgammonoclonal = new float [Height * Width]; GammaAdjust (pSrcR, pGammaR, Height, Width); GammaAdjust (pSrcG, pGammaG, Height, Width); GammaAdjust (pSrcB, pgammonoclonal, Height, Width); float U, V, Angle, I; float * pR = pGammaR, * pG = pGammaG, * pB = pgammonoclonal; for (int I = 0; I <Height; I ++) {for (int j = 0; j <Width; j ++) {U = (-0.147) * pGammaR [j]-0.289 * pGammaG [j] + 0.436 * pgammonoclonal [j]; V = 0.615 * pGammaR [j]-0.515 * pGammaG [j]-0.100 * pgammonoclonal [j]; if (U = 0) Angle = 0; else Angle = atan (abs (V/U); if (V> 0 & U <0) Angle = 180-Angle * 180/Pi; else if (V <0 & U <0) Angle = 180 + Angle * 180/Pi; else if (V <0 & U> 0) angle = 360-Angle * 180/Pi; I = 0.596 * pGammaR [j]-0.274 * pGammaG [j]-0.322 * pgammonoclonal [j]; AvgI + = I; avgA + = Angle; if (Angle <1) int a = 1;} pGammaR + = Width; pGammaG + = Width; pgamab + = Width ;} avgA/= (Height * Width); AvgI/= (Height * Width); delete [] pR; delete [] pG; delete [] pB ;}
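For completeness, here is a minimal usage sketch of my own (not part of the original post); it assumes the image has already been split into separate R, G, B planes, and the second set of thresholds is only a placeholder:

// Minimal usage sketch: run the detector once with the thresholds from the
// referenced paper, then reuse the cached Angle/I planes through RunAgain
// when the thresholds change (e.g. after face-based adaptation).
#include "SkinIdentify.h"

void Demo(byte *pR, byte *pG, byte *pB, int Height, int Width)
{
    byte  *maskYUV  = new byte[Height * Width];
    byte  *maskYIQ  = new byte[Height * Width];
    byte  *maskBoth = new byte[Height * Width];
    float *angle    = new float[Height * Width];
    float *ivalue   = new float[Height * Width];

    SkinIdentify skin;
    // Fixed thresholds from the reference: Angle in [105, 150], I in [20, 80]
    skin.Run(pR, pG, pB, Height, Width, maskYUV, maskYIQ, maskBoth,
             105, 150, 20, 80, angle, ivalue);

    // Re-threshold with different values; the numbers here are only placeholders.
    skin.RunAgain(Height, Width, maskYUV, maskYIQ, maskBoth,
                  110, 145, 25, 75, angle, ivalue);

    delete[] maskYUV; delete[] maskYIQ; delete[] maskBoth;
    delete[] angle;   delete[] ivalue;
}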

I will not attach the visual interface implemented by MFC and the face recognition part implemented by OpenCV. If you are interested, please contact me!


Below are the recognition results (images omitted): the source image, the YUV-space result, the YIQ-space result, and the combined result.

In fact, the result on this image is not great, but it does show the respective weaknesses of the two spaces: the YUV mask misses the darker brown and black skin regions, while the YIQ mask misses the reddish regions. Their combination is still noticeably better!

