iOS face recognition (detection)
iOS's Core Image framework has a built-in face detection interface. The detection accuracy is only average; profile (side) faces, in particular, are basically not detected. Still, it is comparable to other similar offerings, and it is easy to use:
CIImage *image = [CIImage imageWithCGImage:aImage.CGImage];
NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:opts];
// Obtain the facial data
NSArray *features = [detector featuresInImage:image];
The resulting features array contains all of the detected face data. You can compute the positions as follows:
for (CIFaceFeature *f in features) {
    CGRect aRect = f.bounds;
    NSLog(@"%f, %f, %f, %f", aRect.origin.x, aRect.origin.y, aRect.size.width, aRect.size.height);
    // Eye and mouth positions, if they were detected
    if (f.hasLeftEyePosition)  NSLog(@"Left eye %g %g\n", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition) NSLog(@"Right eye %g %g\n", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)    NSLog(@"Mouth %g %g\n", f.mouthPosition.x, f.mouthPosition.y);
}
Note that the detected positions are coordinates on the image itself (the UIImage, not the UIImageView). If you want to draw the detected region on a view, you need to transform the coordinates (Core Image's Y axis runs opposite to UIKit's) and also account for how the image is scaled inside the view.
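For illustration, here is a minimal sketch of that conversion, assuming a UIImageView whose contentMode is UIViewContentModeScaleAspectFit and whose image is the same UIImage that was passed to the detector; the helper name and the aspect-fit math are my own, not from the original article:

// Converts a face rect from Core Image coordinates (origin at the
// bottom-left of the UIImage) into the UIImageView's coordinate space.
// Assumes imageView.contentMode == UIViewContentModeScaleAspectFit.
static CGRect FaceRectInImageView(CGRect faceRect, UIImage *image, UIImageView *imageView)
{
    CGSize imageSize = image.size;
    CGSize viewSize  = imageView.bounds.size;

    // Flip the Y axis: Core Image's origin is bottom-left, UIKit's is top-left.
    faceRect.origin.y = imageSize.height - faceRect.origin.y - faceRect.size.height;

    // Scale factor used by aspect-fit, plus the letterboxing offsets.
    CGFloat scale   = MIN(viewSize.width  / imageSize.width,
                          viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    return CGRectMake(faceRect.origin.x * scale + offsetX,
                      faceRect.origin.y * scale + offsetY,
                      faceRect.size.width  * scale,
                      faceRect.size.height * scale);
}

You would pass aRect from the loop above, together with the original UIImage and the target UIImageView, to a helper like this before drawing the rectangle on screen.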