iOS 8 Core Image in Swift: Face Detection and Mosaic


Other posts in this series:

iOS 8 Core Image in Swift: Automatic Image Enhancement and Built-in Filter Usage

iOS 8 Core Image in Swift: More Complex Filters

iOS 8 Core Image in Swift: Face Detection and Mosaic (this post)


Core Image not only ships with many built-in filters, it can also detect faces in images. Note, however, that Core Image performs only detection, not recognition: detecting a face means finding a region in the image that matches facial features (any face will do), whereas recognizing means identifying a specific person's face in the image. After finding a region that matches facial features, Core Image returns information about those features, such as the bounds of the face and the positions of the eyes and mouth.



Before detecting faces and marking the detected regions, set up the project: create a Single View Application, put a UIImageView in the Storyboard and set its ContentMode to Aspect Fit, connect the UIImageView to the VC as an outlet, then add a UIButton titled "Face Detection" and connect it to a faceDetecting action method. With Auto Layout and Size Classes disabled, the frames of the UIImageView and the VC's UI look as follows:

The following are the images used in the project. Click the image to display the source image:


Then add the following basic properties to the VC: originalImage and context (there is no getting around a CIContext when using Core Image).

class ViewController: UIViewController {

    lazy var originalImage: UIImage = {
        return UIImage(named: "Image")!
    }()

    lazy var context: CIContext = {
        return CIContext(options: nil)
    }()

......

Show originalImage in viewDidLoad:

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    self.imageView.image = originalImage
}

Now we can implement the faceDetecting method. In the Core Image framework, the CIDetector class provides image detection. Only a few APIs are needed to initialize a CIDetector and obtain the detection results:


@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage,
                                                options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

......

When using kCGImagePropertyOrientation, you may need to import the ImageIO framework.

originalImage and context are obtained through lazy loading. When creating a CIDetector object, you must tell it what to detect; besides CIDetectorTypeFace, a CIDetector can also detect QR codes. Then a context is passed in; multiple CIDetectors can share one context object. The third parameter is an options dictionary in which we can specify the detection accuracy: besides CIDetectorAccuracyHigh there is also CIDetectorAccuracyLow. Higher accuracy gives better detection results, but slower detection.

After the CIDetector is created, the CIImage to be analyzed is passed to it. Here I check whether the CIImage carries orientation metadata. If it does, I call the featuresInImage:options: method with the orientation, because the orientation is crucial to the CIDetector and directly determines whether detection succeeds. Some images carry no orientation metadata; the still from The Big Bang Theory used here has none, so the plain featuresInImage: method is executed instead, but in most cases the former should be used.

The featuresInImage method returns an array of CIFaceFeature objects. A CIFaceFeature contains the bounds of the face and the positions of the left eye, right eye, and mouth. Since we can use bounds to mark the face region, we might casually write the following code: get all the facial features and instantiate a UIView from each bounds to display it:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    for faceFeature in faceFeatures {
        let faceView = UIView(frame: faceFeature.bounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

Can we write it like that? If you run it, you will get the following result:

This is because our inputImage is created from originalImage, and the actual size of my originalImage is much larger than what is displayed: its width is 600 pixels. Because I named it @2x, it is displayed at 300 points, and the imageView shows it in Aspect Fit mode (the imageView is 300 points wide). The image is scaled down for display, but it is full size in memory. In addition, the CIImage coordinate system differs from the UIView coordinate system: CIImage uses a mathematical coordinate system with the origin at the bottom-left, while UIView puts the origin at the top-left. This is because the Core Image and Core Graphics frameworks come from Mac OS X, where this coordinate system has existed for many years; iOS imported these frameworks directly, which solved the low-level compatibility between Cocoa apps and iOS apps, but the difference still has to be handled at the upper layer. So the situation is actually like this:
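The flip between the two coordinate systems can be checked with plain arithmetic. A minimal sketch in modern Swift (the Rect type, the flipToUIKit helper, and the concrete sizes are made up for illustration; the app code itself uses CGAffineTransform):

```swift
import Foundation

// A face rectangle in Core Image's coordinate system (origin bottom-left)
// converted to UIKit's coordinate system (origin top-left).
struct Rect {
    var x, y, width, height: Double
}

func flipToUIKit(_ r: Rect, imageHeight: Double) -> Rect {
    // Applying scale(1, -1) followed by translate(0, -imageHeight) and taking
    // the bounding box leaves x, width, and height unchanged and maps the
    // origin's y to imageHeight - y - height.
    return Rect(x: r.x, y: imageHeight - r.y - r.height,
                width: r.width, height: r.height)
}

// A 100-tall face whose bottom edge is 50 up from the bottom of a
// 400-tall image sits 250 down from the top in UIKit coordinates.
let face = flipToUIKit(Rect(x: 10, y: 50, width: 100, height: 100),
                       imageHeight: 400)
print(face.y)  // 250.0
```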
We need to do two things: apply a transform to flip the coordinate system, and scale the result so it fits the imageView. Let's write the code again:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    // 1. Flip the Core Image coordinate system to match UIKit's.
    let inputImageSize = inputImage.extent().size
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformScale(transform, 1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

    for faceFeature in faceFeatures {
        var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
        // 2. Scale the bounds down to the displayed size.
        let scaleTransform = CGAffineTransformMakeScale(0.5, 0.5)
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, scaleTransform)
        let faceView = UIView(frame: faceViewBounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

Now it looks fine. In step 1 we set up a transform to adjust the coordinate system; in step 2 the bounds are scaled (equivalent to multiplying x, y, width, and height by 0.5). We hard-coded 0.5 because we know the actual scale is 0.5 (the source image is 600 pixels wide, the imageView 300 points), but a slight offset shows up after running:

This is because I set the imageView's ContentMode to Aspect Fit:

Generally we do not stretch photos; we usually fit by width and height, so we also need to account for Aspect Fit. The code above is modified as follows:

@IBAction func faceDetecting() {
    let inputImage = CIImage(image: originalImage)
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    var faceFeatures: [CIFaceFeature]!
    if let orientation: AnyObject = inputImage.properties()?[kCGImagePropertyOrientation] {
        faceFeatures = detector.featuresInImage(inputImage, options: [CIDetectorImageOrientation: orientation]) as [CIFaceFeature]
    } else {
        faceFeatures = detector.featuresInImage(inputImage) as [CIFaceFeature]
    }

    println(faceFeatures)

    // 1. Flip the Core Image coordinate system to match UIKit's.
    let inputImageSize = inputImage.extent().size
    var transform = CGAffineTransformIdentity
    transform = CGAffineTransformScale(transform, 1, -1)
    transform = CGAffineTransformTranslate(transform, 0, -inputImageSize.height)

    for faceFeature in faceFeatures {
        var faceViewBounds = CGRectApplyAffineTransform(faceFeature.bounds, transform)
        // 2. Compute the Aspect Fit scale and center offsets instead of hard-coding 0.5.
        let scale = min(imageView.bounds.size.width / inputImageSize.width,
                        imageView.bounds.size.height / inputImageSize.height)
        let offsetX = (imageView.bounds.size.width - inputImageSize.width * scale) / 2
        let offsetY = (imageView.bounds.size.height - inputImageSize.height * scale) / 2
        faceViewBounds = CGRectApplyAffineTransform(faceViewBounds, CGAffineTransformMakeScale(scale, scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY
        let faceView = UIView(frame: faceViewBounds)
        faceView.layer.borderColor = UIColor.orangeColor().CGColor
        faceView.layer.borderWidth = 2
        imageView.addSubview(faceView)
    }
}

In step 2, besides computing the scale from the width and height ratios, the x- and y-axis offsets are also computed, so that the code works whether the width or the height is the constrained dimension (the final division by 2 is because the scaled image is centered, leaving half of the leftover space on each side). Compile and run at different heights:
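The Aspect Fit arithmetic can be verified in isolation. A minimal sketch with plain Doubles (the aspectFit helper and the 600×400 image / 300×300 view sizes are hypothetical, chosen only for illustration):

```swift
import Foundation

struct Size {
    var width, height: Double
}

// Aspect Fit keeps the whole image visible, so it uses the smaller of the
// two width/height ratios; the scaled image is then centered, leaving half
// of the leftover space on each side.
func aspectFit(image: Size, view: Size) -> (scale: Double, offsetX: Double, offsetY: Double) {
    let scale = min(view.width / image.width, view.height / image.height)
    let offsetX = (view.width - image.width * scale) / 2
    let offsetY = (view.height - image.height * scale) / 2
    return (scale, offsetX, offsetY)
}

// A 600x400 image in a 300x300 view: width is the constrained dimension,
// so scale = 0.5 and the 200-point-tall result is centered vertically.
let fit = aspectFit(image: Size(width: 600, height: 400),
                    view: Size(width: 300, height: 300))
print(fit.scale, fit.offsetX, fit.offsetY)  // 0.5 0.0 50.0
```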

Face mosaic: after detecting faces, we can do some interesting things with them, such as pixellating (mosaicing) them:

This is an official Apple image showing how to mosaic all the faces in a photo: based on the source image, create a fully pixellated image; create a mask image for the detected faces; then use the mask image to blend the fully pixellated image with the source image. We add a button titled "Mosaic" on the VC, connect its event to a pixellated method, and then implement the mosaic effect. The procedure is as follows:
Create the fully pixellated image with the CIPixellate filter. Parameter settings: set inputImage to the source image; set the inputScale parameter as needed. inputScale ranges from 1 to 100, and the larger the value, the coarser the mosaic:
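Why a larger inputScale means a coarser mosaic can be modeled by snapping coordinates down to a grid. This is a conceptual simplification, not CIPixellate's actual sampling:

```swift
import Foundation

// Snap a coordinate down to the start of its grid cell. If every pixel in
// an n-by-n cell samples the same source location, the cell collapses to a
// single block of color, which is the visual essence of pixellation.
func snapToGrid(_ x: Int, scale: Int) -> Int {
    return (x / scale) * scale
}

print(snapToGrid(37, scale: 10))  // 30: pixels 30...39 share one block
print(snapToGrid(37, scale: 50))  // 0: larger scale, larger blocks
```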

To create the mask image for the detected faces, use a CIDetector to detect the faces, then use the CIRadialGradient filter to create a circle around each face, and use the CISourceOverCompositing filter to combine the individual masks (there are as many masks as there are faces):

Blend the pixellated image, the mask, and the source image with the CIBlendWithMask filter. The parameter settings are as follows: set inputImage to the pixellated image, inputBackgroundImage to the source image, and inputMaskImage to the mask image. The complete implementation is as follows:

@IBAction func pixellated() {
    // 1. Create a fully pixellated version of the source image.
    let filter = CIFilter(name: "CIPixellate")
    println(filter.attributes())
    let inputImage = CIImage(image: originalImage)
    filter.setValue(inputImage, forKey: kCIInputImageKey)
    // filter.setValue(max(inputImage.extent().size.width, inputImage.extent().size.height) / 60, forKey: kCIInputScaleKey)
    let fullPixellatedImage = filter.outputImage
    // let cgImage = context.createCGImage(fullPixellatedImage, fromRect: fullPixellatedImage.extent())
    // imageView.image = UIImage(CGImage: cgImage)

    // 2. Detect the faces.
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: context,
                              options: nil)
    let faceFeatures = detector.featuresInImage(inputImage)

    // 3. Build up the mask image while traversing the detected faces.
    var maskImage: CIImage!
    for faceFeature in faceFeatures {
        println(faceFeature.bounds)
        // 4. Create a circular gradient centered on each face.
        let centerX = faceFeature.bounds.origin.x + faceFeature.bounds.size.width / 2
        let centerY = faceFeature.bounds.origin.y + faceFeature.bounds.size.height / 2
        let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height)
        let radialGradient = CIFilter(name: "CIRadialGradient",
                                      withInputParameters: [
                                        "inputRadius0": radius,
                                        "inputRadius1": radius + 1,
                                        "inputColor0": CIColor(red: 0, green: 1, blue: 0, alpha: 1),
                                        "inputColor1": CIColor(red: 0, green: 0, blue: 0, alpha: 0),
                                        kCIInputCenterKey: CIVector(x: centerX, y: centerY)
                                      ])
        println(radialGradient.attributes())
        // 5. Crop the (infinite) gradient and composite it over the masks so far.
        let radialGradientOutputImage = radialGradient.outputImage.imageByCroppingToRect(inputImage.extent())
        if maskImage == nil {
            maskImage = radialGradientOutputImage
        } else {
            println(radialGradientOutputImage)
            maskImage = CIFilter(name: "CISourceOverCompositing",
                                 withInputParameters: [
                                   kCIInputImageKey: radialGradientOutputImage,
                                   kCIInputBackgroundImageKey: maskImage
                                 ]).outputImage
        }
    }

    // 6. Blend the pixellated image with the source image through the mask.
    let blendFilter = CIFilter(name: "CIBlendWithMask")
    blendFilter.setValue(fullPixellatedImage, forKey: kCIInputImageKey)
    blendFilter.setValue(inputImage, forKey: kCIInputBackgroundImageKey)
    blendFilter.setValue(maskImage, forKey: kCIInputMaskImageKey)

    // 7. Render and display the result.
    let blendOutputImage = blendFilter.outputImage
    let blendCGImage = context.createCGImage(blendOutputImage, fromRect: blendOutputImage.extent())
    imageView.image = UIImage(CGImage: blendCGImage)
}

I divided the implementation into seven parts: 1. use the CIPixellate filter to fully pixellate the source image; 2. detect the faces and save them in faceFeatures; 3. initialize the mask image and traverse all detected faces; 4. since we need a mask for each face based on its position, first compute the center of the face (its x and y coordinates), then derive a radius from the face's width or height, and use these results to initialize a CIRadialGradient filter (I set the alpha of inputColor1 to 0 to make everything outside the mask transparent, because I don't care about colors outside the mask; the example on Apple's site sets this value to 1); 5. since the CIRadialGradient filter creates an image of infinite extent, it must be cropped before use (the Apple example does not crop it), and the masks of the individual faces are then combined; 6. use the CIBlendWithMask filter to blend the pixellated image, the source image, and the mask; 7. render and display the result:
A simple mosaic processing example is complete.
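The role of the mask in step 6 can be modeled per pixel: where the mask is opaque the pixellated foreground wins, and where it is transparent the original background shows through. A hedged sketch (the blend function below is a conceptual simplification, not Core Image's implementation):

```swift
import Foundation

// Linear interpolation between foreground and background driven by the
// mask value, the idea behind CIBlendWithMask.
func blend(foreground: Double, background: Double, mask: Double) -> Double {
    return foreground * mask + background * (1 - mask)
}

print(blend(foreground: 1.0, background: 0.2, mask: 1.0))  // 1.0: inside a face circle
print(blend(foreground: 1.0, background: 0.2, mask: 0.0))  // 0.2: outside the mask
```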

I will keep the code updated on GitHub.
UPDATE: careful readers will notice that the mosaic area is larger than the detected area:

This is because the scale factor was not taken into account when computing the mosaic radius. Just compute the scale first, then multiply it by the current radius to get the exact range. Computing the scale:

let scale = min(imageView.bounds.size.width / inputImage.extent().size.width,
                imageView.bounds.size.height / inputImage.extent().size.height)

Modify radius:

let radius = min(faceFeature.bounds.size.width, faceFeature.bounds.size.height) * scale

The corrected mosaic and face detection effects:
