iOS 8 Core Image in Swift: Auto Enhancement and Built-in Filter Usage
Based on iOS SDK 8.0 and Xcode 6 Beta 6.
Core Image is a powerful framework. It lets you easily apply filters to images, such as changing their brightness, color, or exposure. It uses the GPU (or CPU) to process image data and video frames very quickly, even in real time, and it hides all the details of the underlying graphics processing behind a simple API. You don't need to worry about how OpenGL or OpenGL ES squeezes the most out of the GPU, and you don't need to know what role GCD plays in it: Core Image handles all of those details.
The Core Image framework provides us with these things:
Built-in image filters
Feature detection (such as face recognition)
Automatic image enhancement
The ability to combine multiple filters into a custom filter
Automatic image enhancement
First, a simple example. Create a project with the Single View Application template. The new project contains only an AppDelegate, a ViewController, and a Main.storyboard, with the ViewController already set up in Main.storyboard. Place a UIImageView in this ViewController and adjust its frame:
In addition, because a UIImageView stretches its image by default and we don't want it deformed, set its contentMode to Aspect Fit.
Finally, drag in two buttons: one to display the source image and one to auto-enhance it. The whole ViewController looks like this:
Next, add the corresponding IBAction methods and IBOutlet properties to ViewController:
class ViewController: UIViewController {
    @IBOutlet var imageView: UIImageView!
    lazy var originalImage: UIImage = {
        return UIImage(named: "Image")
    }()
......
Besides the imageView connected in the storyboard, there is an originalImage property that is loaded lazily, only once; it will be used many times later. I use the same image throughout the project.
Then viewDidLoad looks like this:
override func viewDidLoad() {
    super.viewDidLoad()
    self.imageView.layer.shadowOpacity = 0.8
    self.imageView.layer.shadowColor = UIColor.blackColor().CGColor
    self.imageView.layer.shadowOffset = CGSize(width: 1, height: 1)
    self.imageView.image = originalImage
}
It does only two things: adds a shadow border to the imageView so it looks nicer, and assigns originalImage to the imageView.
Display the source image with one line of code:
@IBAction func showOriginalImage() {
    self.imageView.image = originalImage
}
Below is the auto-enhancement code. I'll show the code first, then a look at the effect:
@IBAction func autoAdjust() {
    var inputImage = CIImage(image: originalImage)
    let filters = inputImage.autoAdjustmentFilters() as [CIFilter]
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    self.imageView.image = UIImage(CIImage: inputImage)
}
After connecting the IBAction and IBOutlet, you will see the following result when running:
Tap "auto-enhance" to see the effect. It may be slow, because we haven't done any optimization yet. If the improvement doesn't seem obvious, tap back and forth between it and the source image a few times to compare.
Although there are some problems, they don't stop us from continuing to explore. The auto-enhancement code above explicitly uses two classes (why the word "explicitly"? more on that later): CIImage and CIFilter, where:
CIImage: a model object that holds the raw data used to build an image.
CIFilter: a filter. Different CIFilter instances represent different effects, and each filter accepts its own set of parameters.
Core Image's auto enhancement works by analyzing an image's histogram, face regions, and metadata; you only need to pass in an image and you get back a set of filters that can improve it. A CIImage instance can be created from a UIImage, and then one of two APIs, autoAdjustmentFilters and autoAdjustmentFiltersWithOptions:, returns the array of enhancement filters. In most cases you will want the variant that takes an options dictionary, because it lets you:
Set the image's orientation, which matters especially for filters such as CIRedEyeCorrection and CIFaceBalance, since Core Image needs precise face detection.
Apply only red-eye removal (set kCIImageAutoAdjustEnhance to false).
Apply every filter except red-eye removal (set kCIImageAutoAdjustRedEye to false).
If you want to supply the options dictionary, use it like this:
NSDictionary *options = @{ CIDetectorImageOrientation : [[image properties] valueForKey:kCGImagePropertyOrientation] };
NSArray *adjustments = [myImage autoAdjustmentFiltersWithOptions:options];
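The snippet above is Objective-C; in the Swift 1.0 syntax this article uses, the same call might be sketched like this. The orientation value 6 here is just an example EXIF orientation constant, not taken from the original project:

```swift
import CoreImage

// Sketch: request adjustment filters, passing the image orientation so
// face-dependent filters (CIRedEyeCorrection, CIFaceBalance) can detect
// faces correctly. Restrict the result to red-eye removal only by
// disabling the general enhancement filters.
let options: [NSObject: AnyObject] = [
    CIDetectorImageOrientation: 6,     // example EXIF orientation value
    kCIImageAutoAdjustEnhance: false   // red-eye correction only
]
let adjustments = inputImage.autoAdjustmentFiltersWithOptions(options) as [CIFilter]
```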
In this example I won't pass an options dictionary, since image orientation isn't involved. Want to know which filters the auto-enhancement feature uses? Just print the filter objects. Generally these five filters are used:
CIRedEyeCorrection: repairs red-eye caused by the camera flash.
CIFaceBalance: adjusts skin tone.
CIVibrance: boosts saturation without affecting skin tones.
CIToneCurve: improves the image's contrast.
CIHighlightShadowAdjust: improves shadow detail.
In most cases these filters are enough. As mentioned earlier, different CIFilters take different parameters. To learn a filter's parameters, you can call its inputKeys method to get the list of input parameters it supports, call its outputKeys method to get its output parameters (generally we only use outputImage), or call its attributes method to get all of its information: name, categories, input parameters, output parameters, and each parameter's value range and default. Call a CIFilter's inputKeys method to see its input parameters:
for filter: CIFilter in filters {
    let inputKeys = filter.inputKeys()
    println(filter.name())
    println(inputKeys)
    ......
Print result:
CIFaceBalance
[inputImage, inputOrigI, inputOrigQ, inputStrength, inputWarmth]
CIVibrance
[inputImage, inputAmount]
CIToneCurve
[inputImage, inputPoint0, inputPoint1, inputPoint2, inputPoint3, inputPoint4]
CIHighlightShadowAdjust
[inputImage, inputRadius, inputShadowAmount, inputHighlightAmount]
Almost all filters have an inputImage input parameter. We can set parameters using the keys the system predefines (such as kCIInputImageKey); most common keys already exist. If you find a key the system hasn't predefined, you can use the key name's string directly as the key, for example:
filter.setValue(inputImage, forKey: kCIInputImageKey)
// the two lines set the parameter in the same way
filter.setValue(inputImage, forKey: "inputImage")
For the auto-enhancement feature we don't need to know any more detail; just set inputImage.
Now let's fix the problems.
The code above has two problems: first, every run of auto enhancement is noticeably slow; second, the image is deformed after enhancement. Compare the source image with the enhanced one:
I set the UIImageView's contentMode to Aspect Fit, which should scale the image proportionally rather than deform it. Also, if you set the UIImageView's background color to red, the red shows around the source image but not around the enhanced one. Apple states that UIImage fully supports CIImage, and the documentation doesn't explain the cause of this problem. I found this post helpful:
http://stackoverflow.com/questions/15878060/setting-uiimageview-content-mode-after-applying-a-cifilter
A UIImage obtained through UIImage(CIImage:) is not a standard CGImage-backed UIImage, so it can't be displayed according to the usual rules; we need to obtain a real UIImage another way. The solution is described below.
We used the word "explicitly" earlier when introducing CIImage and CIFilter because those two classes appear directly in the code: CIImage supplies the image data and CIFilter supplies the filter. Core Image needs one more object to glue the two together. That object is CIContext.
CIContext is the key to how Core Image processes images. It is similar to Core Graphics' CGContext, but unlike CGContext, a CIContext can be reused; you don't need to create a new one each time, and every output CIImage needs one. The example above never mentions CIContext, yet Core Image implicitly creates one internally when UIImage(CIImage:) is called; the work we should do by hand gets done automatically. The problem is that every call to UIImage(CIImage:) creates and destroys a new CIContext object. Done once this has little impact, but when filters are applied repeatedly it hurts performance badly. To avoid this, we reuse the CIContext object by giving ViewController a lazily loaded property:
lazy var context: CIContext = {
    return CIContext(options: nil)
}()
CIContext takes an options dictionary at initialization. Passing kCIContextUseSoftwareRenderer creates a CPU-based CIContext; by default, a GPU-based one is created. The difference is that the GPU-based context processes images faster, while the CPU-based context can handle larger images and can keep processing in the background. Here we pass nil to create a GPU-based CIContext.
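For comparison, a CPU-based context can be created by passing the renderer option explicitly. This is just a sketch; the GPU-based default is what this article uses:

```swift
import CoreImage

// CPU-based context: slower than the GPU path, but it can process
// larger images and keep rendering while the app is in the background.
let cpuContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])
```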
With a reusable CIContext object, create the UIImage like this:
@IBAction func autoAdjust() {
    var inputImage = CIImage(image: originalImage)
    let filters = inputImage.autoAdjustmentFilters() as [CIFilter]
    for filter: CIFilter in filters {
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        inputImage = filter.outputImage
    }
    // self.imageView.image = UIImage(CIImage: inputImage)
    let cgImage = context.createCGImage(inputImage, fromRect: inputImage.extent())
    self.imageView.image = UIImage(CGImage: cgImage)
}
The first run of auto enhancement is still a little slow (the CIContext object has to be created), but repeated runs perform much better, and the second problem, contentMode, is solved as well. Unless you have a special reason not to, you should always create a CGImage this way and convert the CGImage into a UIImage.
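If you like, this CGImage round trip can be wrapped in a small helper method on ViewController. This is a sketch; the name renderedUIImage is my own, not from the original project:

```swift
// Sketch: render a CIImage through the shared, reusable CIContext and
// return a CGImage-backed UIImage that respects contentMode.
func renderedUIImage(outputImage: CIImage) -> UIImage {
    let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
    return UIImage(CGImage: cgImage)
}
```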
Using the built-in filters
With CIFilter's class method filterNamesInCategory() you can get all the filters in a category:
func showFiltersInConsole() {
    let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn)
    println(filterNames.count)
    println(filterNames)
    for filterName in filterNames {
        let filter = CIFilter(name: filterName as String)
        let attributes = filter.attributes()
        println(attributes)
    }
}
Here the category parameter passed in is kCICategoryBuiltIn, which lists all of iOS 8 Core Image's filters:
There are 127 of them. Of course, not all of them are commonly used.
The kCICategoryColorEffect key fetches some common filters, like the ones in the iOS 7 Camera app. These generally need no parameters at all (though some can take a few); you only need to set inputImage as before. Print some of the filters in this category:
I picked one of these filters, the monochrome filter. It looks like a lot of content, but it isn't complicated: CIAttributeFilterDisplayName is its display name, and inputImage is its one parameter, whose details appear in a dictionary, including CIAttributeTypeImage and the parameter's class (CIImage). Then come the filter's categories; a filter can belong to several categories at once. Last is the string used to instantiate the filter, namely CIPhotoEffectMono. The CIPhotoEffectInstant above and the CIPhotoEffectNoir and CIPhotoEffectProcess below are all similar: each needs only an inputImage parameter. Easy, isn't it? These filters' parameters are preset internally and can be used directly. For convenience, we give ViewController a CIFilter property, instantiate the CIFilter in the various action methods, and use one shared method to display the filtered image:
class ViewController: UIViewController {
    @IBOutlet var imageView: UIImageView!
    lazy var originalImage: UIImage = {
        return UIImage(named: "Image")
    }()
    lazy var context: CIContext = {
        return CIContext(options: nil)
    }()
    var filter: CIFilter!
    ......
    // MARK: - Nostalgia
    @IBAction func photoEffectInstant() {
        filter = CIFilter(name: "CIPhotoEffectInstant")
        outputImage()
    }
    // MARK: - Black and white
    @IBAction func photoEffectNoir() {
        filter = CIFilter(name: "CIPhotoEffectNoir")
        outputImage()
    }
    // MARK: - Tonal
    @IBAction func photoEffectTonal() {
        filter = CIFilter(name: "CIPhotoEffectTonal")
        outputImage()
    }
    // MARK: - Years
    @IBAction func photoEffectTransfer() {
        filter = CIFilter(name: "CIPhotoEffectTransfer")
        outputImage()
    }
    // MARK: - Monochrome
    @IBAction func photoEffectMono() {
        filter = CIFilter(name: "CIPhotoEffectMono")
        outputImage()
    }
    // MARK: - Fade
    @IBAction func photoEffectFade() {
        filter = CIFilter(name: "CIPhotoEffectFade")
        outputImage()
    }
    // MARK: - Printing
    @IBAction func photoEffectProcess() {
        filter = CIFilter(name: "CIPhotoEffectProcess")
        outputImage()
    }
    // MARK: - Chrome
    @IBAction func photoEffectChrome() {
        filter = CIFilter(name: "CIPhotoEffectChrome")
        outputImage()
    }
    func outputImage() {
        println(filter)
        let inputImage = CIImage(image: originalImage)
        filter.setValue(inputImage, forKey: kCIInputImageKey)
        let outputImage = filter.outputImage
        let cgImage = context.createCGImage(outputImage, fromRect: outputImage.extent())
        self.imageView.image = UIImage(CGImage: cgImage)
    }
After writing these, wire up the buttons and their touch events in the UI. The UI looks like this:
Various filter effects after running:
That's the simple use of Core Image's built-in filters. If you don't need finer-grained control over a filter, this approach is enough.
GitHub
UPDATED
The // MARK: - comment in Swift plays the same role as #pragma mark - in Objective-C.