iOS Image Processing with Core Image Filters
Abstract: This article describes how to use Core Image, the image processing framework for OS X and iOS. It is well suited for beginners learning how to create and use the built-in filters that Core Image provides on iOS.
This article introduces Core Image, an image processing framework available on OS X and iOS, to beginners.
If you want to follow along with the code in this article, you can download the sample project on GitHub. The sample project is an iOS app that lists a large number of the image filters provided by the system, and offers a user interface for adjusting their parameters and observing the effects.
Although the sample code is written in Swift for iOS, the concepts translate readily to Objective-C and OS X.
Basic Concepts
Before we talk about Core Image, we first need to introduce a few basic concepts.
A filter is an object that has a number of inputs and outputs and performs some kind of transformation. For example, a blur filter might take an input image and a blur radius and produce an appropriately blurred output image.
A filter graph is a network of filters chained together (a directed acyclic graph) so that the output of one filter can become the input of another. In this way, elaborate effects can be achieved. Below, we will see how to connect filters to create a vintage photo effect.
Getting Familiar with the Core Image API
With these concepts in hand, we can start exploring the details of image filtering with Core Image.
Core Image Architecture
Core Image has a plug-in architecture, which means it allows users to write custom filters and integrate them with the filters provided by the system. Core Image's extensibility is not used in this article; I mention it only because it affects the framework's API.
Core Image is designed to make the most of the hardware it runs on. The actual implementation of each filter, the kernel, is written in a subset of GLSL (OpenGL's shading language). When multiple filters are connected into a filter graph, Core Image concatenates the kernels to build a single efficient program that can run on the GPU.
Whenever possible, Core Image defers work. Often, no allocation or processing takes place until the output of the last filter in the filter graph is requested.
To do its work, Core Image needs an object called a context. The context is where the framework does the heavy lifting: it allocates the necessary memory, and compiles and runs the filter kernels that perform the image processing. Contexts are expensive to create, so you will often want to create one context and reuse it. We will see how to create a context below.
Querying for Available Filters
Core Image filters are created by name. To get a list of the system filters, we ask Core Image for the filter names in the kCICategoryBuiltIn category:
let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn) as [String]
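If you just want to see what is available, you can print the list to the console. A minimal sketch, reusing the filterNames constant from above:

for name in filterNames {
    println(name) // e.g. "CIGaussianBlur", "CISepiaTone", ...
}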
The list of filters available on iOS is very nearly a subset of those available on OS X. There are 169 built-in filters on OS X and 127 on iOS.
Creating a Filter by Name
Now that we have the list of available filters, we can create and use one. For example, to create a Gaussian blur filter, we pass the corresponding filter name to the CIFilter initializer:
let blurFilter = CIFilter(name: "CIGaussianBlur")
Setting Filter Parameters
Because of Core Image's plug-in architecture, most filter properties are not set directly, but rather through key-value coding (KVC). For example, to set the blur radius of the blur filter, we use KVC to set its inputRadius parameter:
blurFilter.setValue(10.0, forKey: "inputRadius")
Since this method takes AnyObject? (id in Objective-C) as its value parameter, it is not type-safe. Therefore, you need to be cautious when setting filter parameters to ensure that the values you pass are of the correct type.
Querying Filter Attributes
To find out what input and output parameters a filter offers, we can ask for its inputKeys and outputKeys arrays, respectively. Both return an array of NSStrings.
To get more detail about each parameter, we can look in the attributes dictionary provided by the filter. Each input and output parameter name maps to a dictionary of its own, describing what kind of parameter it is and, where applicable, its minimum and maximum values. For example, here is the dictionary for the inputBrightness parameter of the CIColorControls filter:
inputBrightness = {
    CIAttributeClass = NSNumber;
    CIAttributeDefault = 0;
    CIAttributeIdentity = 0;
    CIAttributeMin = -1;
    CIAttributeSliderMax = 1;
    CIAttributeSliderMin = -1;
    CIAttributeType = CIAttributeTypeScalar;
};
For numeric parameters, the dictionary contains kCIAttributeSliderMin and kCIAttributeSliderMax keys that bound the expected input values. Most parameters also have a kCIAttributeDefault key mapped to the parameter's default value.
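For example, here is one way to read those bounds for the inputBrightness parameter shown above. This is a sketch; depending on the SDK version, the attributes accessor may be bridged into Swift as a method, attributes(), rather than a property:

let colorControlsFilter = CIFilter(name: "CIColorControls")
if let brightnessAttrs = colorControlsFilter.attributes["inputBrightness"] as? [String: AnyObject] {
    let sliderMin = brightnessAttrs[kCIAttributeSliderMin] as? NSNumber   // -1
    let sliderMax = brightnessAttrs[kCIAttributeSliderMax] as? NSNumber   // 1
    let defaultValue = brightnessAttrs[kCIAttributeDefault] as? NSNumber  // 0
    println("inputBrightness: \(sliderMin) to \(sliderMax), default \(defaultValue)")
}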
Image Filtering in Practice
Filtering an image consists of three parts: building and configuring a filter graph, supplying the image to be filtered, and retrieving the filtered image. The following sections cover these in detail.
Building a Filter Graph
Building a filter graph consists of instantiating the filters we need, setting their parameters, and wiring them up so that the image data flows through each filter in turn.
In this section, we will create a filter graph that gives images the look of a 19th-century tintype photo. We chain two effects together to achieve this: a monochrome filter that simultaneously desaturates and tints the image, and a vignette filter that creates a framing effect of shading around the edges.
Quartz Composer, downloadable from the Apple developer website, is very useful for prototyping Core Image filter graphs. Below, we composed the desired photo filter by stringing together the Color Monochrome filter and the Vignette filter.
Once the effect is to our satisfaction, we can re-create the filter graph in code:
let sepiaColor = CIColor(red: 0.76, green: 0.65, blue: 0.54)
let monochromeFilter = CIFilter(name: "CIColorMonochrome",
    withInputParameters: ["inputColor": sepiaColor, "inputIntensity": 1.0])
monochromeFilter.setValue(inputImage, forKey: "inputImage")

let vignetteFilter = CIFilter(name: "CIVignette",
    withInputParameters: ["inputRadius": 1.75, "inputIntensity": 1.0])
vignetteFilter.setValue(monochromeFilter.outputImage, forKey: "inputImage")

let outputImage = vignetteFilter.outputImage
Note that the output image of the monochrome filter becomes the input image of the vignette filter, which causes the vignette to be applied to the tinted monochrome image. Also note that we can specify parameters at initialization time instead of setting them individually with KVC.
Creating the Input Image
Core Image filters require their input image to be of type CIImage. For iOS programmers, this may be a little unusual, since they are more accustomed to UIImage, but the distinction is worthwhile. A CIImage instance is actually more general than a UIImage, because a CIImage can be infinite in extent. Of course, we cannot store an infinite image in memory, but conceptually, this means you can ask for image data from any region of the 2D plane and get back a meaningful result.
All the images we use in this article are finite, and it is easy to create a CIImage from a UIImage. In fact, it takes just one line of code:
let inputImage = CIImage(image: uiImage)
There are also convenience initializers for creating a CIImage directly from image data or a file URL.
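For example, assuming a placeholder file path, either of the following would work as a sketch:

// From a file URL (the path is a placeholder):
let urlImage = CIImage(contentsOfURL: NSURL(fileURLWithPath: "/path/to/photo.jpg"))

// From raw image data:
if let imageData = NSData(contentsOfFile: "/path/to/photo.jpg") {
    let dataImage = CIImage(data: imageData)
}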
Once we have a CIImage, we can set it as the filter's input image by setting the filter's inputImage parameter:
filter.setValue(inputImage, forKey: "inputImage")
Retrieving the Filtered Image
Filters all have a property named outputImage which, as you may have guessed, is of type CIImage. So how do we perform the inverse operation, creating a UIImage from a CIImage? Although we have spent all of our time so far building a filter graph, now is the moment to invoke the power of CIContext and actually do the work of filtering the image.
The simplest way to create a context is to pass a nil options dictionary to its constructor:
let ciContext = CIContext(options: nil)
To get a filtered image out, we ask the CIContext to create a CGImage from a rect of the output image, passing in the bounds (extent) of the input image:
let cgImage = ciContext.createCGImage(filter.outputImage, fromRect: inputImage.extent())
The reason we use the input image's extent is that the output image often has different dimensions than the input image. For example, a blurred image has some extra pixels extending beyond its border, because the sampling reaches past the edges of the input image.
Now we can create a UIImage from the newly created CGImage:
let uiImage = UIImage(CGImage: cgImage)
It is possible to create a UIImage directly from a CIImage, but this approach is fraught: if you try to display such an image in a UIImageView, its contentMode property will be ignored. Going through a transitional CGImage takes an extra step, but avoids this annoyance.
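Putting the steps of this section together, a small helper along these lines could run a single filter over a UIImage. This is a hypothetical convenience function, not part of the sample project; the context is passed in so that the expensive CIContext can be created once and reused:

func filteredImage(image: UIImage, filterName: String, context: CIContext) -> UIImage {
    let inputImage = CIImage(image: image)            // wrap the UIImage
    let filter = CIFilter(name: filterName)           // e.g. "CISepiaTone"
    filter.setValue(inputImage, forKey: "inputImage")
    let cgImage = context.createCGImage(filter.outputImage, fromRect: inputImage.extent())
    return UIImage(CGImage: cgImage)                  // back to UIKit land
}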
Improving Performance with OpenGL
It is time-consuming and wasteful to use the CPU to draw a CGImage only to hand the result right back to UIKit for compositing. We would prefer to draw the filtered image on screen without a round trip through Core Graphics. Fortunately, because of the interoperability between OpenGL and Core Image, we can do exactly that.
To share resources between an OpenGL context and a Core Image context, we need to create our CIContext in a slightly different way:
let eaglContext = EAGLContext(API: .OpenGLES2)
let ciContext = CIContext(EAGLContext: eaglContext)
Here, an EAGLContext is created with the OpenGL ES 2.0 feature set. This GL context can be used as the backing context for a GLKView or for drawing into a CAEAGLLayer. The sample code uses this technique to draw images efficiently.
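For instance, hooking the shared context up to a GLKView might look like this (a sketch; the frame is a placeholder):

let glkView = GLKView(frame: CGRect(x: 0, y: 0, width: 320, height: 480), context: eaglContext)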
When a CIContext has an associated GL context, a filtered image can be drawn with OpenGL, like so:
ciContext.drawImage(filter.outputImage, inRect: outputBounds, fromRect: inputBounds)
As before, the fromRect parameter is the portion of the image to draw, in the coordinate space of the filtered image. The inRect parameter is the rectangle, in the coordinate space of the GL context, into which the image should be drawn. If you want to preserve the aspect ratio of the image, you may need to do some math to compute the appropriate inRect, as sketched below.
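Here is one possible aspect-fit computation, assuming outputBounds describes the drawable area in the GL context's coordinate space:

func aspectFitRect(imageExtent: CGRect, bounds: CGRect) -> CGRect {
    // Scale uniformly so the image fits entirely inside bounds.
    let scale = min(bounds.width / imageExtent.width, bounds.height / imageExtent.height)
    let width = imageExtent.width * scale
    let height = imageExtent.height * scale
    // Center the scaled image within bounds.
    let x = bounds.origin.x + (bounds.width - width) / 2
    let y = bounds.origin.y + (bounds.height - height) / 2
    return CGRect(x: x, y: y, width: width, height: height)
}

let inRect = aspectFitRect(filter.outputImage.extent(), outputBounds)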
Forcing Filters to Run on the CPU
Whenever possible, Core Image performs filtering on the GPU. However, it can fall back to CPU execution. Filtering on the CPU can be more accurate, because GPUs often trade precision for speed in their floating-point math. You can force Core Image to run on the CPU by setting the kCIContextUseSoftwareRenderer key to true in the options dictionary when creating a context, as shown below.
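A sketch of creating such a CPU-bound context:

// Forces all filtering through the software renderer on the CPU.
let cpuContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])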
You can determine whether CPU or GPU rendering is in use by setting the CI_PRINT_TREE environment variable to 1 in your scheme configuration in Xcode. This causes Core Image to print diagnostic information every time it renders a filtered image. This setting is also useful for examining the composed filter tree.
A Tour of the Sample App
The sample code for this article is an iPhone app that shows off a broad assortment of the Core Image filters available on iOS.
Creating a GUI for Filter Parameters
To demonstrate as many filters as possible, the sample app uses Core Image's introspection features to generate an interface for controlling the parameters of the filters it supports.
The sample app is restricted to filters that take a single input image and zero or more numeric inputs. There are some interesting filters that do not fall into this category (notably, the compositing and transition filters). Even so, the app gives a good overview of the functionality Core Image supports.
For each input parameter of a filter, a slider is configured with the minimum and maximum values of the parameter, and its value is set to the parameter's default. When the slider's value changes, it conveys the change to its delegate, a UIImageView subclass that holds a reference to a CIFilter. One way to wire this up is sketched below.
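A sketch of configuring such a slider from a parameter's attribute dictionary (the parameter name and fallback values here are assumptions for illustration):

if let attrs = filter.attributes["inputRadius"] as? [String: AnyObject] {
    let slider = UISlider()
    slider.minimumValue = (attrs[kCIAttributeSliderMin] as? NSNumber)?.floatValue ?? 0
    slider.maximumValue = (attrs[kCIAttributeSliderMax] as? NSNumber)?.floatValue ?? 1
    slider.value = (attrs[kCIAttributeDefault] as? NSNumber)?.floatValue ?? slider.minimumValue
}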
Using the Built-in Photo Filters
Alongside the many other built-in filters, the sample app also shows off the photo filters introduced in iOS 7. These filters have no parameters for us to adjust, but they earn their place because they demonstrate how to emulate the effects of the Photos app on iOS.
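Since they take no inputs other than the image, applying one is as simple as any other filter. A sketch using one of the built-in photo effect names ("CIPhotoEffectTransfer") and the inputImage from earlier:

let photoEffectFilter = CIFilter(name: "CIPhotoEffectTransfer")
photoEffectFilter.setValue(inputImage, forKey: "inputImage")
let effectImage = photoEffectFilter.outputImage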
Conclusion
This article has been a brief introduction to Core Image, a high-performance image processing framework. We have tried to present as many of its features as is feasible in this short format. You now know how to instantiate and chain Core Image filters, pass images into and out of a filter graph, and adjust parameters to get the result you want. You also learned how to access the system-provided photo filters that emulate the behavior of the Photos app on iOS.
Now you know enough to go write your own photo editing app. With a little more exploration, you will be able to write your own filters that exploit the amazing power of your Mac or iPhone to perform previously unimagined effects. Go do it!