An In-Depth Look at Core Image, the OS X and iOS Image Processing Framework


Transferred from: http://www.csdn.net/article/2015-02-13/2823961-core-image

Summary: This article introduces Core Image, the OS X and iOS image processing framework, through examples that show how to create and use its built-in filters on iOS; it is well suited to beginners. Although the sample code is an iOS program written in Swift, the concepts translate easily to Objective-C and OS X.

This article will introduce beginners to Core Image, a graphics processing framework for OS X and iOS.

If you want to follow along with the code in this article, you can download the sample project on GitHub. The sample project is an iOS application that lists a large number of system-provided image filters and offers a user interface for adjusting their parameters and observing the effects.

Although the sample code is an iOS program written in Swift, the concepts translate easily to Objective-C and OS X.

Basic Concepts

Speaking of Core Image, we first need to introduce a few basic concepts.

A filter is an object that has a number of inputs and outputs and performs some transformation. For example, a blur filter might take an input image and a blur radius and produce an appropriately blurred output image.

A filter graph is a network of filters chained together (a directed acyclic graph) so that the output of one filter can be the input of another. In this way, elaborate effects can be achieved. Below we will see how to connect filters to create a vintage photo effect.

Getting familiar with the Core Image API

With these concepts in place, we can start exploring the details of image filtering with Core Image.

Core Image Architecture

Core Image has a plug-in architecture, which means it allows users to write custom filters that integrate with the system-provided filters to extend its functionality. We don't make use of Core Image's extensibility in this article; I mention it only because it affects the framework's API.

Core Image tries to make maximal use of the hardware it runs on. The actual implementation of each filter, the kernel, is written in a subset of GLSL (OpenGL's shading language). When multiple filters are connected into a filter graph, Core Image strings the kernels together to build a single efficient program that can run on the GPU.

Core Image defers work whenever possible. Typically, no allocation or processing occurs until the output of the last filter in the filter graph is requested.

To do its work, Core Image requires an object called a context. The context is where the framework really works: it allocates the necessary memory and compiles and runs the filter kernels that perform the image processing. Creating a context is very expensive, so you will often want to create one context and reuse it. Next we'll see how to create a context.
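
A minimal sketch of that reuse pattern follows; the `sharedContext` constant and `renderFiltered` helper are illustrative names, not part of the sample project, and the APIs they call are covered later in this article.

```swift
import UIKit
import CoreImage

// Illustrative only: a single context created once and reused for
// every render, so kernel compilation and memory allocation costs
// are paid only one time.
let sharedContext = CIContext(options: nil)

func renderFiltered(filter: CIFilter, extent: CGRect) -> UIImage {
    // Reuse the shared context instead of creating a new one per image.
    let cgImage = sharedContext.createCGImage(filter.outputImage, fromRect: extent)
    return UIImage(CGImage: cgImage)
}
```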

Querying for available filters

Core Image filters are created by name. To get a list of the system filters, we ask Core Image for the filter names in the kCICategoryBuiltIn category:

    let filterNames = CIFilter.filterNamesInCategory(kCICategoryBuiltIn) as [String]

The list of filters available on iOS is very nearly a subset of those available on OS X. There are 169 built-in filters on OS X and 127 on iOS.

Create a filter by name

Now that we have a list of available filters, we can create and use a filter. For example, to create a Gaussian blur filter, we pass the corresponding name to the CIFilter initializer:

    let blurFilter = CIFilter(name: "CIGaussianBlur")

Set Filter Parameters

Because of Core Image's plug-in architecture, most filter properties are not set directly, but through key-value coding (KVC). For example, to set the blur radius of the blur filter, we use KVC to set its inputRadius property:

    blurFilter.setValue(10.0, forKey: "inputRadius")

Because this method takes AnyObject? (i.e., id in Objective-C) as its value parameter, it is not type-safe. Therefore, you should be careful when setting filter parameters, to ensure that the type of the value you pass is correct.
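
One defensive approach, purely illustrative and not required by the API, is to consult the filter's attribute dictionary (described in the next section) and confirm the declared class before setting a value:

```swift
import CoreImage

// Hypothetical type check before a KVC set: CIGaussianBlur's
// inputRadius attribute declares its expected class as NSNumber.
let blurFilter = CIFilter(name: "CIGaussianBlur")
let attrs = blurFilter.attributes()["inputRadius"] as? [String: AnyObject]
if attrs?[kCIAttributeClass] as? String == "NSNumber" {
    blurFilter.setValue(10.0, forKey: "inputRadius")
}
```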

Query Filter Properties

To find out what input and output parameters a filter offers, we can ask for its inputKeys and outputKeys arrays, respectively. Both return an array of NSStrings.

To get more information about each parameter, we can look at the attributes dictionary provided by the filter. Each input and output parameter name maps to its own dictionary, describing what kind of parameter it is and, where applicable, its minimum and maximum values. For example, here is the dictionary for the inputBrightness parameter of the CIColorControls filter:

    inputBrightness = {
        CIAttributeClass = NSNumber;
        CIAttributeDefault = 0;
        CIAttributeIdentity = 0;
        CIAttributeMin = -1;
        CIAttributeSliderMax = 1;
        CIAttributeSliderMin = -1;
        CIAttributeType = CIAttributeTypeScalar;
    };

For numeric parameters, the dictionary contains kCIAttributeSliderMin and kCIAttributeSliderMax keys that bound the expected input range. Most parameters also contain a kCIAttributeDefault key that maps to the parameter's default value.
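
These keys make it straightforward to drive a UI from the filter's own metadata. As a sketch (the UISlider here is hypothetical; the "inputBrightness" parameter is the CIColorControls example shown above):

```swift
import UIKit
import CoreImage

// Sketch: configure a slider from the attribute dictionary of
// CIColorControls' inputBrightness parameter.
let filter = CIFilter(name: "CIColorControls")
if let attrs = filter.attributes()["inputBrightness"] as? [String: AnyObject] {
    let slider = UISlider()
    slider.minimumValue = (attrs[kCIAttributeSliderMin] as? NSNumber)?.floatValue ?? 0
    slider.maximumValue = (attrs[kCIAttributeSliderMax] as? NSNumber)?.floatValue ?? 1
    slider.value = (attrs[kCIAttributeDefault] as? NSNumber)?.floatValue ?? 0
}
```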

Photo Filters in Practice

The work of filtering an image breaks down into three parts: building and configuring the filter graph, sending in the image to be filtered, and retrieving the filtered image. The sections below cover each in detail.

Build a filter graph

Building a filter graph consists of instantiating the filters we need, setting their parameters, and wiring them up so that the image data passes through each filter in turn.

In this section, we will create a filter graph used to give images the style of a 19th-century tintype photograph. We chain two effects together to achieve this: a monochrome filter that simultaneously desaturates and tints the image, and a vignette filter that creates the shadowed effect of a framed picture.

Quartz Composer, a useful tool for prototyping Core Image filter graphs, can be downloaded from the Apple developer website. Below, we have put together the photo effect we want by stringing together a color monochrome filter and a vignette filter:

Once we are satisfied with the effect, we can recreate the filter graph in code:

    let sepiaColor = CIColor(red: 0.76, green: 0.65, blue: 0.54)
    let monochromeFilter = CIFilter(name: "CIColorMonochrome",
        withInputParameters: ["inputColor": sepiaColor, "inputIntensity": 1.0])
    monochromeFilter.setValue(inputImage, forKey: "inputImage")

    let vignetteFilter = CIFilter(name: "CIVignette",
        withInputParameters: ["inputRadius": 1.75, "inputIntensity": 1.0])
    vignetteFilter.setValue(monochromeFilter.outputImage, forKey: "inputImage")

    let outputImage = vignetteFilter.outputImage

It is important to note that the monochrome filter's output image becomes the vignette filter's input image. This causes the vignette to be applied to the tinted monochrome image. Also note that we can specify parameters at initialization time, rather than setting them individually with KVC.

Create an input image

Core Image filters require their input image to be of type CIImage. This may be a little unusual for iOS programmers, who are more accustomed to UIImage, but the distinction is worthwhile. A CIImage instance is actually more comprehensive than a UIImage, because a CIImage can be infinitely large. Of course, we can't store an infinite image in memory, but conceptually this means that you can request image data from any region of the 2D plane and get a meaningful result.

All of the images we use in this article are finite, and it's easy to create a CIImage from a UIImage. In fact, it takes just one line of code:

    let inputImage = CIImage(image: uiImage)

There are also convenience initializers for creating a CIImage directly from image data or a file URL.
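
For example (the `imageURL` and `imageData` values here are assumed to be defined elsewhere in the surrounding code):

```swift
import CoreImage

// Both initializers below are provided by CIImage; imageURL and
// imageData are assumed inputs for illustration.
let imageFromURL = CIImage(contentsOfURL: imageURL)
let imageFromData = CIImage(data: imageData)
```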

Once we have a CIImage, we can set it as the filter's input image by setting the filter's inputImage parameter:

    filter.setValue(inputImage, forKey: "inputImage")

Get the filtered image

Filters have a property named outputImage. As you may have guessed, it is of type CIImage. So how do we perform the reverse operation of creating a UIImage from a CIImage? Well, although we have spent all our time so far building the filter graph, now it is time to invoke the power of the CIContext and actually do the work of filtering the image.

The simplest way to create a context is to pass a nil options dictionary to its initializer:

    let ciContext = CIContext(options: nil)

To get an image out of the filter graph, we ask the CIContext to create a CGImage from a rectangle of the output image, passing in the bounds (extent) of the input image:

    let cgImage = ciContext.createCGImage(filter.outputImage, fromRect: inputImage.extent())

The reason we use the input image's extent is that the output image usually has different dimensions than the input image. For example, a blurred image has extra pixels extending beyond the bounds of its input, because the sampling extends past the input image's edges.

Now we can create a UIImage from the newly created CGImage:

    let uiImage = UIImage(CGImage: cgImage)

Creating a UIImage directly from a CIImage is also possible, but this approach is fraught: if you try to display such an image in a UIImageView, its contentMode property will be ignored. Going through an intermediate CGImage requires an extra step, but it spares us this annoyance.

Using OpenGL to improve performance

Drawing a CGImage with the CPU, only to hand the result back to UIKit for compositing, is time-consuming and wasteful. We would rather draw the filtered image to the screen without a round trip through Core Graphics. Fortunately, thanks to the interoperability of OpenGL and Core Image, we can.

To share resources between an OpenGL context and a Core Image context, we need to create our CIContext in a slightly different way:

    let eaglContext = EAGLContext(API: .OpenGLES2)
    let ciContext = CIContext(EAGLContext: eaglContext)

Here, we created an EAGLContext with the OpenGL ES 2.0 feature set. This GL context can be used as the backing context of a GLKView, or to draw into a CAEAGLLayer. The sample code uses this technique to draw images efficiently.

When a CIContext has an associated GL context, the filtered image can be drawn with OpenGL using the following method:

    ciContext.drawImage(filter.outputImage, inRect: outputBounds, fromRect: inputBounds)

As before, the fromRect parameter is the portion of the filtered image to draw, in the image's coordinate space. The inRect parameter is the rectangle, in the coordinate space of the GL context, into which the image will be drawn. If you want to preserve the image's aspect ratio, you may need to do a little math to compute the appropriate inRect.
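
One way to do that math is to scale the image's extent to fit the drawable bounds and center it. The `aspectFitRect` helper below is hypothetical, not part of Core Image or the sample project:

```swift
import CoreGraphics

// Hypothetical helper: compute an aspect-fit inRect by scaling the
// image extent to fit inside the target bounds, centered.
func aspectFitRect(imageExtent: CGRect, bounds: CGRect) -> CGRect {
    let scale = min(bounds.width / imageExtent.width,
                    bounds.height / imageExtent.height)
    let width = imageExtent.width * scale
    let height = imageExtent.height * scale
    return CGRect(x: bounds.midX - width / 2,
                  y: bounds.midY - height / 2,
                  width: width, height: height)
}
```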

Forcing filter operations on the CPU

Whenever possible, Core Image performs filter operations on the GPU. However, it can fall back to running on the CPU. Filtering on the CPU can give better accuracy, because the GPU often trades precision for speed in its floating-point computations. When creating a context, you can force Core Image to run on the CPU by setting the value of the kCIContextUseSoftwareRenderer key to true.
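
In code, that amounts to passing the option when the context is created (a one-line sketch; the `cpuContext` name is illustrative):

```swift
import CoreImage

// Force software (CPU) rendering by passing the option at creation time.
let cpuContext = CIContext(options: [kCIContextUseSoftwareRenderer: true])
```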

You can find out whether the CPU or GPU is being used for rendering by setting the CI_PRINT_TREE environment variable in your scheme configuration in Xcode. This causes Core Image to print diagnostic information each time a filtered image is rendered. The setting is also useful for examining the composed filter tree.

The sample app at a glance

The sample code for this article is an iPhone app that showcases a wide variety of the Core Image filters available on iOS.

Create a GUI for filter parameters

To demonstrate as many filters as possible, the sample app takes advantage of Core Image's introspection capabilities to generate an interface for controlling the parameters of the filters it supports:

The sample app is limited to filters with a single image input and zero or more numeric inputs. There are also some interesting filters that don't fit this category (notably the compositing and transition filters). Even so, the app provides a good overview of the functionality supported by Core Image.

For each of a filter's input parameters, there is a slider configured with the parameter's minimum and maximum values, with its value set to the parameter's default. When the slider's value changes, it passes the new value to its delegate, a UIImageView subclass that holds a reference to a CIFilter.

Use the built-in photo filter

In addition to many other built-in filters, the sample app also shows the photo filters introduced in iOS 7. These filters have no parameters we can adjust, but they are worth including because they show how to simulate the effects of the Photos app in iOS:

Conclusion

This article has provided a brief introduction to Core Image, a high-performance image processing framework. We have tried to demonstrate as much of the framework's functionality as possible in this short format. You have now learned how to instantiate and chain Core Image filters, pass images in and out of a filter graph, and tune parameters to get the result you want. You also learned how to access the system-provided photo filters, which simulate the behavior of the Photos app on iOS.

Now you know enough to write your own photo-editing app. With a little more exploration, you can write your own filters and harness the amazing power of your Mac or iPhone to achieve previously unimaginable effects. Go do it!

Reference

    • The Core Image Reference Collection is the authoritative set of documentation for Core Image.
    • The Core Image Filter Reference contains a complete list of the image filters provided by Core Image, along with usage examples.
    • If you'd like to write Core Image code in a more functional style, take a look at Florian Kluger's article in objccn.io issue #16.
This article was reproduced from objccn.io.
