Core Image Programming Guide -- Graphics Programming Guide
I. Introduction
Core Image is a technology for processing and analyzing images, designed to provide near real-time processing of still and video images. Core Image hides the low-level graphics details and provides an easy-to-use application programming interface (API). You don't need to know the details of OpenGL ES, nor anything about GCD (Grand Central Dispatch); Core Image handles all of that for you.
1. The Core Image framework provides:
1) Access to built-in image processing filters
2) The ability to detect features
3) Support for automatic image enhancement
4) The ability to chain multiple filters to create a desired effect
2. Core Image provides more than 90 built-in filters on iOS and more than 120 on OS X. You set a filter's input parameters by providing key-value pairs. The output of one filter can be used as the input of another, so multiple filters can be chained to form a desired effect.
Filters fall into more than a dozen categories. Some are designed to achieve artistic effects, such as the stylize and halftone filter categories. Others are designed to fix image problems, such as the color adjustment and sharpen filter categories.
Core Image can analyze the quality of an image and provide a set of filters with optimized settings for adjusting such things as chroma, contrast, and hue.
Core Image can recognize facial features in still images and track them in video. Knowing where a face is lets you decide where to apply a filter.
3. Core Image has built-in documentation for its filters. You can query the system to find out which filters are available. Then, for each filter, you can get a dictionary containing its attributes, such as its input parameters, default parameter values, minimum and maximum values, display name, and so on.
II. Processing Images
Core Image has three classes that support image processing:
1) CIFilter: Represents an effect that has at least one input parameter and produces an output image.
2) CIImage: An immutable object that represents an image. You can synthesize image data, provide it from a file, or take it from the output of another CIFilter object.
3) CIContext: The object through which Core Image draws the results produced by a filter.
1. Overview
Let's look at an example:
CIContext *context = [CIContext contextWithOptions:nil];                      // 1. Create a CIContext object
CIImage *image = [CIImage imageWithContentsOfURL:myURL];                      // 2. Create a CIImage object
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];                  // 3. Create a filter and set its input parameters
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];
CIImage *result = [filter valueForKey:kCIOutputImageKey];                     // 4. Get the output image; it is a recipe for how to process the image and is not actually rendered yet
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]]; // 5. Render the CIImage to a CGImage
Important: Some filters produce images of infinite extent, such as those in the CICategoryTileEffect category. Before rendering, an infinite image must either be cropped (using the CICrop filter) or you must specify a rectangle of finite dimensions for rendering.
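A minimal sketch of the cropping approach, assuming an infinite-extent CIImage named tiledImage:
CIFilter *crop = [CIFilter filterWithName:@"CICrop"];
[crop setValue:tiledImage forKey:kCIInputImageKey];
// inputRectangle is a CIVector holding x, y, width, height.
[crop setValue:[CIVector vectorWithX:0 Y:0 Z:300 W:300] forKey:@"inputRectangle"];
CIImage *cropped = [crop valueForKey:kCIOutputImageKey];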
2. Built-in Filters:
A filter category specifies either the type of effect (blur, distortion, generator, and so on) or its intended usage (still images, video, nonsquare pixels, and so on). A filter can belong to more than one category. A filter also has a display name, which is shown to users, and a filter name, which is the name you use to access the filter in code.
Most filters have multiple input parameters that let you control how the processing is done. Each input parameter has an attribute class that specifies its data type, such as NSNumber. An input parameter can have other attributes, such as its default value, minimum and maximum allowable values, the parameter's display name, and any other attribute described in the CIFilter class.
For example, the CIColorMonochrome filter has three input parameters: the image to be processed, a monochrome color, and a color intensity. You must provide the image; the color and intensity are optional. Most filters, including CIColorMonochrome, supply default values for each nonimage parameter, as the sketch below illustrates.
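A minimal sketch of setting all three parameters, assuming an existing CIImage named image:
CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
[mono setValue:image forKey:kCIInputImageKey];                                       // required
[mono setValue:[CIColor colorWithRed:0.5 green:0.5 blue:0.5] forKey:@"inputColor"];  // optional; has a default
[mono setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputIntensity"];            // optional; has a default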
Core Image uses key-value coding (KVC), which means you can get and set a filter's properties through KVC.
3. Creating a Core Image Context:
1) Methods:
a) contextWithOptions: — renderer can be CPU or GPU; iOS only.
b) contextWithCGContext:options: and creation from an NSGraphicsContext — renderer can be CPU or GPU; OS X only.
c) contextWithEAGLContext: and contextWithEAGLContext:options: — renderer is the GPU; iOS only.
d) contextWithCGLContext:pixelFormat:options: and contextWithCGLContext:pixelFormat:colorSpace:options: — renderer is the GPU; OS X only.
2) When you don't need real-time performance, create a Core Image context like this:
CIContext *context = [CIContext contextWithOptions:nil];
If you want to specify whether the CPU or the GPU does the rendering, create an options dictionary and set the kCIContextUseSoftwareRenderer key to the appropriate Boolean value. CPU rendering is slower than GPU rendering, but with the GPU the resulting image cannot be displayed until it has been copied back to CPU memory and converted to another image type.
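For example, a sketch that forces CPU (software) rendering:
NSDictionary *cpuOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                                       forKey:kCIContextUseSoftwareRenderer];
CIContext *cpuContext = [CIContext contextWithOptions:cpuOptions];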
3) When you need real-time performance, create the CIContext from an EAGL context instead of using contextWithOptions: and specifying the GPU. The advantage is that the rendered image stays on the GPU and is never copied back to CPU memory. First create an EAGLContext:
myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
Then use the contextWithEAGLContext: method to create the CIContext object.
You should turn off color management (by supplying a null value for the working color space), because color management reduces performance:
NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
[options setObject:[NSNull null] forKey:kCIContextWorkingColorSpace];
myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];
4) Creating a CIContext on OS X
...
4. Creating a CIImage Object:
Always remember that a CIImage is only a recipe for producing an image; it does not actually produce any pixels until its result is rendered to a destination.
1) Creation methods:
a) When the image source is a URL: imageWithContentsOfURL: and imageWithContentsOfURL:options:
b) When the image source is a Quartz 2D image (CGImageRef): imageWithCGImage: (see the sketch after this list)
c) When the image source is bitmap data: imageWithBitmapData:bytesPerRow:size:format:colorSpace: and imageWithImageProvider:size:format:colorSpace:options:
d) When the image source is encoded data: imageWithData:
e) When the image source is an OpenGL texture: imageWithTexture:size:flipped:colorSpace:
f) ...
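For instance, a one-line sketch of option b), assuming an existing UIImage named myUIImage:
CIImage *ciImage = [CIImage imageWithCGImage:myUIImage.CGImage];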
5. Creating a CIFilter:
Use filterWithName: to obtain a filter; the name argument must exactly match the filter's filter name.
On iOS, when a filter is created, the default values of its input parameters are already set; on OS X, the input parameters are undefined at first, so you must call setDefaults to set the default values.
If you don't know a filter's input parameters, use inputKeys to get them, and setValue:forKey: to set a value, as sketched below.
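A minimal sketch of such discovery (the filter name here is just an example):
CIFilter *someFilter = [CIFilter filterWithName:@"CISepiaTone"];
for (NSString *key in [someFilter inputKeys]) {
    // Each entry in the attributes dictionary describes one parameter.
    NSLog(@"%@ = %@", key, [[someFilter attributes] objectForKey:key]);
}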
Let's look at an example that uses a filter to adjust the hue of an image.
First get the filter, which is named CIHueAdjust:
CIFilter *hueAdjust = [CIFilter filterWithName:@"CIHueAdjust"];
This filter has two input parameters: the input image and the input angle. The input angle relates to the position of the hue in the HSV and HLS color spaces, and can range from 0.0 to 2 pi: 0 indicates red, 2/3 pi indicates green, and 4/3 pi indicates blue.
[hueAdjust setValue:image forKey:@"inputImage"];
[hueAdjust setValue:[NSNumber numberWithFloat:2.094f] forKey:@"inputAngle"];
6. Getting the Output Image:
Use valueForKey: with the @"outputImage" key (kCIOutputImageKey) to get the output:
CIImage *result = [hueAdjust valueForKey:@"outputImage"];
7. Rendering the Resulting Output Image:
a) drawImage:inRect:fromRect: — on iOS, this method renders only through a CIContext created with contextWithEAGLContext:.
b) createCGImage:fromRect: and createCGImage:fromRect:format:colorSpace:
c) render:toBitmap:rowBytes:bounds:format:colorSpace:
d) createCGLayerWithSize:info: — available only on OS X.
e) render:toCVPixelBuffer: and render:toCVPixelBuffer:bounds:colorSpace: — available only on iOS.
A sketch of option b) follows.
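For example, a sketch of option b) on iOS, reusing the context and result from the overview; note that createCGImage:fromRect: returns a CGImage you own and must release:
CGImageRef cgImage = [context createCGImage:result fromRect:[result extent]];
UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);  // follows the Create rule, so release it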
8. Thread Safety:
CIContext and CIImage objects are immutable, which means they are thread-safe. However, CIFilter objects are mutable, so a CIFilter cannot be shared safely among threads. If your app is multithreaded, each thread must create its own CIFilter objects; otherwise your app could behave unexpectedly. A sketch follows.
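A minimal sketch of the per-thread rule (the queue and the shared inputImage are assumptions; sharing the immutable CIImage is safe):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Create a fresh CIFilter on this thread instead of sharing one.
    CIFilter *threadFilter = [CIFilter filterWithName:@"CISepiaTone"];
    [threadFilter setValue:inputImage forKey:kCIInputImageKey];
    CIImage *output = [threadFilter valueForKey:kCIOutputImageKey];
    // ... render output with a CIContext ...
});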
9. Chaining Filters:
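A minimal sketch of chaining, assuming an existing CIImage named image; the sepia output becomes the vignette input:
CIFilter *sepia = [CIFilter filterWithName:@"CISepiaTone"];
[sepia setValue:image forKey:kCIInputImageKey];
[sepia setValue:[NSNumber numberWithFloat:0.8f] forKey:@"inputIntensity"];
// Feed the first filter's output into the second filter.
CIFilter *vignette = [CIFilter filterWithName:@"CIVignette"];
[vignette setValue:[sepia valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
[vignette setValue:[NSNumber numberWithFloat:1.0f] forKey:@"inputIntensity"];
CIImage *chained = [vignette valueForKey:kCIOutputImageKey];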
10. Using Transition Effects:
Transitions are typically used between images or when switching between scenes. These effects render over time and require you to set up a timer; the purpose of this section is to show how. You will learn how to set up and apply the CICopyMachineTransition filter to two still images. CICopyMachineTransition creates a light bar, like the one you see on a copy machine or image scanner, that sweeps across the source image from left to right, revealing the target image.
Steps:
1) Create the CIImage objects to use for the transition
2) Set up and schedule a timer
3) Create a CIContext object
4) Create a CIFilter object
5) Set the filter's input parameters
6) Set the source and the target image to process
7) Calculate the time
8) Apply the filter
9) Draw the result
10) Repeat steps 7–9 until the transition completes
Example: suppose you have a custom view.
1) In awakeFromNib, set the source image and target image and set up the timer:
... (omitted; a sketch of the timer creation follows)
[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSDefaultRunLoopMode];
[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSEventTrackingRunLoopMode];
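The omitted code above is what creates the timer the two addTimer: calls schedule. A minimal sketch, in which the 1/30-second interval and the timerFired: selector are assumptions:
timer = [NSTimer timerWithTimeInterval:1.0/30.0
                                target:self
                              selector:@selector(timerFired:)   // would mark the view as needing display
                              userInfo:nil
                               repeats:YES];
base = [NSDate timeIntervalSinceReferenceDate];                 // the time base used later in drawRect: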
2) Set up the transition filter:
- (void)setupTransition
{
    CIVector *extent;
    float w, h;
    w = thumbnailWidth;
    h = thumbnailHeight;
    extent = [CIVector vectorWithX:0 Y:0 Z:w W:h];
    transition = [CIFilter filterWithName:@"CICopyMachineTransition"];
    // Set defaults on OS X; not necessary on iOS.
    [transition setDefaults];
    [transition setValue:extent forKey:@"inputExtent"];
}
3) In the drawRect: method:
- (void)drawRect:(NSRect)rectangle
{
    float t;
    CGRect cg = CGRectMake(NSMinX(rectangle), NSMinY(rectangle),
                           NSWidth(rectangle), NSHeight(rectangle));
    t = 0.4 * ([NSDate timeIntervalSinceReferenceDate] - base);
    if (context == nil)
    {
        context = [CIContext contextWithCGContext:[[NSGraphicsContext currentContext] graphicsPort]
                                          options:nil];
    }
    if (transition == nil)
        [self setupTransition];
    [context drawImage:[self imageForTransition:t + 0.1] inRect:cg fromRect:cg];
}
... (consult the full sample when you need it; a sketch of imageForTransition: follows)
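The imageForTransition: method called from drawRect: is part of the omitted code. A minimal sketch based on the steps listed earlier (the sourceImage and targetImage instance variables are assumptions):
- (CIImage *)imageForTransition:(float)t
{
    // Alternate source and target so the transition ping-pongs.
    if (fmodf(t, 2.0f) < 1.0f) {
        [transition setValue:sourceImage forKey:@"inputImage"];
        [transition setValue:targetImage forKey:@"inputTargetImage"];
    } else {
        [transition setValue:targetImage forKey:@"inputImage"];
        [transition setValue:sourceImage forKey:@"inputTargetImage"];
    }
    // inputTime eases from 0.0 to 1.0 over each pass of the transition.
    [transition setValue:[NSNumber numberWithFloat:0.5f * (1.0f - cosf(fmodf(t, 1.0f) * (float)M_PI))]
                  forKey:@"inputTime"];
    return [transition valueForKey:kCIOutputImageKey];
}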
11. Applying a Filter to Video:
Core Image and Core Video can work together to achieve many effects. This is primarily an OS X topic, so it is skipped here.
III. Detecting Faces in an Image
Core Image performs face detection, not face recognition. Detection finds a rectangle containing facial features; it does not identify a specific person's face. After Core Image detects a face, it can provide information about face features, such as the positions of the eyes and mouth. It can also track a specified face across the frames of a video.
1. Detecting Faces:
Use the CIDetector class to find faces in a picture:
CIContext *context = [CIContext contextWithOptions:nil];                                            // 1
NSDictionary *opts = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                                 forKey:CIDetectorAccuracy];                        // 2. Specify the detector options; here, the accuracy
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:context options:opts]; // 3
opts = [NSDictionary dictionaryWithObject:[[myImage properties] valueForKey:(NSString *)kCGImagePropertyOrientation]
                                   forKey:CIDetectorImageOrientation];                              // 4. It is important for Core Image to know the image orientation
NSArray *features = [detector featuresInImage:myImage options:opts];                                // 5. An array of CIFaceFeature objects, one per detected face, from which you can get features such as eye and mouth positions
2. Getting Faces and Face Features:
Facial features include:
1) The positions of the left and right eyes
2) The position of the mouth
3) A tracking ID and tracking frame count, which Core Image uses to follow a face in a video
After you use a CIDetector to get an array of CIFaceFeature objects, you can loop through the array to examine each face's bounds and features:
for (CIFaceFeature *f in features)
{
    NSLog(@"%@", NSStringFromCGRect(f.bounds));
    if (f.hasLeftEyePosition)
        printf("Left eye %g %g\n", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        printf("Right eye %g %g\n", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        printf("Mouth %g %g\n", f.mouthPosition.x, f.mouthPosition.y);
}
IV. Auto Enhancing Images
Core Image's automatic enhancement analyzes an image's histogram, face-region content, and metadata properties. It then returns an array of CIFilter objects whose input parameters are already set to values that will improve the analyzed image.
1. Auto Enhancement Filters:
The following filters are the ones Core Image uses to automatically enhance images; they remedy some of the most common issues found in photos:
1) CIRedEyeCorrection: fixes red/amber/white eye caused by the camera's flash
2) CIFaceBalance: adjusts the color of a face for pleasing skin tones
3) CIVibrance: increases the saturation of an image without distorting skin tones
4) CIToneCurve: adjusts the contrast of an image
5) CIHighlightShadowAdjust: adjusts shadow details
2. Using the Auto Enhancement Filters:
There are only two methods: autoAdjustmentFilters and autoAdjustmentFiltersWithOptions:. In most cases you will use the options dictionary, which lets you set:
1) The image orientation, which matters for the CIRedEyeCorrection and CIFaceBalance filters
2) Whether to apply only red eye correction (set kCIImageAutoAdjustEnhance to NO)
3) Whether to apply all filters except red eye correction (set kCIImageAutoAdjustRedEye to NO)
autoAdjustmentFiltersWithOptions: returns an array of filters, which you chain together and apply to the image, as shown in the following example:
NSDictionary *options = [NSDictionary dictionaryWithObject:[[myImage properties] valueForKey:(NSString *)kCGImagePropertyOrientation]
                                                    forKey:CIDetectorImageOrientation];
NSArray *adjustments = [myImage autoAdjustmentFiltersWithOptions:options];
for (CIFilter *filter in adjustments) {
    [filter setValue:myImage forKey:kCIInputImageKey];
    myImage = filter.outputImage;
}
V. Querying the System for Filters
1. Getting a List of Filters and Attributes:
Use the filterNamesInCategory: and filterNamesInCategories: methods to discover which filters are available.
You can pass nil to filterNamesInCategories: to get all filters in all categories.
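A minimal sketch:
NSArray *allFilterNames = [CIFilter filterNamesInCategories:nil];
NSLog(@"%lu filters available", (unsigned long)[allFilterNames count]);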
The category constants for effect types (abbreviated here): kCICategoryDistortionEffect, kCICategoryGeometryAdjustment, kCICategoryCompositeOperation, ...
The category constants for intended use (abbreviated here): kCICategoryStillImage, kCICategoryVideo, kCICategoryInterlaced, kCICategoryNonSquarePixels, kCICategoryHighDynamicRange
The category constant for filter origin: kCICategoryBuiltIn, which denotes the filters provided by Core Image.
After you have obtained an array of filter names, you can retrieve a filter's attributes:
CIFilter *myFilter;
NSDictionary *myFilterAttributes;
myFilter = [CIFilter filterWithName:@"<filter name>"];
myFilterAttributes = [myFilter attributes];
2. Creating a Filter Options Dictionary:
If an input parameter's value type is Boolean you can use a checkbox, a parameter with a range can be presented with a slider, the default value can serve as the initial value, and so on.
A filter's name and attributes provide all the information needed to build a user interface that lets users choose a filter and control its input parameters.
Note: If you are interested in creating a user interface for a Core Image filter, see the IKFilterUIView Class Reference, which provides a view containing parameter controls for a Core Image filter.
VI. Subclassing CIFilter: Recipes for Custom Effects
Example 1, a chroma key filter recipe: remove a color (or a range of colors) from a source image, then composite the source over a background image. A condensed sketch appears after this list of examples.
Example 2: Keep the faces in the source image sharp and fade everything else.
Example 3: Keep one region of the source image sharp and blur everything else.
Example 4: Blur the faces in the source image and leave everything else unchanged.
Example 5: Transition between two images, using a pixellation effect in between.
Example 6: An old-film effect, making the image look like aged, scratchy film.
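A condensed sketch of Example 1, the chroma key recipe. Apple's full recipe converts each color cube entry to HSV and keys on a hue range; for brevity this sketch keys on simple green dominance instead, and foregroundImage/backgroundImage are assumptions:
const unsigned int size = 64;                     // cube dimension
size_t cubeDataSize = size * size * size * sizeof(float) * 4;
float *cubeData = (float *)malloc(cubeDataSize);
float *c = cubeData;
for (unsigned int z = 0; z < size; z++) {         // blue is the outermost loop
    for (unsigned int y = 0; y < size; y++) {     // green
        for (unsigned int x = 0; x < size; x++) { // red is the innermost loop
            float r = x / (float)(size - 1);
            float g = y / (float)(size - 1);
            float b = z / (float)(size - 1);
            // Make strongly green entries transparent (premultiplied alpha).
            float alpha = (g > 0.5f && g > r + 0.2f && g > b + 0.2f) ? 0.0f : 1.0f;
            *c++ = r * alpha; *c++ = g * alpha; *c++ = b * alpha; *c++ = alpha;
        }
    }
}
NSData *cube = [NSData dataWithBytesNoCopy:cubeData length:cubeDataSize freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:[NSNumber numberWithInt:size] forKey:@"inputCubeDimension"];
[colorCube setValue:cube forKey:@"inputCubeData"];
[colorCube setValue:foregroundImage forKey:kCIInputImageKey];
// Composite the keyed foreground over the background.
CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:[colorCube valueForKey:kCIOutputImageKey] forKey:kCIInputImageKey];
[composite setValue:backgroundImage forKey:kCIInputBackgroundImageKey];
CIImage *keyedResult = [composite valueForKey:kCIOutputImageKey];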
VII. Getting the Best Performance
VIII. Using Feedback to Process Images
The CIImageAccumulator class (available only on OS X) is ideally suited for feedback-based processing. A minimal sketch follows.
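A minimal sketch, assuming an initial CIImage and some filter (someFilter) that perturbs the image on each pass:
CIImageAccumulator *accumulator =
    [CIImageAccumulator imageAccumulatorWithExtent:CGRectMake(0, 0, 640, 480)
                                            format:kCIFormatARGB8];
[accumulator setImage:initialImage];
for (int i = 0; i < 10; i++) {
    // Each pass feeds the accumulated result back in as the filter's input.
    [someFilter setValue:[accumulator image] forKey:kCIInputImageKey];
    [accumulator setImage:[someFilter valueForKey:kCIOutputImageKey]];
}
CIImage *feedbackResult = [accumulator image];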
IX. What You Need to Know Before Writing a Custom Filter
X. Creating Custom Filters
XI. Packaging and Loading Image Units