Quartz 2D Image Processing


First of all, thanks to Maple Leaf for summing up such a good article. Article source: http://www.cnblogs.com/smileEvday/archive/2013/05/25/IOSImageEdit.html

This article covers common iOS image processing operations in four parts: rotation, scaling, cropping, and conversion between pixels and UIImage. The main technology used is Quartz 2D. Quartz 2D is an important part of the Core Graphics framework and can perform almost all 2D image rendering and processing tasks. It plays a role similar to GDI in Windows programming, and many of the concepts carry over.

First, image rotation

Image rotation is a common operation in image processing. Depending on the angle, it can be divided into the following two kinds:

  1. Special-angle rotation

Special-angle rotation means rotating the image by 90°, 180°, or 270°. This kind of rotation is usually the most frequent: when browsing photos you occasionally run into orientation problems, and a simple 90° turn to the left or right fixes them. My earlier blog post "iOS: Talk about UIImage" already described how to handle special angles by specifying imageOrientation when creating the image; have a look if you are interested. That method is very fast because no actual drawing is involved, and the image displays correctly on both iOS and Mac systems. However, if you copy the picture to a Windows system, the orientation may still be wrong, for the reasons explained in that earlier article.
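As a quick illustration (this snippet is not from the original article's source; the asset name is hypothetical), such a special-angle rotation can be expressed purely through the orientation flag:

    // A minimal sketch: re-tag the pixel data with UIImageOrientationRight so the
    // image is displayed rotated 90° clockwise, without any redrawing or resampling.
    UIImage *original = [UIImage imageNamed:@"photo"];   // hypothetical asset name
    UIImage *rotated = [UIImage imageWithCGImage:original.CGImage
                                           scale:original.scale
                                     orientation:UIImageOrientationRight];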

  2. Arbitrary-angle rotation

Arbitrary-angle rotation, as the name implies, rotates the image by any angle, perhaps 30° or 35°. Obviously this kind of rotation cannot be done through imageOrientation, so we have to find another way. We know that UIView has a transform property and can be offset, scaled, and rotated by setting it. In Quartz 2D we can likewise set transforms on the context to achieve the corresponding effects; the arbitrary-angle rotation introduced below is built on a series of operations on the context.

At this point you may have a question: why does rotating a UIView only require setting a single rotation transform, while the context needs a "series" of transform operations to achieve the same thing?

The reason is that every transform we apply to a UIView is performed around the view's center point, while the operations we perform on a context are based on the context's coordinate origin. Let's first look at the diagram of a UIView being rotated:

Since the rotation revolves around the center point, we only need one step to go from the original position (shown in black) to the target position (blue); the angle between the black dashed line and the blue dashed line is the rotation angle. Now consider what happens when the rotation is carried out around the origin in the upper-left corner and we turn by the same angle. Please look at the next figure.

  

As shown, when the rotation is carried out around the origin, even though we turn by the same angle, the result is very different. So if you want to rotate a picture by an arbitrary angle in a context, you have to take at least two steps: rotate and translate.

The rotation step itself is easy; the question is how to move the center of the rotated image back to the center of the original, and that calculation is not so intuitive. So we simulate the rotation of UIView and split the work into the following three steps:

  

Let the width and height of the picture be WIDTH and HEIGHT. The three rotation steps, shown in order, are:

A. Translate the context so that the origin moves to the center of the original image; the translation distances in the x and y directions are WIDTH/2 and HEIGHT/2.

B. Apply the rotation to the context.

C. Move the center point of the rotated image back to the center of the original, i.e. translate by -WIDTH/2 and -HEIGHT/2 in the x and y directions respectively.

With these three steps we can easily rotate an image by any angle. You may notice that step A moves by half the image's width and height in one direction, and step C moves by the same amount in the opposite direction. Don't these two movements cancel out? The answer is no: the movement in step A is based on the original coordinate system, while the movement in step C is based on the coordinate system after the rotation. The two coordinate systems are not the same, which is why the picture ends up rotated in place.

The code for image rotation is as follows:

//  UIImage+Rotate_Flip.m
//  SvImageEdit
//
//  Created by maple on 5/14/13.
//  Copyright (c) smileEvday. All rights reserved.
//

#import "UIImage+Rotate_Flip.h"

/*
 * @brief rotate image with radian
 */
- (UIImage*)rotateImageWithRadian:(CGFloat)radian cropMode:(SvCropMode)cropMode
{
    CGSize imgSize = CGSizeMake(self.size.width * self.scale, self.size.height * self.scale);
    CGSize outputSize = imgSize;

    if (cropMode == enSvCropExpand) {
        CGRect rect = CGRectMake(0, 0, imgSize.width, imgSize.height);
        rect = CGRectApplyAffineTransform(rect, CGAffineTransformMakeRotation(radian));
        outputSize = CGSizeMake(CGRectGetWidth(rect), CGRectGetHeight(rect));
    }

    UIGraphicsBeginImageContext(outputSize);
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGContextTranslateCTM(context, outputSize.width / 2, outputSize.height / 2);
    CGContextRotateCTM(context, radian);
    CGContextTranslateCTM(context, -imgSize.width / 2, -imgSize.height / 2);

    [self drawInRect:CGRectMake(0, 0, imgSize.width, imgSize.height)];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

The SvCropMode type is defined as follows:

enum {
    enSvCropClip,      // the image size will equal the original image; part of the image may be clipped
    enSvCropExpand,    // the image size will expand to contain the whole rotated image; the remaining area will be transparent
};
typedef NSInteger SvCropMode;

In clip mode the rotated picture is as large as the original and some areas of the image are cropped out; in expand mode the rotated picture may be larger than the original, all of the picture information is preserved, and the remaining area is fully transparent.
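For example, assuming the category above has been imported, a 30° rotation in expand mode might look like this (the asset name is hypothetical):

    #import "UIImage+Rotate_Flip.h"

    UIImage *photo = [UIImage imageNamed:@"photo"];     // hypothetical asset name
    CGFloat radian = 30.0 * M_PI / 180.0;               // convert degrees to radians
    UIImage *rotated = [photo rotateImageWithRadian:radian cropMode:enSvCropExpand];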

  

Summary: the first part described two methods of image rotation. The first method is fast, but it can only handle special angles. The second method is slower than the first, because it involves actual drawing and resampling to generate the new picture. In practice, if the first method satisfies the requirement, it should be used to rotate the picture.

Second, image scaling

Image scaling, as the name implies, scales the size of the picture. Because the dimensions differ, pixels cannot correspond one to one when the new image is generated, so interpolation is needed. Interpolation is the process of creating new pixels based on the original image and the size of the target image. Common interpolation algorithms include linear interpolation, bilinear interpolation, and cubic convolution interpolation. There is plenty of material about them on the Internet, so have a look if you are interested.

Let's look at the schematic diagram of image scaling:

In the figure, we assume black represents the original size and blue represents the scaled size. If we enlarge the image by a factor of two, each pixel in the original corresponds to four pixels in the zoomed image. How to generate four pixels from a single pixel is exactly the problem the interpolation algorithm solves.

Since we are mainly talking about iOS image processing today, we let Quartz 2D handle the scaling for us; all we need is a single call to CGContextSetInterpolationQuality to set the interpolation quality. We have no way of knowing which interpolation algorithm is used underneath, and we don't need to care. When scaling an image with Quartz 2D, all we have to do is create a canvas of the target size, set the interpolation quality, and then draw the picture into the canvas with UIImage's draw method.

Here's a look at the code:

UIImage+Zoom.h, UIImage+Zoom.m

Within this tool class there are three resize modes (independent of the scaling quality): enSvResizeScale, enSvResizeAspectFit, and enSvResizeAspectFill. A sketch of the aspect-fit case appears after the list below.

A. Stretch to fill. Regardless of the width-to-height ratio of the target size, the original image is stretched so that it fills the entire target.

B. Aspect fit. The original image is scaled as large as possible while maintaining its aspect ratio, and the remaining area is filled with full transparency. This is similar to UIViewContentModeScaleAspectFit in UIImageView's contentMode.

C. Aspect fill. The image is scaled, keeping its proportions, until it fills the target, so some of the image may be cropped. This is similar to UIViewContentModeScaleAspectFill in UIImageView's contentMode.
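Since the original UIImage+Zoom source is only referenced by file name above, here is a minimal sketch of the aspect-fit case only; the method name sketchResizeAspectFitToSize: is hypothetical and the original implementation may differ.

    /*
     * A sketch of aspect-fit scaling with Quartz 2D: create the target canvas,
     * set the interpolation quality, and draw the image centered at the largest
     * size that preserves its aspect ratio. The uncovered area stays transparent.
     */
    - (UIImage*)sketchResizeAspectFitToSize:(CGSize)targetSize
    {
        // Pick the smaller scale factor so the whole image fits inside the target.
        CGFloat ratio = MIN(targetSize.width / self.size.width,
                            targetSize.height / self.size.height);
        CGSize fitSize = CGSizeMake(self.size.width * ratio, self.size.height * ratio);

        UIGraphicsBeginImageContext(targetSize);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Ask Quartz for high-quality interpolation; the exact algorithm is opaque to us.
        CGContextSetInterpolationQuality(context, kCGInterpolationHigh);

        // Center the fitted rect inside the target canvas.
        CGRect drawRect = CGRectMake((targetSize.width - fitSize.width) / 2,
                                     (targetSize.height - fitSize.height) / 2,
                                     fitSize.width, fitSize.height);
        [self drawInRect:drawRect];

        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }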

  

Summary: the second part described how to scale images with Quartz 2D. As we can see, Quartz 2D handles the interpolation involved in scaling for us, which is very convenient.

Third, image cropping

Image cropping removes unnecessary areas of the image, keeping only the information we want. It can be divided into the following two types according to the cropping shape:

  1. Rectangular cropping

Rectangular cropping is the most common cropping operation, and it is relatively simple. Let's look at a diagram of rectangular cropping:

The black box represents the original size, and the blue dashed box represents the area to be cropped. Obviously the cropped image will not be bigger than the original; if the cropped picture is larger than the original, something is usually wrong, unless of course you did it deliberately. Let the upper-left corner of the cropping region be (x, y), let the width and height of the cropping region be cropWidth and cropHeight, and let the width and height of the original image be width and height. To complete the cropping we only need three steps:

A, create a canvas for the target size (cropwidth,cropheight).

B. When drawing with UIImage's drawInRect: method, specify the rect as (-x, -y, width, height).

C. Get the cropped image from the canvas.

The key is the drawing rect of the original image in the second step: because we want the part of the image starting at position (x, y), a simple coordinate shift means we just start drawing from (-x, -y).

Here is the source code for the cropping section:

//  UIImage+SvImageEdit.m
//  SvImageEdit
//
//  Created by Maple on 5/8/13.
//  Copyright (c) 2013 Maple. All rights reserved.
//

#import "UIImage+Crop.h"

@implementation UIImage (SvImageEdit)

/*
 * @brief crop image
 */
- (UIImage*)cropImageWithRect:(CGRect)cropRect
{
    CGRect drawRect = CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                 self.size.width * self.scale, self.size.height * self.scale);

    UIGraphicsBeginImageContext(cropRect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClearRect(context, CGRectMake(0, 0, cropRect.size.width, cropRect.size.height));

    [self drawInRect:drawRect];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

@end
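A quick usage example, assuming the category above is imported (the asset name and crop rect are made up):

    UIImage *photo = [UIImage imageNamed:@"photo"];     // hypothetical asset name
    // Keep the 200x200 region whose upper-left corner is at (50, 50).
    UIImage *cropped = [photo cropImageWithRect:CGRectMake(50, 50, 200, 200)];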

  

  2. Arbitrary-shape cropping

A typical example of arbitrary-shape cropping is the magnetic lasso in Photoshop, which selects the area to extract through a series of key points. Implementing this kind of cropping is slightly more complicated than rectangular cropping and mainly uses two pieces of Quartz 2D knowledge: paths and the clipping area. The diagram for arbitrary-shape cropping is as follows:

The black box represents the original size, the dashed line represents the actual cropping shape, and the blue box represents the bounding box of the clipping path. Completing an arbitrary-shape cutout usually requires the following six steps:

A. Determine cropRect, the size and position of the whole cropping area, from the given set of points; this gives the size of the target canvas and the position of the upper-left corner of the cropping area.

There are usually two ways to do this. The first creates an empty canvas, begins a path, adds all of the points to it, and then gets the bounding box of the cropping area via CGContextGetPathBoundingBox. Alternatively, create a CGMutablePath directly, add all of the points to that path, and get the bounding box of the cropping area via CGPathGetBoundingBox. Of course, you could also compute the bounding box yourself by walking through every point in the set and finding the minimum and maximum coordinates.

B. Create a canvas with a target size.

C. Open a path in the target canvas and add all of the points to it.

This requires translating the path: because the points passed in are relative to the origin of the original image, we need to apply a (-cropRect.origin.x, -cropRect.origin.y) translation to the path.

D. Set the clipping area through the path.

E. When drawing with UIImage's drawInRect: method, specify the rect as (-cropRect.origin.x, -cropRect.origin.y, width, height), where width and height are the dimensions of the original image.

F. Get the target image from the canvas.

  Here is the source code for arbitrary-shape cropping:

UIImage+SvImageEdit.m
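Since that file is only referenced by name here, the following is a minimal sketch of the six steps, assuming the points are given as a C array of CGPoint in the original image's coordinate system; the method name cropImageWithPoints:count: is hypothetical and not from the original source.

    - (UIImage*)cropImageWithPoints:(CGPoint*)points count:(NSInteger)count
    {
        // Step A: build a path from the point set and compute its bounding box.
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathAddLines(path, NULL, points, count);
        CGPathCloseSubpath(path);
        CGRect cropRect = CGPathGetBoundingBox(path);

        // Step B: create a canvas of the target size.
        UIGraphicsBeginImageContext(cropRect.size);
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Step C: add the path, translated so the crop area's origin maps to (0, 0).
        CGAffineTransform translate = CGAffineTransformMakeTranslation(-cropRect.origin.x,
                                                                       -cropRect.origin.y);
        CGPathRef shiftedPath = CGPathCreateCopyByTransformingPath(path, &translate);
        CGContextAddPath(context, shiftedPath);

        // Step D: clip all further drawing to the path.
        CGContextClip(context);

        // Step E: draw the whole image offset by the crop origin, at its original size.
        [self drawInRect:CGRectMake(-cropRect.origin.x, -cropRect.origin.y,
                                    self.size.width * self.scale,
                                    self.size.height * self.scale)];

        // Step F: grab the result from the canvas.
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CGPathRelease(path);
        CGPathRelease(shiftedPath);
        return image;
    }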

Summary: the third part described two kinds of cropping, rectangular cropping and arbitrary-shape cropping; the main Quartz 2D knowledge used is paths and the clipping area.

Fourth, getting the pixels of a UIImage and creating a UIImage from pixels

  UIImage is UIKit's utility class for storing and drawing images; it can open images in common formats such as JPG, PNG, and TIF. This class covers typical day-to-day use on iOS, but sometimes we also need the raw pixels of an image for finer-grained editing, such as grayscale conversion or binarization.

  1. Get pixels from UIImage

  To get the pixels of the image represented by a UIImage we need Quartz 2D's bitmap context. Previously we created contexts with UIKit's convenient UIGraphicsBeginImageContext; that method is easy to use, but its convenience also means many details are out of our control. To get at the pixels we have to use the lower-level CGBitmapContextCreate, which requires us to specify the bit depth (the number of bits each RGB component occupies), the color space (mentioned in an earlier blog post), and the alpha information.

Acquiring the pixels takes the following four steps:

A. Allocate a block of memory the size of the image.

B. Create the canvas with CGBitmapContextCreate.

C. Draw the image into the canvas using the UIImage draw method.

D. Use CGBitmapContextGetData to get the pixel data corresponding to the canvas.

The code is as follows:

// the returned data is in RGBA format; the caller is responsible for freeing it
- (BOOL)getImageData:(void**)data width:(NSInteger*)width height:(NSInteger*)height alphaInfo:(CGImageAlphaInfo*)alphaInfo
{
    int imgWidth = self.size.width * self.scale;
    int imgHeight = self.size.height * self.scale;

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL) {
        NSLog(@"Create colorSpace error!");
        return NO;
    }

    void *imgData = malloc(imgWidth * imgHeight * 4);
    if (imgData == NULL) {
        NSLog(@"Memory error!");
        CGColorSpaceRelease(colorSpace);
        return NO;
    }

    CGContextRef bmpContext = CGBitmapContextCreate(imgData, imgWidth, imgHeight, 8,
                                                    imgWidth * 4, colorSpace,
                                                    kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(bmpContext, CGRectMake(0, 0, imgWidth, imgHeight), self.CGImage);

    *data = CGBitmapContextGetData(bmpContext);
    *width = imgWidth;
    *height = imgHeight;
    *alphaInfo = kCGImageAlphaPremultipliedLast;   // matches the format the context was created with

    CGColorSpaceRelease(colorSpace);
    CGContextRelease(bmpContext);
    return YES;
}

  

  2. Create UIImage from pixels

The previous section covered getting pixels from a UIImage; after editing the pixels, most of the time we need to regenerate a UIImage and display it. The logic of this part mirrors the previous one: create a bitmap context from the pixel data, obtain a CGImage from it via CGBitmapContextCreateImage, and finally create the UIImage. Note that the alpha information you specify must correspond to the actual pixel format, otherwise you will get an incorrect result.

Here is the source code for creating a UIImage from pixels:

// the data should be in RGBA format
+ (UIImage*)createImageWithData:(Byte*)data width:(NSInteger)width height:(NSInteger)height alphaInfo:(CGImageAlphaInfo)alphaInfo
{
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    if (!colorSpaceRef) {
        NSLog(@"Create colorSpace error!");
        return nil;
    }

    CGContextRef bitmapContext = CGBitmapContextCreate(data, width, height, 8, width * 4,
                                                       colorSpaceRef,
                                                       kCGImageAlphaPremultipliedLast);
    if (!bitmapContext) {
        NSLog(@"Create bitmap context error!");
        CGColorSpaceRelease(colorSpaceRef);
        return nil;
    }

    CGImageRef imageRef = CGBitmapContextCreateImage(bitmapContext);
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];

    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpaceRef);
    CGContextRelease(bitmapContext);

    return image;
}
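As a usage illustration (not from the original article), the two helpers above can be combined for a simple grayscale pass; the asset name is hypothetical and the loop is only a sketch.

    void *data = NULL;
    NSInteger width = 0, height = 0;
    CGImageAlphaInfo alphaInfo;

    UIImage *source = [UIImage imageNamed:@"photo"];    // hypothetical asset name
    if ([source getImageData:&data width:&width height:&height alphaInfo:&alphaInfo]) {
        Byte *pixels = (Byte *)data;
        for (NSInteger i = 0; i < width * height; i++) {
            Byte r = pixels[i * 4 + 0];
            Byte g = pixels[i * 4 + 1];
            Byte b = pixels[i * 4 + 2];
            Byte gray = (Byte)(0.299 * r + 0.587 * g + 0.114 * b);   // standard luma weights
            pixels[i * 4 + 0] = gray;
            pixels[i * 4 + 1] = gray;
            pixels[i * 4 + 2] = gray;
        }
        UIImage *grayImage = [UIImage createImageWithData:pixels width:width height:height
                                                alphaInfo:alphaInfo];
        free(data);   // the buffer was malloc'ed inside getImageData, so the caller frees it
    }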

  

Summary: the fourth part discussed the conversion between UIImage and raw pixel data. The most critical function in the whole process is CGBitmapContextCreateImage; if the parameters passed in are wrong, you may get the wrong result.
