iOS uses OpenCV to implement simple portrait cutout

Source: Internet
Author: User
Tags: color gamut


Recently, I needed to implement a portrait cutout feature, and I found two approaches online: Core Image color-gamut (color cube) filtering and OpenCV segmentation. The first keys out a color range and cuts accurately against a solid-color background; the second suits complex backgrounds, but its default result is not accurate. For example:

1. The photo before processing

2. The photo after processing



There are many Core Image implementations and articles online, so I won't go into detail. I'll just paste the code I used; it works as soon as you paste it in and import CubeMap.c.

// Core Image cutout: createCubeMap(value1, value2), values in the range 0 ~ 360,
// keys out the hues from value1 to value2
CubeMap myCube = createCubeMap(self.slider1.value, self.slider2.value);
NSData *myData = [[NSData alloc] initWithBytesNoCopy:myCube.data length:myCube.length freeWhenDone:true];
CIFilter *colorCubeFilter = [CIFilter filterWithName:@"CIColorCube"];
[colorCubeFilter setValue:[NSNumber numberWithFloat:myCube.dimension] forKey:@"inputCubeDimension"];
[colorCubeFilter setValue:myData forKey:@"inputCubeData"];
[colorCubeFilter setValue:[CIImage imageWithCGImage:_preview.image.CGImage] forKey:kCIInputImageKey];

CIImage *outputImage = colorCubeFilter.outputImage;
CIFilter *sourceOverCompositingFilter = [CIFilter filterWithName:@"CISourceOverCompositing"];
[sourceOverCompositingFilter setValue:outputImage forKey:kCIInputImageKey];
[sourceOverCompositingFilter setValue:[CIImage imageWithCGImage:backgroundImage.CGImage] forKey:kCIInputBackgroundImageKey];

outputImage = sourceOverCompositingFilter.outputImage;
CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:outputImage fromRect:outputImage.extent];

 

Next, let's look at how iOS can use OpenCV to perform the cutout.

Download opencv2. For details on how to integrate OpenCV on iOS, please refer to the blog post I wrote earlier.

Import the following header files:

#import <opencv2/opencv.hpp>

#import "UIImage+OpenCV.h"

 

The UIImage+OpenCV category:

 

 

//
// UIImage+OpenCV.h
//

#import <UIKit/UIKit.h>
#import <opencv2/opencv.hpp>

@interface UIImage (UIImage_OpenCV)

+ (UIImage *)imageWithCVMat:(const cv::Mat &)cvMat;
- (id)initWithCVMat:(const cv::Mat &)cvMat;

@property (nonatomic, readonly) cv::Mat CVMat;
@property (nonatomic, readonly) cv::Mat CVGrayscaleMat;

@end

 

 

//
// UIImage+OpenCV.mm
//

#import "UIImage+OpenCV.h"

static void ProviderReleaseDataNOP(void *info, const void *data, size_t size)
{
    // Do not release memory
    return;
}

@implementation UIImage (UIImage_OpenCV)

- (cv::Mat)CVMat
{
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(self.CGImage);
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat(rows, cols, CV_8UC4); // 8 bits per component, 4 channels

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNoneSkipLast |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);

    return cvMat;
}

- (cv::Mat)CVGrayscaleMat
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    CGFloat cols = self.size.width;
    CGFloat rows = self.size.height;

    cv::Mat cvMat = cv::Mat(rows, cols, CV_8UC1); // 8 bits per component, 1 channel

    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,    // Pointer to backing data
                                                    cols,          // Width of bitmap
                                                    rows,          // Height of bitmap
                                                    8,             // Bits per component
                                                    cvMat.step[0], // Bytes per row
                                                    colorSpace,    // Colorspace
                                                    kCGImageAlphaNone |
                                                    kCGBitmapByteOrderDefault); // Bitmap info flags

    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), self.CGImage);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);

    return cvMat;
}

+ (UIImage *)imageWithCVMat:(const cv::Mat &)cvMat
{
    return [[[UIImage alloc] initWithCVMat:cvMat] autorelease];
}

- (id)initWithCVMat:(const cv::Mat &)cvMat
{
    NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];

    CGColorSpaceRef colorSpace;

    if (cvMat.elemSize() == 1)
    {
        colorSpace = CGColorSpaceCreateDeviceGray();
    }
    else
    {
        colorSpace = CGColorSpaceCreateDeviceRGB();
    }

    CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);

    CGImageRef imageRef = CGImageCreate(cvMat.cols,           // Width
                                        cvMat.rows,           // Height
                                        8,                    // Bits per component
                                        8 * cvMat.elemSize(), // Bits per pixel
                                        cvMat.step[0],        // Bytes per row
                                        colorSpace,           // Colorspace
                                        kCGImageAlphaNone | kCGBitmapByteOrderDefault, // Bitmap info flags
                                        provider,             // CGDataProviderRef
                                        NULL,                 // Decode
                                        false,                // Should interpolate
                                        kCGRenderingIntentDefault); // Intent

    self = [self initWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    return self;
}

@end



Well, the above is all preparation. The actual cutout code is quite simple.

 

 

cv::Mat grayFrame, _lastFrame, mask, bgModel, fgModel;
_lastFrame = [self.preview.image CVMat];
cv::cvtColor(_lastFrame, grayFrame, cv::COLOR_RGBA2BGR); // convert to three-channel BGR

cv::Rect rectangle(1, 1, grayFrame.cols - 2, grayFrame.rows - 2); // detection range
// Segment the image
cv::grabCut(grayFrame, mask, rectangle, bgModel, fgModel, 3, cv::GC_INIT_WITH_RECT); // OpenCV's powerful cutout (segmentation) function

int nrow = grayFrame.rows;
int ncol = grayFrame.cols;
for (int j = 0; j < nrow; j++) {
    for (int i = 0; i < ncol; i++) {
        uchar val = mask.at<uchar>(j, i);
        if (val == cv::GC_PR_BGD) {             // probable background: paint it black
            grayFrame.at<cv::Vec3b>(j, i)[0] = 0;
            grayFrame.at<cv::Vec3b>(j, i)[1] = 0;
            grayFrame.at<cv::Vec3b>(j, i)[2] = 0;
        }
    }
}
cv::cvtColor(grayFrame, grayFrame, cv::COLOR_BGR2RGB); // convert back to a color image
_preview.image = [[UIImage alloc] initWithCVMat:grayFrame]; // display the result

 

 

The code above is ready to test. The key piece is OpenCV's grabCut image segmentation function.

 

The grabCut function's API is described as follows:

void cv::grabCut(InputArray _img, InputOutputArray _mask, Rect rect,
                 InputOutputArray _bgdModel, InputOutputArray _fgdModel,
                 int iterCount, int mode)

/*
Parameter description:

img -- the source image to segment. It must be an 8-bit, 3-channel (CV_8UC3) image and is not modified during processing.

mask -- the mask image. When initializing with a mask, it carries the initial mask information; it can also carry the foreground/background marks made through user interaction before being passed to grabCut. After processing, the mask holds the result. Each mask pixel can take only one of four values:
GC_BGD (= 0), background;
GC_FGD (= 1), foreground;
GC_PR_BGD (= 2), probable background;
GC_PR_FGD (= 3), probable foreground.
If no pixels are manually marked GC_BGD or GC_FGD, the result contains only GC_PR_BGD and GC_PR_FGD.

rect -- limits the region of the image to segment; only the part inside the rectangle is processed.

bgdModel -- the background model. If null, the function creates one automatically. bgdModel must be a single-channel floating-point (CV_32FC1) image with 1 row and 13x5 columns.

fgdModel -- the foreground model, with the same requirements as bgdModel.

iterCount -- the number of iterations; must be greater than 0.

mode -- indicates which operation grabCut performs. Possible values:
GC_INIT_WITH_RECT (= 0), initialize GrabCut with the rectangle;
GC_INIT_WITH_MASK (= 1), initialize GrabCut with the mask image;
GC_EVAL (= 2), just run additional iterations with the existing models.
*/

 

