Image Processing in iOS (II) -- convolution

Source: Internet
Author: User

For background on convolution in image processing, two brief introductory articles are available: article one and article two.

Among them, a possible convolution operation code is as follows:

[Cpp]
- (UIImage *)applyConvolution:(NSArray *)kernel
{
    CGImageRef inImage = self.CGImage;
    CFDataRef m_DataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    CFDataRef m_OutDataRef = CGDataProviderCopyData(CGImageGetDataProvider(inImage));
    UInt8 *m_PixelBuf = (UInt8 *)CFDataGetBytePtr(m_DataRef);
    UInt8 *m_OutPixelBuf = (UInt8 *)CFDataGetBytePtr(m_OutDataRef);

    int h = CGImageGetHeight(inImage);
    int w = CGImageGetWidth(inImage);

    // Half-sizes of the kernel; a 5x5 kernel gives kh = kw = 2.
    int kh = [kernel count] / 2;
    int kw = [[kernel objectAtIndex:0] count] / 2;
    int i = 0, j = 0, n = 0, m = 0;

    for (i = 0; i < h; i++) {
        for (j = 0; j < w; j++) {
            int outIndex = (i * w * 4) + (j * 4);
            double r = 0, g = 0, b = 0;
            for (n = -kh; n <= kh; n++) {
                for (m = -kw; m <= kw; m++) {
                    // Neighbors outside the image are simply skipped.
                    if (i + n >= 0 && i + n < h) {
                        if (j + m >= 0 && j + m < w) {
                            double f = [[[kernel objectAtIndex:(n + kh)] objectAtIndex:(m + kw)] doubleValue];
                            if (f == 0) { continue; }
                            int inIndex = ((i + n) * w * 4) + ((j + m) * 4);
                            r += m_PixelBuf[inIndex] * f;
                            g += m_PixelBuf[inIndex + 1] * f;
                            b += m_PixelBuf[inIndex + 2] * f;
                        }
                    }
                }
            }
            // SAFECOLOR clamps the value to [0, 255].
            m_OutPixelBuf[outIndex]     = SAFECOLOR((int)r);
            m_OutPixelBuf[outIndex + 1] = SAFECOLOR((int)g);
            m_OutPixelBuf[outIndex + 2] = SAFECOLOR((int)b);
            m_OutPixelBuf[outIndex + 3] = 255;
        }
    }

    CGContextRef ctx = CGBitmapContextCreate(m_OutPixelBuf,
                                             CGImageGetWidth(inImage),
                                             CGImageGetHeight(inImage),
                                             CGImageGetBitsPerComponent(inImage),
                                             CGImageGetBytesPerRow(inImage),
                                             CGImageGetColorSpace(inImage),
                                             CGImageGetBitmapInfo(inImage));

    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CFRelease(m_DataRef);
    CFRelease(m_OutDataRef);

    return finalImage;
}

The kernel parameter of this method is the convolution kernel. Below are the kernels for several filters:
[Cpp]
#pragma mark -
#pragma mark Basic Convolutions

/* Reference:
 * http://docs.gimp.org/en/plug-in-convmatrix.html
 */
 
- (UIImage *)sharpen
{
//    double dKernel[5][5] = {
//        { 0,  0.0, -1.0,  0.0, 0 },
//        { 0, -1.0,  5.0, -1.0, 0 },
//        { 0,  0.0, -1.0,  0.0, 0 }
//    };

    // Only three rows are given; the remaining two rows of the
    // 5x5 array are zero-initialized by C.
    double dKernel[5][5] = {
        { 0,  0.0, -0.2,  0.0, 0 },
        { 0, -0.2,  1.8, -0.2, 0 },
        { 0,  0.0, -0.2,  0.0, 0 }
    };

    NSMutableArray *kernel = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
    for (int i = 0; i < 5; i++) {
        NSMutableArray *row = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
        for (int j = 0; j < 5; j++) {
            [row addObject:[NSNumber numberWithDouble:dKernel[i][j]]];
        }
        [kernel addObject:row];
    }
    return [self applyConvolution:kernel];
}
 
- (UIImage *)edgeEnhance
{
    double dKernel[5][5] = {
        { 0,  0.0, 0.0, 0.0, 0 },
        { 0, -1.0, 1.0, 0.0, 0 },
        { 0,  0.0, 0.0, 0.0, 0 }
    };

    NSMutableArray *kernel = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
    for (int i = 0; i < 5; i++) {
        NSMutableArray *row = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
        for (int j = 0; j < 5; j++) {
            [row addObject:[NSNumber numberWithDouble:dKernel[i][j]]];
        }
        [kernel addObject:row];
    }

    return [self applyConvolution:kernel];
}
 
- (UIImage *)edgeDetect
{
    double dKernel[5][5] = {
        { 0, 0.0,  1.0, 0.0, 0 },
        { 0, 1.0, -4.0, 1.0, 0 },
        { 0, 0.0,  1.0, 0.0, 0 }
    };

    NSMutableArray *kernel = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
    for (int i = 0; i < 5; i++) {
        NSMutableArray *row = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
        for (int j = 0; j < 5; j++) {
            [row addObject:[NSNumber numberWithDouble:dKernel[i][j]]];
        }
        [kernel addObject:row];
    }

    return [self applyConvolution:kernel];
}
 
- (UIImage *)emboss
{
    double dKernel[5][5] = {
        { 0, -2.0, -1.0, 0.0, 0 },
        { 0, -1.0,  1.0, 1.0, 0 },
        { 0,  0.0,  1.0, 2.0, 0 }
    };

    NSMutableArray *kernel = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
    for (int i = 0; i < 5; i++) {
        NSMutableArray *row = [[[NSMutableArray alloc] initWithCapacity:5] autorelease];
        for (int j = 0; j < 5; j++) {
            [row addObject:[NSNumber numberWithDouble:dKernel[i][j]]];
        }
        [kernel addObject:row];
    }

    return [self applyConvolution:kernel];
}

On this basis, I googled the simple Photoshop steps for giving a photo a black-and-white sketch effect:

1. Desaturate
2. Adjust contrast
3. Gaussian blur
4. Emboss
5. Edge detection
6. Adjust contrast
7. Adjust brightness
8. Invert

I implemented the code as follows:

[Cpp]
return [[[[[[[[originImage desaturate]
        changeContrastByFactor:1.5]
        gaussianBlur:1.3]
        emboss]
        edgeDetect]
        changeContrastByFactor:1.5]
        changeBrightnessByFactor:1.5]
        invert];

Unfortunately, the effect is a little rough. The example photo is still Andy from the previous article:

 
