Obtaining pixel data from a UIImage (translated from the Chinese original)
UIImage is a familiar data structure in iOS and is very handy for storing images. But libraries such as OpenCV store images in their own data structures, which leaves us with a problem: how do we convert a UIImage into a data structure that other libraries can recognize?
Although different image-processing libraries use different data structures for pictures, there is one representation they all understand: the raw format, also known as the native format. In raw format, an image is an array of unsigned bytes holding each pixel's RGBA or grayscale value. (If you are dealing with video you may need YUV instead; otherwise RGBA or grayscale values stored in a flat array are what you want.) So how do you conveniently get this pixel information out of a UIImage? You're in luck: I have implemented a category to solve exactly this problem, as follows:
UIImage+Pixels.h
UIImage+Pixels.m
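The category's source is not reproduced here, so below is a minimal sketch of how such a category is typically implemented with Core Graphics. The method name -rgbaPixels is my assumption, not necessarily the original author's; the idea is to render the image into a malloc'd buffer via CGBitmapContextCreate:

```objc
// UIImage+Pixels.h (sketch -- method name is an assumption, not the original)
#import <UIKit/UIKit.h>

@interface UIImage (Pixels)
// Returns a width * height * 4 byte RGBA buffer. Caller must free() it.
- (unsigned char *)rgbaPixels;
@end

// UIImage+Pixels.m
@implementation UIImage (Pixels)
- (unsigned char *)rgbaPixels
{
    CGImageRef cgImage = self.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // One RGBA pixel = 4 bytes; the caller is responsible for free()ing this.
    unsigned char *buffer = (unsigned char *)malloc(width * height * 4);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // 8 bits per component, 4 bytes per pixel, bytes in R,G,B,A order.
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8,
        width * 4, colorSpace,
        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);

    // Drawing the image fills the buffer with raw pixel bytes.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
    return buffer;
}
@end
```

A grayscale variant would use CGColorSpaceCreateDeviceGray(), one byte per pixel, and kCGImageAlphaNone. Note that with kCGImageAlphaPremultipliedLast the color components come back premultiplied by alpha, which is usually acceptable for fully opaque images.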
You can use the following code to get the grayscale value for each pixel:
for (int y = 0; y < self.size.height; y++)
{
    for (int x = 0; x < self.size.width; x++)
    {
        NSLog(@"0x%X", pixelData[(y * ((int)self.size.width)) + x]);
    }
}
Since 0x00 represents no brightness and 0xFF represents maximum brightness, an all-black picture returns a series of 0x00 values, and an all-white picture a series of 0xFF values.
You can get the RGBA value for each pixel using the following code:
for (int y = 0; y < self.size.height; y++)
{
    for (int x = 0; x < self.size.width; x++)
    {
        int index = (y * ((int)self.size.width) * 4) + (x * 4);
        unsigned char r = pixelData[index];
        unsigned char g = pixelData[index + 1];
        unsigned char b = pixelData[index + 2];
        unsigned char a = pixelData[index + 3];
        NSLog(@"r = 0x%X g = 0x%X b = 0x%X a = 0x%X", r, g, b, a);
    }
}
For an all-red picture you will get: r = 0xFF g = 0x0 b = 0x0 a = 0xFF
For an all-blue picture you will get: r = 0x0 g = 0x0 b = 0xFF a = 0xFF
Each pixel's RGBA value is made up of four components: red, green, blue, and alpha. The alpha value may puzzle you: it marks the pixel's opacity, where 0x00 is fully transparent and 0xFF is fully opaque. In my experience, for most image processing you are better off ignoring the alpha value.
Now let's discuss how to read the pixel at a particular coordinate out of a UIImage.
For grayscale values, because each pixel is represented by a single byte, we can think of the image's pixel layout as a table in which every pixel has its own (x, y) index: the first pixel is at (0, 0), the next pixel in the same row at (1, 0), and so on. Use the following code to get the grayscale value at a given coordinate:
int x = 10;
int y = 2;
unsigned char pixel = pixels[(y * ((int)whiteImage.size.width)) + x];
For RGBA values the principle is the same as for grayscale above, but note that an RGBA pixel takes four bytes, so every index must be scaled by 4.
int x = 10;
int y = 2;
// This indexes the red component; add 1, 2, or 3 for green, blue, and alpha.
unsigned char red = pixels[(y * ((int)whiteImage.size.width) * 4) + (x * 4)];
If you have read this far, you can now get pixel information out of a UIImage. Congratulations! But there is one last, very important point: after these methods hand you the data, you must free() the buffer yourself, otherwise you will leak memory. Some people may also wonder why I made the return type (unsigned char *) instead of an NSArray. The answer is simple: operating directly on an (unsigned char *) is much faster than going through an NSArray, and by default these methods are meant for readers who are very demanding about speed.