This article introduces the iPhone double-buffering mechanism. Most readers know that "screen double buffering" means creating a graphics device context cache in memory, performing every drawing operation on that cached graphics context, and only then updating the finished image to the screen device.
The iPhone platform provides the following API for creating such an off-screen bitmap context:
- CGContextRef CGBitmapContextCreate(
- void *data,
- size_t width,
- size_t height,
- size_t bitsPerComponent,
- size_t bytesPerRow,
- CGColorSpaceRef colorspace,
- CGBitmapInfo bitmapInfo
- );
The meanings of each API parameter are as follows:
The data parameter points to the memory area into which drawing operations are rendered. This area must be at least bytesPerRow * height bytes. If you have no special requirements for the rendering memory, you can pass NULL for data and the system will allocate the memory for you.
The width parameter indicates the width, in pixels, of the rendered memory area.
The height parameter indicates the height, in pixels, of the rendered memory area.
The bitsPerComponent parameter specifies the number of bits used for each color component of a pixel in the rendered memory area. For example, with 32-bit pixels in the RGBA color format, each of the four components of a pixel uses 32 / 4 = 8 bits.
The bytesPerRow parameter indicates the number of bytes used by each row in the rendered memory area.
The colorspace parameter specifies the color space to be used by the bitmap context of the rendered memory area.
The bitmapInfo parameter specifies whether the bitmap of the rendered memory area contains an alpha channel and where the alpha component is located within each pixel. It also specifies whether the components are floating-point or integer values.
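As a worked example (the 320-pixel width here is an assumed value, not taken from the code below), the size parameters for a 32-bit RGBA bitmap relate to each other as follows:
- // 32 bits per pixel in RGBA means 4 components of 8 bits each
- size_t width            = 320;        // assumed example width in pixels
- size_t bitsPerComponent = 32 / 4;     // 8 bits per component
- size_t bytesPerRow      = width * 4;  // 4 bytes per pixel, so 1280 bytes per row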
From this definition we can see that calling the function creates a bitmap rendering environment, that is, a bitmap graphics context backed by the memory area defined by the reader. When the reader draws in this context, the system renders the drawing operations as bitmap data in the defined rendering memory area. The pixel format of the context is defined by three parameters: the bits occupied by each component, the color space, and the alpha information; the alpha value specifies the opacity of each pixel.
Based on the points described above, the author allocates the rendering memory area as follows:
- imageData = malloc((iFrame.size.width)*(iFrame.size.height)*4);
Each pixel on the screen is represented by 32 bits (4 bytes) in the RGBA color format, so the buffer needs width * height * 4 bytes, the bitsPerComponent parameter is 32 / 4 = 8, and each row occupies 4 * width bytes. The bitmap context is created as follows:
- iDevice = CGBitmapContextCreate(imageData, iFrame.size.width,
- iFrame.size.height, 8, 4*(iFrame.size.width), iColorSpace, kCGImageAlphaPremultipliedLast);
The iColorSpace value is obtained as follows:
- iColorSpace = CGColorSpaceCreateDeviceRGB();
CGColorSpaceCreateDeviceRGB() returns a device-dependent RGB color space. The color space must be released later by calling CGColorSpaceRelease.
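When the off-screen buffer is no longer needed (for example in dealloc), the other resources created above should also be released; the following is a minimal sketch, assuming the imageData, iDevice, and iColorSpace variables from the examples in this article:
- // Release the off-screen resources created above
- CGContextRelease(iDevice);          // release the bitmap context
- CGColorSpaceRelease(iColorSpace);   // release the RGB color space
- free(imageData);                    // free the memory passed to CGBitmapContextCreate
- imageData = NULL;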
After the bitmap context iDevice has been created successfully over the rendering memory area, the reader can draw into it; as mentioned above, all drawing operations are rendered as bitmap data in the rendering memory area. The drawing operations are as follows:
- // Draw an image
- CGContextDrawImage(iDevice, CGRectMake(0, 0, iFrame.size.width, iFrame.size.height), aImage);
- // Draw a translucent rectangle
- CGRect rt;
- rt.origin.x = 100;
- rt.origin.y = 20;
- rt.size.width = 200;
- rt.size.height = 200;
- CGContextSaveGState(iDevice);
- CGContextSetRGBFillColor(iDevice, 1.0, 1.0, 1.0, 0.5);
- CGContextFillRect(iDevice, rt);
- CGContextRestoreGState(iDevice);
- // Draw a straight line
- CGContextSetRGBStrokeColor(iDevice, 1.0, 0.0, 0.0, 1.0);
- CGPoint pt0, pt1;
- CGPoint points[2];
- pt0.x = 10;
- pt0.y = 250;
- pt1.x = 310;
- pt1.y = 250;
- points[0] = pt0;
- points[1] = pt1;
- CGContextAddLines(iDevice, points, 2);
- CGContextStrokePath(iDevice);
As you can see, various drawing operations such as images, rectangles, and straight lines can all be performed in the bitmap context of the rendering memory area, and they are rendered there as bitmap data. The rendered bitmap can then be drawn to the screen as follows:
- -(void)drawRect:(CGRect)rect {
- // Draw the source image into the off-screen bitmap context wrapped by iOffScreenBitmap
- UIImage* iImage = [UIImage imageNamed:@"merry.png"];
- [iOffScreenBitmap DrawImage:iImage.CGImage];
- // Take a snapshot of the off-screen bitmap as a UIImage
- UIImage* iImage_1 = [UIImage imageWithCGImage:[iOffScreenBitmap Gc]];
- // Update the snapshot to the screen in one drawing operation
- [iImage_1 drawInRect:CGRectMake(0, 0, 120, 160)];
- }
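If you prefer to take the snapshot with Core Graphics directly rather than through the wrapper object, a minimal sketch looks like this (assuming iDevice is the off-screen bitmap context created earlier; the Gc call above presumably does the equivalent internally):
- // Snapshot the off-screen bitmap context and draw it to the screen
- CGImageRef snapshot = CGBitmapContextCreateImage(iDevice);
- UIImage* snapshotImage = [UIImage imageWithCGImage:snapshot];
- CGImageRelease(snapshot);   // CGBitmapContextCreateImage returns an owned reference
- [snapshotImage drawInRect:CGRectMake(0, 0, 120, 160)];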
In the code above, the DrawImage:(CGImageRef) method of iOffScreenBitmap is used to draw the image merry.png into the screen double buffer, and then the rectangle and the straight line are drawn. Next, CGBitmapContextCreateImage(CGContextRef) is used to obtain a snapshot of the bitmap context, iImage_1, and finally this snapshot is updated to the screen, implementing screen double buffering. The results are as follows:
Summary: that concludes this introduction to the iPhone double-buffering mechanism. I hope this article is helpful to you!