I recently built a UITableView example and found that scrolling performance was mostly good. But when scrolling back and forth, an image displayed for the first time appears less smoothly than one displayed again: there is a slight pause before it shows up.
So I guessed the displayed images must be cached somewhere, and checking the documentation confirmed it.
Later I found a tip in other articles: when an image is first displayed, decompression and resampling consume a lot of CPU time. If you draw the image into a bitmap context in advance and cache the result, that cost can be avoided.
So I wrote a small test program to verify this:
```objc
// ImageView.h

#import <UIKit/UIKit.h>

@interface ImageView : UIView {
    UIImage *image;
}

@property (retain, nonatomic) UIImage *image;

@end
```
```objc
// ImageView.m

#include <mach/mach_time.h>
#import "ImageView.h"

@implementation ImageView

#define LABEL_TAG 1

static const CGRect imageRect = {{0, 0}, {100, 100}};
static const CGPoint imagePoint = {0, 0};

@synthesize image;

- (void)awakeFromNib {
    if (!self.image) {
        self.image = [UIImage imageNamed:@"random.jpg"];
    }
}

- (void)drawRect:(CGRect)rect {
    if (CGRectEqualToRect(rect, imageRect)) {
        uint64_t start = mach_absolute_time();
        [image drawAtPoint:imagePoint];
        uint64_t drawTime = mach_absolute_time() - start;

        // drawTime is a uint64_t, so the format specifier is %llu, not %ld.
        NSString *text = [[NSString alloc] initWithFormat:@"%llu", drawTime];
        UILabel *label = (UILabel *)[self viewWithTag:LABEL_TAG];
        label.text = text;
        [text release];
    }
}

- (void)dealloc {
    // Release ivars first; [super dealloc] must come last.
    [image release];
    [super dealloc];
}

@end
```
I won't list the Controller code: when a button is tapped it updates the view (by calling [self.view setNeedsDisplayInRect:imageRect]), the image is drawn, and the elapsed time is shown in the label.
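For reference, that button action might look like this (a minimal sketch; the action and outlet names are my own assumptions, only the -setNeedsDisplayInRect: call comes from the description above):

```objc
// Hypothetical controller action; invalidates just the image rect so that
// -drawRect: runs again and the draw is re-timed.
- (IBAction)redrawImage:(id)sender {
    [self.view setNeedsDisplayInRect:CGRectMake(0, 0, 100, 100)];
}
```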
It is worth mentioning that on the simulator, clock() gives microsecond precision, but on iOS devices its resolution is only about 10 milliseconds. So I turned to mach_absolute_time(), which has nanosecond-level resolution on both Mac and iOS devices.
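One caveat: mach_absolute_time() counts ticks in an abstract timebase rather than nanoseconds. A small helper (a sketch using the standard mach_timebase_info() call from <mach/mach_time.h>; the function name is my own) can convert a measured difference into real time units:

```objc
#include <mach/mach_time.h>

// Convert mach_absolute_time() ticks into nanoseconds using the timebase.
static uint64_t NanosecondsFromMachTicks(uint64_t ticks) {
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase); // fills in a numer/denom ratio
    }
    return ticks * timebase.numer / timebase.denom;
}
```

On machines where the timebase ratio is not 1/1, raw tick differences (like the numbers shown in the label above) are not nanoseconds until converted this way.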
The test uses a JPEG image about 300 pixels across, named with the @2x suffix. On an iPhone 4 the first display takes noticeably longer, while the second takes about 65 microseconds.
Now comes the moment to witness the miracle; add this code to the program:
```objc
static const CGSize imageSize = {100, 100};

- (void)awakeFromNib {
    if (!self.image) {
        self.image = [UIImage imageNamed:@"random.jpg"];
        // Pre-render into a bitmap context so decompression happens here, once.
        if (NULL != UIGraphicsBeginImageContextWithOptions)
            UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
        else
            UIGraphicsBeginImageContext(imageSize);
        [image drawInRect:imageRect];
        self.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
}
```
Here, UIGraphicsBeginImageContextWithOptions has to be checked against NULL because it was only added in iOS 4.0; on earlier systems the weak-linked symbol is NULL, and we fall back to UIGraphicsBeginImageContext.
Since the JPEG image has no transparency, the second (opaque) parameter is set to YES.
The third parameter is the scale factor: 2.0 on iPhone 4 and 1.0 on other devices. Although [UIScreen mainScreen].scale could be passed here, setting it to 0 lets the system fill in the correct scale automatically.
It is worth mentioning that the image itself also carries a scale factor. Ordinary images have scale 1.0 (and apart from +[UIImage imageNamed:], most APIs can only produce such images; the scale cannot be changed afterwards), while Retina (@2x) images have scale 2.0. How image pixels map onto screen points and pixels is computed from both scale factors together: a 1.0-scale image on a Retina display is stretched so that one image pixel covers several screen pixels, while a 2.0-scale image on a Retina display is drawn pixel for pixel.
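As a worked example of that arithmetic (a sketch; the 200 × 200 figure is an assumed size for illustration, not necessarily the test image's):

```objc
UIImage *img = [UIImage imageNamed:@"random.jpg"];
// UIImage reports size in points; multiply by scale to get pixels.
// A 200 x 200-pixel file loaded at scale 1.0: size = {200, 200} points.
// The same pixels loaded as an @2x file (scale 2.0): size = {100, 100} points.
CGFloat pixelWidth = img.size.width * img.scale;
```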
Next, drawInRect: draws the image into the current image context, which forces the decompression and resampling to happen right there. The new image is then obtained from the context, and its scale factor also matches the device correctly.
Tapping the button again, the first draw now takes only about 12 microseconds, and subsequent draws are stable at around 15 microseconds.
Can it be faster? Let's try Core Graphics.
First, define a global CGImageRef variable:
```objc
static CGImageRef imageRef;
```
Set its value in awakeFromNib:
```objc
imageRef = self.image.CGImage;
```
Finally, draw in drawRect:
```objc
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, imageRect, imageRef);
```
After running it, the time actually went up to about 33 microseconds, and the image came out flipped upside down.
This is because the Y axes of the UIKit and Core Graphics coordinate systems point in opposite directions, so we add two lines to flip the context:
```objc
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the Y axis: move the origin to the bottom of the image rect, then invert.
CGContextTranslateCTM(context, 0, 100);
CGContextScaleCTM(context, 1, -1);
CGContextDrawImage(context, imageRect, imageRef);
```
With that, the time comes down to about 14 microseconds, so the Core Graphics route doesn't really pay off; using -drawAtPoint: or -drawInRect: directly is good enough. Of course, the proper place for this pre-rendering would be viewDidLoad or loadView, but since I was too lazy to list the Controller code, I put it in awakeFromNib.
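For completeness, the pre-rendering moved into the controller might look roughly like this (a sketch; the cast of self.view to ImageView is an assumption about the view hierarchy, and only the drawing calls come from the example above):

```objc
- (void)viewDidLoad {
    [super viewDidLoad];
    ImageView *imageView = (ImageView *)self.view; // assumed view hierarchy
    UIImage *original = [UIImage imageNamed:@"random.jpg"];
    // Pre-render into a bitmap context so decompression happens here, once.
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(100, 100), YES, 0);
    else
        UIGraphicsBeginImageContext(CGSizeMake(100, 100));
    [original drawInRect:CGRectMake(0, 0, 100, 100)];
    imageView.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
```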