iOS development: fixing the automatic 90-degree rotation of images obtained from the camera
While writing a demo today, I found that if I operate directly on an image obtained from the camera, for example cropping or scaling it, the result comes out rotated 90 degrees from the original.
At first I was at a loss; after searching on Baidu, I found a solution.
P.S.: During the search I came across the following claim:
// Get the original photo from iOS Photos. If the image is larger than 2 MB, it is automatically rotated 90 degrees; otherwise UIImage *originalImg = [dict objectForKey:UIImagePickerControllerOriginalImage]; returns it un-rotated.
I am not sure whether this is correct; I am only noting it here.
The following solution worked in my tests. Original article: http://www.cnblogs.com/jiangyazhou/archive/2012/03/22/2412343.html
An image taken by the camera contains EXIF information, and the imageOrientation property of UIImage reflects the orientation stored in that EXIF data.
If we ignore this orientation information and perform pixel processing or drawing operations directly on the photo's underlying bitmap, the result will be displayed flipped or rotated 90 degrees. This is because such operations discard the imageOrientation metadata: the new image's imageOrientation is reset to UIImageOrientationUp (0), so the pixel content no longer matches the orientation it claims to have.
Therefore, before processing the image, first rotate its pixels into the correct direction, so that the image returned has an imageOrientation of UIImageOrientationUp.
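To see the mismatch concretely, here is a minimal sketch of the kind of operation that triggers it: cropping via the raw CGImage works on the stored pixels and ignores the UIImage orientation metadata (the variable names `photo` and `cropRect` are just placeholders for this example):

```objectivec
// Cropping the raw CGImage operates on the stored (possibly rotated) pixels
// and ignores photo.imageOrientation entirely.
CGImageRef croppedRef = CGImageCreateWithImageInRect(photo.CGImage, cropRect);
// The new UIImage defaults to UIImageOrientationUp, so if the original
// pixels were stored rotated, the result appears rotated 90 degrees.
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);
```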
The following method, written in a UIImage category, achieves this.
- (UIImage *)fixOrientation:(UIImage *)aImage {
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp) return aImage;

    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: rotate if Left/Right/Down, then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;

    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }

    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }

    // Now we draw the underlying CGImage into a new context, applying the
    // transform calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // Width and height are swapped for left/right-rotated images.
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }

    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}
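As a hedged sketch of how the category method above might be used: call it on the image returned by the picker, before any cropping or scaling. The delegate method and info-dictionary key below are standard UIKit; only `fixOrientation:` comes from the category above.

```objectivec
// Assumes the fixOrientation: method above is declared in a UIImage
// category imported by this controller.
- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *original = info[UIImagePickerControllerOriginalImage];
    // Normalize the orientation first, so later pixel operations
    // (cropping, scaling) see upright pixel data.
    UIImage *upright = [original fixOrientation:original];
    // ... crop, scale, or otherwise process `upright` here ...
    [picker dismissViewControllerAnimated:YES completion:nil];
}
```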