Many algorithms were implemented long ago, but for various reasons I had never implemented them myself. Image rotation with bilinear interpolation is one of them.
Previously I wrote about image rotation with nearest-neighbor interpolation (link); the two posts read well together.
```matlab
clear all; close all; clc;

jiaodu = 45;                    % angle to rotate by; rotation is clockwise
img = imread('lena.jpg');
imshow(img);
[h, w] = size(img);             % h: height, w: width of the source image

theta = jiaodu / 180 * pi;
rot = [cos(theta) -sin(theta) 0; sin(theta) cos(theta) 0; 0 0 1];

pix1 = [1 1 1] * rot;           % top-left corner of the rotated image
pix2 = [1 w 1] * rot;           % top-right corner
pix3 = [h 1 1] * rot;           % bottom-left corner
pix4 = [h w 1] * rot;           % bottom-right corner

height = round(max([abs(pix1(1)-pix4(1))+0.5 abs(pix2(1)-pix3(1))+0.5]));  % height of the rotated image
width  = round(max([abs(pix1(2)-pix4(2))+0.5 abs(pix2(2)-pix3(2))+0.5]));  % width of the rotated image
imgn = zeros(height, width);

delta_y = abs(min([pix1(1) pix2(1) pix3(1) pix4(1)]));  % offset by which the image spills past the negative y axis
delta_x = abs(min([pix1(2) pix2(2) pix3(2) pix4(2)]));  % offset by which the image spills past the negative x axis

for i = 1-delta_y : height-delta_y
    for j = 1-delta_x : width-delta_x
        pix = [i j 1] / rot;    % backward-map each output pixel to source coordinates;
                                % otherwise some output pixels would never be filled
        float_Y = pix(1) - floor(pix(1));
        float_X = pix(2) - floor(pix(2));

        if pix(1) >= 1 && pix(2) >= 1 && pix(1) <= h && pix(2) <= w
            pix_up_left    = [floor(pix(1)) floor(pix(2))];  % the four neighboring points
            pix_up_right   = [floor(pix(1)) ceil(pix(2))];
            pix_down_left  = [ceil(pix(1))  floor(pix(2))];
            pix_down_right = [ceil(pix(1))  ceil(pix(2))];

            value_up_left    = (1-float_X) * (1-float_Y);    % weights of the four neighbors
            value_up_right   = float_X * (1-float_Y);
            value_down_left  = (1-float_X) * float_Y;
            value_down_right = float_X * float_Y;

            imgn(i+delta_y, j+delta_x) = value_up_left    * img(pix_up_left(1),    pix_up_left(2))   + ...
                                         value_up_right   * img(pix_up_right(1),   pix_up_right(2))  + ...
                                         value_down_left  * img(pix_down_left(1),  pix_down_left(2)) + ...
                                         value_down_right * img(pix_down_right(1), pix_down_right(2));
        end
    end
end

figure, imshow(uint8(imgn))
```
Source image
Nearest-neighbor interpolation rotation
Bilinear interpolation rotation
Postscript:
The code above does not handle the degenerate angles: when the rotation is exactly 90 or 180 degrees, floating-point round-off pushes the backward-mapped coordinates of boundary pixels just outside the [1, h] × [1, w] range, the bounds check rejects them, and the border comes out black. The fix is to extend the image border by one pixel and loosen the bounds check by one pixel on each side:
main.m
```matlab
clear all; close all; clc;

jiaodu = 90;                    % angle to rotate by; rotation is clockwise
img = imread('lena.jpg');
imshow(img);
[h, w] = size(img);             % h: height, w: width of the source image

theta = jiaodu / 180 * pi;
rot = [cos(theta) -sin(theta) 0; sin(theta) cos(theta) 0; 0 0 1];

pix1 = [1 1 1] * rot;           % top-left corner of the rotated image
pix2 = [1 w 1] * rot;           % top-right corner
pix3 = [h 1 1] * rot;           % bottom-left corner
pix4 = [h w 1] * rot;           % bottom-right corner

height = round(max([abs(pix1(1)-pix4(1))+0.5 abs(pix2(1)-pix3(1))+0.5]));  % height of the rotated image
width  = round(max([abs(pix1(2)-pix4(2))+0.5 abs(pix2(2)-pix3(2))+0.5]));  % width of the rotated image
imgn = zeros(height, width);

delta_y = abs(min([pix1(1) pix2(1) pix3(1) pix4(1)]));  % offset past the negative y axis
delta_x = abs(min([pix1(2) pix2(2) pix3(2) pix4(2)]));  % offset past the negative x axis

imgm = img_extend(img, 1);      % extend the image border by one pixel

for i = 1-delta_y : height-delta_y
    for j = 1-delta_x : width-delta_x
        pix = [i j 1] / rot;    % backward-map each output pixel to source coordinates;
                                % otherwise some output pixels would never be filled
        float_Y = pix(1) - floor(pix(1));
        float_X = pix(2) - floor(pix(2));

        if pix(1) >= -1 && pix(2) >= -1 && pix(1) <= h+1 && pix(2) <= w+1  % loosened by one pixel
            pix_up_left    = [floor(pix(1)) floor(pix(2))];  % the four neighboring points
            pix_up_right   = [floor(pix(1)) ceil(pix(2))];
            pix_down_left  = [ceil(pix(1))  floor(pix(2))];
            pix_down_right = [ceil(pix(1))  ceil(pix(2))];

            value_up_left    = (1-float_X) * (1-float_Y);    % weights of the four neighbors
            value_up_right   = float_X * (1-float_Y);
            value_down_left  = (1-float_X) * float_Y;
            value_down_right = float_X * float_Y;

            imgn(i+delta_y, j+delta_x) = value_up_left    * imgm(pix_up_left(1)+2,    pix_up_left(2)+2)   + ...
                                         value_up_right   * imgm(pix_up_right(1)+2,   pix_up_right(2)+2)  + ...
                                         value_down_left  * imgm(pix_down_left(1)+2,  pix_down_left(2)+2) + ...
                                         value_down_right * imgm(pix_down_right(1)+2, pix_down_right(2)+2);
        end
    end
end

figure, imshow(uint8(imgn))
```
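The fix can be sanity-checked in NumPy as well (again a rough 0-indexed translation of my own; `np.pad(..., mode='edge')` stands in for img_extend). With the border replicated one pixel outward and the bounds test loosened by one pixel, exact 90- and 180-degree rotations now agree with pure index rotations, with no black border:

```python
import numpy as np

def rotate_bilinear_padded(img, degrees):
    """Clockwise rotation via backward mapping + bilinear interpolation,
    with the source border replicated by one pixel so that exact 90/180
    degree rotations no longer leave black boundary pixels."""
    h, w = img.shape
    theta = np.deg2rad(degrees)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]], float) @ rot
    height = int(np.floor(corners[:, 0].max() - corners[:, 0].min() + 0.5)) + 1
    width = int(np.floor(corners[:, 1].max() - corners[:, 1].min() + 0.5)) + 1
    dy, dx = corners[:, 0].min(), corners[:, 1].min()
    pad = np.pad(img, 1, mode='edge')      # replicate-pad: plays the role of img_extend
    out = np.zeros((height, width))
    inv = rot.T
    for i in range(height):
        for j in range(width):
            y, x = np.array([i + dy, j + dx]) @ inv
            y0, y1 = int(np.floor(y)), int(np.ceil(y))
            x0, x1 = int(np.floor(x)), int(np.ceil(x))
            # bounds loosened by one pixel on every side; the padded image covers it
            if y0 >= -1 and x0 >= -1 and y1 <= h and x1 <= w:
                fy, fx = y - y0, x - x0
                # +1 converts source coordinates to padded-image coordinates
                out[i, j] = ((1 - fx) * (1 - fy) * pad[y0 + 1, x0 + 1]
                             + fx * (1 - fy) * pad[y0 + 1, x1 + 1]
                             + (1 - fx) * fy * pad[y1 + 1, x0 + 1]
                             + fx * fy * pad[y1 + 1, x1 + 1])
    return out
```

For a test image, `rotate_bilinear_padded(img, 90)` matches `np.rot90(img, -1)` to within floating-point tolerance, which is exactly the behavior the strict-bounds version fails to achieve.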
img_extend.m
```matlab
function imgm = img_extend(img, r)
    [m, n] = size(img);
    imgm = zeros(m + 2*r + 1, n + 2*r + 1);

    imgm(r+1:m+r, r+1:n+r) = img;                                 % copy the source image into the center
    imgm(1:r, r+1:n+r) = img(1:r, 1:n);                           % extend the top edge
    imgm(1:m+r, n+r+1:n+2*r+1) = imgm(1:m+r, n:n+r);              % extend the right edge
    imgm(m+r+1:m+2*r+1, r+1:n+2*r+1) = imgm(m:m+r, r+1:n+2*r+1);  % extend the bottom edge
    imgm(1:m+2*r+1, 1:r) = imgm(1:m+2*r+1, r+1:2*r);              % extend the left edge
end
```
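For what it's worth, the replicate padding that img_extend builds by hand is, apart from the extra row and column it allocates, the same thing NumPy's `np.pad` with `mode='edge'` produces:

```python
import numpy as np

# img_extend replicates the border pixels outward by r on each side
# (its MATLAB output is one row/column larger than strictly needed).
# The equivalent replicate padding in NumPy:
a = np.array([[1, 2],
              [3, 4]])
b = np.pad(a, 1, mode='edge')
print(b)
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```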