1. A brief introduction to geometric transformations
Unlike point operations, geometric transformations change the spatial relationship between pixels in an image; they can be regarded as the process of moving pixels within the image. Definition: g(x, y) = f[a(x, y), b(x, y)], where f(x, y) is the input image, g(x, y) is the output image, and a(x, y) and b(x, y) are the spatial transformations; if they are continuous, connectivity is preserved in the image. Geometric operations are implemented by pixel filling, where each pixel's gray level is decided by an interpolation algorithm. Two main functions: cv2.warpAffine takes a 2×3 transformation matrix, and cv2.warpPerspective takes a 3×3 transformation matrix.
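To make the definition concrete, here is a minimal pure-NumPy sketch (not OpenCV itself) of how a 2×3 matrix maps a single pixel coordinate; the matrix values are illustrative:

```python
import numpy as np

# A 2x3 affine matrix maps a homogeneous pixel coordinate [x, y, 1]
# to a new location [x', y']; this is the mapping cv2.warpAffine applies
# to every pixel. The matrix values here are illustrative.
M = np.float32([[1, 0, 100],
                [0, 1, 50]])    # translate by (100, 50)

point = np.array([10, 20, 1])   # pixel at (x=10, y=20), homogeneous form
mapped = M @ point
print(mapped)                   # [110.  70.]
```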
2. Scaling
Changes the size of an image.

img = cv2.imread('messi5.jpg')
# The second argument should be the size of the output image; since we set scaling factors instead, it is None here
res = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
# Or set the size of the output image directly, so no scaling factors are needed
height, width = img.shape[:2]
res = cv2.resize(img, (2*width, 2*height), interpolation=cv2.INTER_CUBIC)
# cv2.INTER_AREA is recommended for shrinking; cv2.INTER_CUBIC (slow) or cv2.INTER_LINEAR for enlarging
# The default interpolation for all operations that change the image size is cv2.INTER_LINEAR
3. Translation (shifting)
Vacated regions are filled with pixel value 0, and parts moved outside the image are discarded. To translate along the (x, y) direction by a distance (t_x, t_y), construct the translation matrix M as follows:

M = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \end{bmatrix}

M = np.float32([[1, 0, 100], [0, 1, 50]])
dst = cv2.warpAffine(img, M, (cols, rows))
# Warning: the third argument of cv2.warpAffine() is the size of the output image,
# which should be in the form (width, height). Remember width = number of columns, height = number of rows.
4. Rotation
You need to specify the rotation center, the rotation angle, and the scaling factor after rotation to get a 2×3 rotation matrix, then call cv2.warpAffine() to output the rotated image.
img = cv2.imread('messi5.jpg', 0)
rows, cols = img.shape
# The first parameter is the rotation center, the second the rotation angle (e.g. 90 degrees),
# the third the scaling factor after rotation
# By choosing the rotation center, scaling factor and window size you can keep the rotated image within bounds
M = cv2.getRotationMatrix2D((cols/2, rows/2), 90, 1)
# The third parameter is the size of the output image
dst = cv2.warpAffine(img, M, (cols, rows))
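As a sanity check on what the rotation matrix contains, the formula the OpenCV reference documents for cv2.getRotationMatrix2D can be reproduced in plain NumPy; rotation_matrix_2d below is just a local helper for illustration:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale):
    # Formula from the OpenCV reference for cv2.getRotationMatrix2D:
    # alpha = scale*cos(angle), beta = scale*sin(angle)
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))
    b = scale * np.sin(np.radians(angle_deg))
    return np.array([[a,  b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

M = rotation_matrix_2d((50, 50), 90, 1)
# The rotation center maps onto itself:
print(M @ np.array([50, 50, 1]))   # approximately [50. 50.]
```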
5. Affine Transformation
The affine transformation belongs to the projective geometric transformations and is used for image registration, for example as a preprocessing step before comparison or matching. It is computed as the product of the coordinate vector and the transformation matrix, in other words a matrix operation. Under an affine transformation, all parallel lines in the original image remain parallel in the result image. The transformation matrix is determined from 3 fixed vertices: we need three points from the input image and their corresponding locations in the output image. cv2.getAffineTransform then creates a 2×3 matrix, which is passed to cv2.warpAffine.
img = cv2.imread('drawing.png')
rows, cols, ch = img.shape
# Specify three points in the original image and their positions in the output image
# (the coordinates below are example values)
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M = cv2.getAffineTransform(pts1, pts2)
# The third parameter is the size of the output image
dst = cv2.warpAffine(img, M, (cols, rows))
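Under the hood, finding the affine matrix is just solving a small linear system: six unknowns in M, two equations per point pair, hence exactly three pairs. A hedged NumPy sketch of that computation, with example coordinates:

```python
import numpy as np

# Three source points and where they should land (coordinates are example values).
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])

# Stack rows [x, y, 1]; solving A @ M.T = pts2 gives the 2x3 affine matrix,
# which is what cv2.getAffineTransform computes from three point pairs.
A = np.hstack([pts1, np.ones((3, 1), np.float32)])
M = np.linalg.solve(A, pts2).T

# Each source point maps onto its destination:
print(M @ np.array([50, 50, 1]))   # approximately [ 10. 100.]
```

The points must not be collinear, otherwise the system has no unique solution.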
6. Perspective Transformation
Projecting from a frontal view to a bird's-eye view helps eliminate the near-large, far-small perspective effect in an image. A perspective transformation requires a 3×3 mapping matrix M, which can be solved directly with the OpenCV function cv2.getPerspectiveTransform(), but this requires you to specify 4 points on the input image and their corresponding locations on the output image, with no 3 of the 4 points collinear.
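The 3×3 matrix acts on homogeneous coordinates with a division by the third component w, and that division is what produces the near-large, far-small effect. A pure-NumPy sketch of the mapping (the matrix values are illustrative, not from a real calibration):

```python
import numpy as np

# A 3x3 perspective (homography) matrix; values are illustrative.
# The nonzero bottom row is what distinguishes it from an affine map.
H = np.array([[1.0, 0.0,   0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.001, 1.0]])

def apply_perspective(H, x, y):
    # Homogeneous mapping with a divide by w -- unlike the affine case.
    xp, yp, w = H @ np.array([x, y, 1.0])
    return float(xp / w), float(yp / w)

print(apply_perspective(H, 100, 0))    # (100.0, 0.0) -- top row unchanged
print(apply_perspective(H, 100, 500))  # ~ (66.7, 333.3) -- distant rows shrink
```

With OpenCV, the matrix would come from M = cv2.getPerspectiveTransform(pts1, pts2) given four point pairs, and the image itself is warped with dst = cv2.warpPerspective(img, M, (width, height)).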