Adding borders to a source image
cv2.copyMakeBorder(src, top, bottom, left, right, borderType, value)
src: the source image
top, bottom, left, right: the width of the border, in pixels, in each of the four directions
borderType: the type of border
There are several types:
cv2.BORDER_REPLICATE # the edge pixel is simply repeated: aaaaaa|abcdefgh|hhhhhhh
cv2.BORDER_REFLECT # mirror reflection, including the edge pixel: fedcba|abcdefgh|hgfedcb
cv2.BORDER_REFLECT_101 # mirror reflection, similar to the above, but the edge pixel itself is not repeated: gfedcb|abcdefgh|gfedcba
cv2.BORDER_WRAP # wrap-around, as if the image were tiled: cdefgh|abcdefgh|abcdefg
cv2.BORDER_CONSTANT # a constant border filled with the given value: iiiiii|abcdefgh|iiiiiii (i = value)
value: the border colour; only meaningful for BORDER_CONSTANT
The code and results are as follows:
import numpy as np
import cv2
from matplotlib import pyplot as plt

RED = [255, 0, 0]
img = cv2.imread('/home/zh/pic/3.png')
img1 = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)

replicate = cv2.copyMakeBorder(img1, 100, 100, 100, 100, cv2.BORDER_REPLICATE)
reflect = cv2.copyMakeBorder(img1, 100, 100, 100, 100, cv2.BORDER_REFLECT)
reflect101 = cv2.copyMakeBorder(img1, 100, 100, 100, 100, cv2.BORDER_REFLECT_101)
wrap = cv2.copyMakeBorder(img1, 100, 100, 100, 100, cv2.BORDER_WRAP)
constant = cv2.copyMakeBorder(img1, 100, 100, 100, 100, cv2.BORDER_CONSTANT, value=RED)

plt.subplot(231), plt.imshow(img1), plt.title('ORIGINAL')
plt.subplot(232), plt.imshow(replicate), plt.title('REPLICATE')
plt.subplot(233), plt.imshow(reflect), plt.title('REFLECT')
plt.subplot(234), plt.imshow(reflect101), plt.title('REFLECT_101')
plt.subplot(235), plt.imshow(wrap), plt.title('WRAP')
plt.subplot(236), plt.imshow(constant), plt.title('CONSTANT')
plt.show()
Geometric transformations of images:
The common geometric transformations are scaling, affine, and perspective transformations, which can be accomplished by the following functions.
The first is the scaling transform, cv2.resize():
dst = cv2.resize(src, dsize[, dst[, fx[, fy[, interpolation]]]])
There are two positional parameters, src and dsize, which are the source image and the size of the scaled image, respectively.
The keyword parameters are dst, fx, fy and interpolation.
dst is the scaled image, and fx, fy are the scaling factors in the x and y directions (used when dsize is (0, 0)).
interpolation is the interpolation method used when resizing; the common choices are:
cv2.INTER_NEAREST # nearest-neighbour interpolation
cv2.INTER_LINEAR # bilinear interpolation (the default)
cv2.INTER_AREA # resampling using the pixel area relation; this method avoids ripple (moiré) artifacts when an image shrinks, and behaves like nearest-neighbour when it is enlarged
cv2.INTER_CUBIC # bicubic interpolation
Affine transformation: cv2.warpAffine()
The positional parameters are src, M and dsize, representing the source image, the 2x3 transformation matrix, and the (width, height) of the transformed image, respectively.
Here, let's talk about the transformation matrices used by the affine transformation.
The translation (displacement) matrix is:

M = [[1, 0, tx],
     [0, 1, ty]]

where (tx, ty) is the shift in the x and y directions.
Rotation transformation matrix:
The standard rotation matrix is

[[cos θ, -sin θ],
 [sin θ,  cos θ]]

but this matrix does not account for translation or scaling. The rotation matrix used in OpenCV, which rotates about an arbitrary center with optional scaling, is:

M = [[ α, β, (1 - α)·cx - β·cy],
     [-β, α, β·cx + (1 - α)·cy]]

where α = scale·cos θ, β = scale·sin θ, and (cx, cy) is the rotation center.
OpenCV provides a function to obtain such a matrix:
M = cv2.getRotationMatrix2D(rotate_center, degree, scale)
rotate_center is a 2-tuple giving the coordinates of the rotation center, degree is the angle of counterclockwise rotation in degrees, and scale is the scaling factor.
Affine transformation matrices:
The general affine transformation matrix has the 2x3 form

M = [[a11, a12, b1],
     [a21, a22, b2]]

and can be obtained from three pairs of corresponding points with cv2.getAffineTransform(pts1, pts2).
Perspective transform: cv2.warpPerspective()
The positional parameters src, M and dsize represent the source image, the 3x3 transformation matrix, and the size of the output image, respectively.
The keyword parameters are flags, borderMode and borderValue; for their exact meaning, look up the warpPerspective function at
http://docs.opencv.org/modules/imgproc/doc/geometric_transformations.html
The perspective transformation matrix is generally not easy to write down directly, but the positions of points before and after the transformation usually are. OpenCV therefore provides the getPerspectiveTransform() function to obtain the perspective transformation matrix:
M = cv2.getPerspectiveTransform(pts1, pts2)
pts1 and pts2 are the positions of four points before and after the transformation, respectively.
(In fact, the matrix of any of these transformations can be obtained from the coordinates of points before and after the transformation, i.e. via the function above, because every one of them is a special case of the perspective transformation.)
Finally, an example demonstrates all of these transformation functions:
import numpy as np
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('/home/zh/pic/3.png')
rows, cols, channels = img.shape

# Scaling
res = cv2.resize(img, (cols // 2, rows // 2))

# 1. shift
M_shift = np.float32([[1, 0, 100], [0, 1, 50]])
img_shift = cv2.warpAffine(img, M_shift, (cols, rows))

# 2. rotate
M_rotate = cv2.getRotationMatrix2D((cols / 2, rows / 2), 90, 1)
img_rotate = cv2.warpAffine(img, M_rotate, (cols, rows))

# 3. affine
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M_affine = cv2.getAffineTransform(pts1, pts2)
img_affine = cv2.warpAffine(img, M_affine, (cols, rows))

# 4. perspective
pts3 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts4 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M_perspective = cv2.getPerspectiveTransform(pts3, pts4)
img_perspective = cv2.warpPerspective(img, M_perspective, (cols, rows))

print('shift:\n', M_shift)
print('rotate:\n', M_rotate)
print('affine:\n', M_affine)
print('perspective:\n', M_perspective)

plt.subplot(231), plt.imshow(img), plt.title('src')
plt.subplot(232), plt.imshow(res), plt.title('scale')
plt.subplot(233), plt.imshow(img_shift), plt.title('shift')
plt.subplot(234), plt.imshow(img_rotate), plt.title('rotate')
plt.subplot(235), plt.imshow(img_affine), plt.title('affine')
plt.subplot(236), plt.imshow(img_perspective), plt.title('perspective')
plt.show()
The results are as follows: