OpenCV is one of the most widely used open-source computer vision libraries. It lets you detect a person's face in a picture or video with very little code.
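As a rough illustration of how little code that takes, here is a minimal face-detection sketch; the file name photo.jpg and the cv2.data.haarcascades path (shipped with recent opencv-python packages) are assumptions, not part of the original text.

import cv2

# Minimal face-detection sketch (assumed input "photo.jpg"; the bundled Haar
# cascade is located via cv2.data.haarcascades in recent opencv-python builds).
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)  # mark each face
cv2.imwrite("photo_faces.jpg", img)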
There are a number of tutorials on the internet showing how to rotate an image in OpenCV with an affine transformation (affine transform), but they do not deal with the fact that rotating a rectangular image usually cuts off its corners, so the output canvas has to be enlarged to hold the whole result. This flaw can be fixed with a little extra code.
import math
import cv2
import numpy as np

def rotate_about_center(src, angle, scale=1.):
    w = src.shape[1]
    h = src.shape[0]
    rangle = np.deg2rad(angle)  # angle in radians
    # now calculate new image width and height
    nw = (abs(np.sin(rangle)*h) + abs(np.cos(rangle)*w)) * scale
    nh = (abs(np.cos(rangle)*h) + abs(np.sin(rangle)*w)) * scale
    # ask OpenCV for the rotation matrix
    rot_mat = cv2.getRotationMatrix2D((nw*0.5, nh*0.5), angle, scale)
    # calculate the move from the old center to the new center combined
    # with the rotation
    rot_move = np.dot(rot_mat, np.array([(nw-w)*0.5, (nh-h)*0.5, 0]))
    # the move only affects the translation, so update the translation
    # part of the transform
    rot_mat[0, 2] += rot_move[0]
    rot_mat[1, 2] += rot_move[1]
    return cv2.warpAffine(src, rot_mat, (int(math.ceil(nw)), int(math.ceil(nh))),
                          flags=cv2.INTER_LANCZOS4)
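For example, rotating an image by 30 degrees while keeping all four corners could look like this (the file names are placeholders):

img = cv2.imread("input.jpg")            # placeholder input file
rotated = rotate_about_center(img, 30)   # 30 degrees counter-clockwise
cv2.imwrite("rotated.jpg", rotated)      # output canvas is larger than the input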
To go from the center of the original image to the center of the target image, the rotation must be combined with a translation in the affine transform. An affine transformation of the plane (2D) consists of a 2x2 matrix A and a translation vector a; it maps a point p = (x, y) to Ap + a. Composing two such transforms Ap + a and Bp + b, applying A first and then B, gives B(Ap + a) + b, which is another affine transform with matrix BA and vector Ba + b.
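A quick numeric check of that composition rule (the matrices and point below are arbitrary examples, not part of the rotation code):

import numpy as np

A = np.array([[0., -1.], [1., 0.]])   # 90-degree rotation
a = np.array([1., 2.])
B = np.array([[2., 0.], [0., 2.]])    # uniform scaling by 2
b = np.array([-3., 0.])
p = np.array([4., 5.])

two_steps = np.dot(B, np.dot(A, p) + a) + b               # apply A, then B
one_step = np.dot(np.dot(B, A), p) + (np.dot(B, a) + b)   # single transform (BA)p + (Ba + b)
assert np.allclose(two_steps, one_step)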
In our case we compose the rotation with a translation. A translation is an affine transform with the 2x2 identity matrix I and a move vector m, i.e. Ip + m. We want to translate to the new center first and rotate afterwards, so we apply the rotation Rp + r after Ip + m, which gives Rp + Rm + r. This explains why we only have to add two coefficients to the translation part of the rotation matrix.
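The same identity can be checked directly on the 2x3 matrix returned by cv2.getRotationMatrix2D, whose left 2x2 block is R and whose last column is r (the numbers below are arbitrary examples):

import cv2
import numpy as np

rot_mat = cv2.getRotationMatrix2D((50.0, 40.0), 30, 1.0)  # 2x3 matrix [R | r]
R, r = rot_mat[:, :2], rot_mat[:, 2]
m = np.array([10.0, -5.0])   # move from the old center to the new center
p = np.array([7.0, 3.0])     # an arbitrary point

translate_then_rotate = np.dot(R, p + m) + r        # Ip + m, then Rp + r
combined = np.dot(R, p) + (np.dot(R, m) + r)        # Rp + (Rm + r)
assert np.allclose(translate_then_rotate, combined)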
PS: Unfortunately, because numpy stores the data as an array rather than a matrix, the * operator means element-wise multiplication instead of matrix multiplication, so we have to call np.dot explicitly.
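A tiny demonstration of the difference:

import numpy as np

M = np.array([[1., 2.], [3., 4.]])
v = np.array([10., 20.])
print(M * v)         # element-wise with broadcasting: [[10. 40.] [30. 80.]]
print(np.dot(M, v))  # matrix-vector product: [ 50. 110.]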
PS: We use Lanczos interpolation, which generally gives good results when scaling up but is slow; you should choose the interpolation that fits your application.
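Switching interpolation only means changing the flags argument of cv2.warpAffine; a small sketch comparing two flags on a plain center rotation (the file name is a placeholder):

import cv2

img = cv2.imread("input.jpg")            # placeholder input file
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 30, 1.0)
lanczos = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LANCZOS4)  # slower, sharper
bilinear = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LINEAR)   # faster default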
PS: The cv2 module makes interacting with OpenCV from Python much nicer, but because numpy conventions differ from OpenCV's, there are inevitably a few rough edges. In addition, for some reason OpenCV always expects angles in degrees rather than radians. In numpy, coordinates in an image array are accessed in [y, x] order, i.e. the first index increases vertically downward and the second increases horizontally to the right. In OpenCV, sizes are expressed as (width, height), which is the opposite order.
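The mismatch shows up as soon as you mix the two conventions, for example (placeholder file name):

import cv2

img = cv2.imread("input.jpg")
h, w = img.shape[:2]                      # numpy: rows (y) first, then columns (x)
half = cv2.resize(img, (w // 2, h // 2))  # OpenCV: size given as (width, height)
print(img.shape[:2], half.shape[:2])      # prints (h, w) and (h//2, w//2)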