"Face change" with 200 lines of Python code


Introduction

This article shows how to write a Python script of only about 200 lines that swaps the faces of the people in two portraits.

This process can be divided into four steps:

    • Detect facial landmarks.
    • Rotate, scale, and translate the second image to fit over the first.
    • Adjust the color balance of the second image so that it matches the first.
    • Blend the features of the second image over those of the first.

The complete source code can be downloaded from here: https://github.com/matthewearl/faceswap/blob/master/faceswap.py
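The snippets below are excerpts from that script. They all assume the following shared setup, which mirrors the structure of the linked source; the two exception classes are the ones raised by the landmark detector in step 1:

import cv2
import dlib
import numpy

class TooManyFaces(Exception):
    pass

class NoFaces(Exception):
    pass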

1. Use dlib to extract facial landmarks

The script uses dlib's Python bindings to extract facial landmarks:

dlib implements the algorithm from the paper One Millisecond Face Alignment with an Ensemble of Regression Trees by Vahid Kazemi and Josephine Sullivan (http://www.csc.kth.se/~vahidk/papers/KazemiCVPR14.pdf). The algorithm itself is quite complex, but the dlib interface is very simple to use:

PREDICTOR_PATH = "/home/matt/dlib-18.16/shape_predictor_68_face_landmarks.dat"

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def get_landmarks(im):
    rects = detector(im, 1)

    if len(rects) > 1:
        raise TooManyFaces
    if len(rects) == 0:
        raise NoFaces

    return numpy.matrix([[p.x, p.y] for p in predictor(im, rects[0]).parts()])

The get_landmarks() function takes an image in the form of a numpy array and returns a 68x2 element matrix, each row of which corresponds to the x, y coordinates of one feature point in the input image.

The feature extractor (predictor) requires a rough bounding box as input to the algorithm. This is provided by a traditional face detector (detector), which returns a list of rectangles, each of which corresponds to a face in the image.

To build the feature extractor a pre-trained model is required; the relevant model can be downloaded from the dlib SourceForge repository (http://sourceforge.net/projects/dclib/files/dlib/v18.10/shape_predictor_68_face_landmarks.dat.bz2).
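To see get_landmarks() in action: the linked script loads each image and extracts its landmarks with a small helper along the following lines (SCALE_FACTOR is 1 in the source; the filenames here are placeholders for illustration):

SCALE_FACTOR = 1

def read_im_and_landmarks(fname):
    im = cv2.imread(fname, cv2.IMREAD_COLOR)
    im = cv2.resize(im, (im.shape[1] * SCALE_FACTOR,
                         im.shape[0] * SCALE_FACTOR))
    s = get_landmarks(im)
    return im, s

# Placeholder filenames:
im1, landmarks1 = read_im_and_landmarks("head1.jpg")
im2, landmarks2 = read_im_and_landmarks("head2.jpg")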

2. Align the faces with Procrustes analysis

Now we have two landmark matrices, each row of which contains the coordinates of a particular facial feature (for example, row 30 gives the coordinates of the tip of the nose). We need to work out how to rotate, translate, and scale the points of the first matrix so that they fit as closely as possible onto the points of the second, the idea being that the same transformation can then be used to overlay the second image on the first.

Putting this more mathematically, we seek s, R, and T that minimize

    sum_i || s*R*p_i^T + T - q_i^T ||^2

where R is a 2x2 orthogonal matrix, s is a scalar, T is a 2-vector, and p_i and q_i are the rows of the landmark matrices defined above.

It turns out that this kind of problem can be solved with an Ordinary Procrustes Analysis:

def transformation_from_points(points1, points2):
    points1 = points1.astype(numpy.float64)
    points2 = points2.astype(numpy.float64)

    c1 = numpy.mean(points1, axis=0)
    c2 = numpy.mean(points2, axis=0)
    points1 -= c1
    points2 -= c2

    s1 = numpy.std(points1)
    s2 = numpy.std(points2)
    points1 /= s1
    points2 /= s2

    U, S, Vt = numpy.linalg.svd(points1.T * points2)
    R = (U * Vt).T

    return numpy.vstack([numpy.hstack(((s2 / s1) * R,
                                       c2.T - (s2 / s1) * R * c1.T)),
                         numpy.matrix([0., 0., 1.])])

The code carries out the following steps:

    1. Convert the input matrices to floating point. This is required for the operations that follow.
    2. Subtract the centroid from each point set. Once an optimal scaling and rotation has been found for the two resulting point sets, the centroids c1 and c2 can be used to recover the full solution.
    3. Similarly, divide each point set by its standard deviation. This removes the scaling component of the problem.
    4. Calculate the rotation portion using singular value decomposition. See the Wikipedia article on the Orthogonal Procrustes Problem (https://en.wikipedia.org/wiki/Orthogonal_Procrustes_problem) for details of how this works.
    5. Return the complete transformation as an affine transformation matrix (https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations).
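As a quick sanity check (not part of the original script, and assuming the imports from the setup block above), we can apply a known similarity transform to a random point set and confirm that transformation_from_points() recovers it:

numpy.random.seed(0)
points1 = numpy.matrix(numpy.random.randn(68, 2))

theta = 0.3
R = numpy.matrix([[numpy.cos(theta), -numpy.sin(theta)],
                  [numpy.sin(theta),  numpy.cos(theta)]])
s = 1.7
T = numpy.matrix([10.0, -4.0])

# Rows of points2 are q_i = s * R * p_i + T.
points2 = s * points1 * R.T + T

M = transformation_from_points(points1, points2)
# The top-left 2x2 block of M should be close to s * R, and the last
# column close to T.
print(numpy.round(M, 3))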

The result can then be plugged into OpenCV's cv2.warpAffine function, which maps image 2 onto image 1:

def warp_im(im, M, dshape):
    output_im = numpy.zeros(dshape, dtype=im.dtype)
    cv2.warpAffine(im,
                   M[:2],
                   (dshape[1], dshape[0]),
                   dst=output_im,
                   borderMode=cv2.BORDER_TRANSPARENT,
                   flags=cv2.WARP_INVERSE_MAP)
    return output_im
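In the linked script, M is computed from a subset of the landmarks (eyes, brows, nose, and mouth; the point-group constants are the ones defined in step 4 below), and warp_im() is then applied to image 2:

ALIGN_POINTS = (LEFT_BROW_POINTS + RIGHT_EYE_POINTS + LEFT_EYE_POINTS +
                RIGHT_BROW_POINTS + NOSE_POINTS + MOUTH_POINTS)

M = transformation_from_points(landmarks1[ALIGN_POINTS],
                               landmarks2[ALIGN_POINTS])
warped_im2 = warp_im(im2, M, im1.shape)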

The image alignment results are as follows:

3. Correct the color of the second image

If we tried to overlay the facial features directly at this point, we would soon see a problem:

Differences in skin tone and lighting between the two images cause the edges of the overlaid region to be discontinuous. Let's try to correct that:

COLOUR_CORRECT_BLUR_FRAC = 0.6
LEFT_EYE_POINTS = list(range(42, 48))
RIGHT_EYE_POINTS = list(range(36, 42))

def correct_colours(im1, im2, landmarks1):
    blur_amount = COLOUR_CORRECT_BLUR_FRAC * numpy.linalg.norm(
                              numpy.mean(landmarks1[LEFT_EYE_POINTS], axis=0) -
                              numpy.mean(landmarks1[RIGHT_EYE_POINTS], axis=0))
    blur_amount = int(blur_amount)
    if blur_amount % 2 == 0:
        blur_amount += 1
    im1_blur = cv2.GaussianBlur(im1, (blur_amount, blur_amount), 0)
    im2_blur = cv2.GaussianBlur(im2, (blur_amount, blur_amount), 0)

    # Avoid divide-by-zero errors.
    im2_blur += 128 * (im2_blur <= 1.0)

    return (im2.astype(numpy.float64) * im1_blur.astype(numpy.float64) /
                                                im2_blur.astype(numpy.float64))

The result is this:

This function attempts to change the colouring of image 2 to match that of image 1. It does this by dividing im2 by a Gaussian blur of im2, and then multiplying by a Gaussian blur of im1. The idea is that of an RGB scaling colour correction, but instead of one constant scale factor across the whole image, each pixel gets its own localised scale factor.

In this way, differences in lighting between the two images can be accounted for, to some extent. For example, if image 1 is lit from one side but image 2 has uniform lighting, then the colour-corrected image 2 will appear darker on the unlit side as well.

That said, this is a fairly crude solution to the problem, and an appropriately sized Gaussian kernel is key. Too small, and facial features from the first image will show through in the second. Too large, and the kernel strays outside the face area, and pixels get overlaid and discoloured. Here a kernel of 0.6 times the pupillary distance is used.
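In the linked script, the colour correction is applied to the aligned copy of image 2 produced in step 2:

warped_corrected_im2 = correct_colours(im1, warped_im2, landmarks1)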

4. Blend the features of the second image onto the first

A mask is used to select which parts of image 2 and which parts of image 1 should appear in the final image:

Regions with a value of 1 (white) are areas where image 2 should show, and regions with a value of 0 (black) are areas where image 1 should show. Values between 0 and 1 give a blend of image 1 and image 2.

This is the code that generates the diagram above:

LEFT_EYE_POINTS = list(range(42, 48))
RIGHT_EYE_POINTS = list(range(36, 42))
LEFT_BROW_POINTS = list(range(22, 27))
RIGHT_BROW_POINTS = list(range(17, 22))
NOSE_POINTS = list(range(27, 35))
MOUTH_POINTS = list(range(48, 61))

OVERLAY_POINTS = [
    LEFT_EYE_POINTS + RIGHT_EYE_POINTS + LEFT_BROW_POINTS + RIGHT_BROW_POINTS,
    NOSE_POINTS + MOUTH_POINTS,
]

FEATHER_AMOUNT = 11

def draw_convex_hull(im, points, color):
    points = cv2.convexHull(points)
    cv2.fillConvexPoly(im, points, color=color)

def get_face_mask(im, landmarks):
    im = numpy.zeros(im.shape[:2], dtype=numpy.float64)

    for group in OVERLAY_POINTS:
        draw_convex_hull(im,
                         landmarks[group],
                         color=1)

    im = numpy.array([im, im, im]).transpose((1, 2, 0))

    im = (cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0) > 0) * 1.0
    im = cv2.GaussianBlur(im, (FEATHER_AMOUNT, FEATHER_AMOUNT), 0)

    return im

mask = get_face_mask(im2, landmarks2)
warped_mask = warp_im(mask, M, im1.shape)
combined_mask = numpy.max([get_face_mask(im1, landmarks1), warped_mask],
                          axis=0)

Breaking this code down:

    • get_face_mask() is defined to generate a mask for an image and a landmark matrix. It draws two white convex polygons: one surrounding the eye area, and one surrounding the nose and mouth area. It then feathers the edge of the mask outwards by 11 pixels, which helps to hide any remaining discontinuities.

    • Such a mask is generated for both images. The mask for image 2 is transformed into image 1's coordinate space, using the same transformation as in step 2.

    • The two masks are then combined into one by taking an element-wise maximum. Combining both masks ensures that the features of image 1 are covered up and that the features of image 2 show through.
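A toy illustration of that last point (not from the original script): the element-wise maximum keeps a pixel "on" if either mask covers it.

# e.g. a slice through image 1's own face mask and image 2's warped mask:
mask_a = numpy.array([0.0, 0.0, 0.5, 1.0])
mask_b = numpy.array([0.0, 1.0, 0.2, 0.0])
print(numpy.max([mask_a, mask_b], axis=0))   # [0.  1.  0.5 1. ]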

Finally, the mask is applied to give the final image:

output_im = im1 * (1.0 - combined_mask) + warped_corrected_im2 * combined_mask
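Putting the four steps together, the core of the script (following the structure of the linked source; the filenames are placeholders) reads:

im1, landmarks1 = read_im_and_landmarks("head1.jpg")
im2, landmarks2 = read_im_and_landmarks("head2.jpg")

# Step 2: align image 2 with image 1.
M = transformation_from_points(landmarks1[ALIGN_POINTS],
                               landmarks2[ALIGN_POINTS])
warped_im2 = warp_im(im2, M, im1.shape)

# Step 4: build the combined mask.
mask = get_face_mask(im2, landmarks2)
warped_mask = warp_im(mask, M, im1.shape)
combined_mask = numpy.max([get_face_mask(im1, landmarks1), warped_mask],
                          axis=0)

# Step 3: colour-correct, then blend.
warped_corrected_im2 = correct_colours(im1, warped_im2, landmarks1)
output_im = im1 * (1.0 - combined_mask) + warped_corrected_im2 * combined_mask

cv2.imwrite("output.jpg", output_im)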

Original link: http://matthewearl.github.io/2015/07/28/switching-eds-with-python/

"Face change" with 200 lines of Python code

Related Article

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.