Course 4 - Convolutional Neural Networks - Week 4 (Neural Style Transfer)

0 - Background

Style transfer takes a content image and a style image and merges the two, producing a new image that combines the content of the first with the style of the second.
The required dependencies are as follows:

import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf

%matplotlib inline
1 - Transfer Learning

Transfer learning applies what was learned on one task to a new task. Neural Style Transfer (NST) builds on a convolutional network that has already been trained for another task.
We use the VGG-19 network, trained on the large ImageNet database, which has learned a wide range of low-level and high-level features.
Model loading:

model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
# Note: the weights can be downloaded from www.vlfeat.org/matconvnet/models/beta16/imagenet-vgg-verydeep-19.mat (fairly large, about 500 MB)

Output information:

{'input':    <tf.Variable 'Variable:0'  shape=(1, H, W, 3)  dtype=float32_ref>,
 'conv1_1':  <tf.Tensor 'Relu:0'        shape=(1, H, W, 64) dtype=float32>,
 'conv1_2':  <tf.Tensor 'Relu_1:0'      shape=(1, H, W, 64) dtype=float32>,
 'avgpool1': <tf.Tensor 'AvgPool:0'     ... dtype=float32>,
 'conv2_1':  <tf.Tensor 'Relu_2:0'      ... dtype=float32>,
 'conv2_2':  <tf.Tensor 'Relu_3:0'      ... dtype=float32>,
 'avgpool2': <tf.Tensor 'AvgPool_1:0'   ... dtype=float32>,
 'conv3_1':  <tf.Tensor 'Relu_4:0'      ... dtype=float32>,
 ...
 'conv5_4':  <tf.Tensor 'Relu_15:0'     ... dtype=float32>,
 'avgpool5': <tf.Tensor 'AvgPool_4:0'   ... dtype=float32>}

(Output abbreviated; the spatial dimensions H and W depend on the input image size.)

The model is stored in a dictionary where each key is a layer name and the corresponding value is that layer's tensor. We can feed an image into the model as follows:

model["input"].assign(image)

When we want to inspect the activation values of a particular layer, we can run:

sess.run(model["conv4_2"])

Here model["conv4_2"] is the tensor corresponding to layer conv4_2.

2 - Neural Style Transfer

The style transfer algorithm is built in three steps:

  • Create the content cost function J_content(C, G)
  • Create the style cost function J_style(S, G)
  • Combine them into the overall cost function J(G) = α·J_content(C, G) + β·J_style(S, G)

2-1 - Computing the Content Cost
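As a minimal sketch, the three cost functions combine linearly into the overall cost J(G). The weights alpha=10 and beta=40 below are illustrative defaults, not values stated in the text:

```python
def total_cost(J_content, J_style, alpha=10, beta=40):
    # Overall cost J(G) = alpha * J_content(C, G) + beta * J_style(S, G).
    # alpha weights content fidelity, beta weights style fidelity;
    # the defaults here are assumptions for illustration.
    return alpha * J_content + beta * J_style

print(total_cost(0.5, 0.2))  # 10 * 0.5 + 40 * 0.2 = 13.0
```

Raising beta relative to alpha pushes the generated image toward the style image at the expense of content similarity.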

The content image C can be loaded and displayed as follows:

content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)

For the choice of layer, we generally pick one that is neither too deep nor too shallow. If the layer is too deep, it extracts very high-level features; the content may match, but the visual result is poor. If the layer is too shallow, the extracted features are too low-level, which is also unsatisfactory. You can experiment with different layers and observe the results.
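As a concrete sketch of what the content cost computes at a chosen layer: it compares the layer's activations for C and G with a scaled squared difference. A minimal NumPy version follows; the 1/(4·n_H·n_W·n_C) scaling is the common course convention and is an assumption here:

```python
import numpy as np

def content_cost(a_C, a_G):
    """Content cost between the layer activations of the content image
    (a_C) and the generated image (a_G), each of shape (1, n_H, n_W, n_C).

    J_content = 1 / (4 * n_H * n_W * n_C) * sum((a_C - a_G) ** 2)
    """
    _, n_H, n_W, n_C = a_C.shape
    return np.sum((a_C - a_G) ** 2) / (4 * n_H * n_W * n_C)

# Example with random activations standing in for real VGG outputs
rng = np.random.default_rng(0)
a_C = rng.standard_normal((1, 4, 4, 3))
a_G = rng.standard_normal((1, 4, 4, 3))
print(content_cost(a_C, a_C))      # identical activations -> 0.0
print(content_cost(a_C, a_G) > 0)  # any difference -> positive cost
```

Minimizing this cost pushes the generated image's activations at the chosen layer toward those of the content image.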

Suppose we select layer l for the analysis. Image C is fed into the pre-trained VGG network and forward-propagated; let a^(C
