Andrew Ng Deep Learning: Image Style Transfer


Deep Learning & Art: Neural Style Transfer

Welcome to the second assignment of this week. In this assignment, you'll learn about Neural Style Transfer. This algorithm was created by Gatys et al. (https://arxiv.org/abs/1508.06576).

In this assignment, you'll:
- Implement the neural style transfer algorithm
- Generate novel artistic images using your algorithm

Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
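To make that idea concrete, here is a toy sketch (not the NST cost itself, just the optimization pattern) showing how, in TensorFlow, the trainable variable can be the image's pixels rather than network weights. The target tensor below is a made-up stand-in for illustration:

# Toy illustration: the trainable variable is the image itself, not network weights.
import tensorflow as tf

target = tf.constant(0.5, shape=[1, 300, 400, 3])                    # made-up target pixels
generated_image = tf.Variable(tf.random_uniform([1, 300, 400, 3]))   # pixels are the parameters
J = tf.reduce_sum(tf.square(generated_image - target))               # cost depends on pixel values
train_step = tf.train.AdamOptimizer(2.0).minimize(J)                 # gradient steps update the pixels

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(100):
        sess.run(train_step)                                         # each step changes the image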

import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf

%matplotlib inline
1 - Problem Statement

Neural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of the image S.

In this example, you are going to generate an image of the Louvre Museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the Impressionist movement (style image S).

Let's see how you can do this.

2 - Transfer Learning

Neural Style Transfer (NST) uses a previously trained convolutional network and builds on top of it. The idea of using a network trained on a different task and applying it to a new task is called transfer learning.

Following the original NST paper (https://arxiv.org/abs/1508.06576), we'll use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and has thus learned to recognize a variety of low-level features (at the earlier layers) and high-level features (at the deeper layers).

Run the following code to load parameters from the VGG model. This could take a few seconds.

model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>,
 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>,
 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>,
 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>,
 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>,
 ...
 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>,
 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}

The model is stored in a Python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the tf.assign function. In particular, you will use the assign function like this:

model["Input"].assign (image)

This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer conv4_2, when the network is run on this image, you would run a TensorFlow session on the correct tensor conv4_2, as follows:

sess.run(model["conv4_2"])
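Putting the two steps together, a minimal sketch of the whole pattern might look like this (it assumes model was loaded above and that content_image has already been preprocessed to the model's input shape, for example with a helper from nst_utils):

# Feed an image through the network and read off one layer's activations.
# Assumes `model` was loaded above and `content_image` already matches the
# model's expected input shape (e.g. via a preprocessing helper from nst_utils).
sess = tf.Session()
sess.run(model["input"].assign(content_image))   # make the image the network's input
a_C = sess.run(model["conv4_2"])                 # activations of conv4_2 for this image
print(a_C.shape)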
3 - Neural Style Transfer

We'll build the NST algorithm in three steps:

- Build the content cost function $J_{content}(C,G)$
- Build the style cost function $J_{style}(S,G)$
- Put them together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$ (a sketch of this last step follows the list)
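The third step is the simplest; here is a minimal sketch of the combined cost, with alpha and beta as tunable hyperparameter weights (the default values below are illustrative):

# A minimal sketch of the combined cost J(G) = alpha * J_content + beta * J_style.
# The default weights here are illustrative; treat alpha and beta as hyperparameters.
def total_cost(J_content, J_style, alpha=10, beta=40):
    return alpha * J_content + beta * J_style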

3.1 - Computing the content cost

In our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.

content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
<matplotlib.image.AxesImage at 0x181d5ec6d8>

The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.

3.1.1 - How do you ensure the generated image G matches the content of the image C?

As we saw in lecture, the earlier (shallower) layers of a convnet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes.

We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network, neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers to see how the results vary.)

So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be an $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: set G as the input, and run forward propagation. Let $a^{(G)}$ be the corresponding hidden layer activation. We will define the content cost function as:

$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C} \sum_{\text{all entries}} \left(a^{(C)} - a^{(G)}\right)^2$$
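A minimal TensorFlow sketch of this cost, written directly from the formula above (the reshape into a 2D matrix is optional here, but mirrors the unrolling used for the style cost):

# Content cost from hidden-layer activations a_C, a_G of shape (1, n_H, n_W, n_C).
def compute_content_cost(a_C, a_G):
    m, n_H, n_W, n_C = a_G.get_shape().as_list()
    # Unroll each activation volume into an (n_H * n_W, n_C) matrix
    a_C_unrolled = tf.reshape(a_C, [n_H * n_W, n_C])
    a_G_unrolled = tf.reshape(a_G, [n_H * n_W, n_C])
    # J_content = sum of squared differences, scaled by 1 / (4 * n_H * n_W * n_C)
    J_content = tf.reduce_sum(tf.square(a_C_unrolled - a_G_unrolled)) / (4 * n_H * n_W * n_C)
    return J_content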
