The Principle of GAN (Generative Adversarial Networks) (I)


1. Rationale (illustrated here with the example of generating images)

Suppose there are two networks, G (Generator) and D (Discriminator). Their functions are:

G: the generator network. It receives a random noise z and generates an image from this noise, denoted G(z);

D: the discriminator network, which judges whether an image is "real".

Its input is x, where x represents an image, and its output D(x) represents the probability that x is a real image;

if D(x) = 1, the image is judged 100% real; if D(x) = 0, the image cannot be real.
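
To make these two roles concrete, here is a minimal PyTorch sketch in which G and D are small fully connected networks. The noise dimension, layer sizes, and 28x28 image shape are illustrative assumptions, not anything prescribed by GAN itself:

```python
import torch
import torch.nn as nn

NOISE_DIM = 100      # dimension of the random noise z (assumed)
IMG_DIM = 28 * 28    # flattened image size (assumed, e.g. MNIST-like)

# G: maps random noise z to a fake image G(z)
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMG_DIM),
    nn.Tanh(),           # outputs in [-1, 1]
)

# D: maps an image x to D(x), the probability that x is a real image
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),        # outputs a probability in [0, 1]
)

z = torch.randn(16, NOISE_DIM)   # a batch of random noise
fake = G(z)                      # G(z): generated images
p_real = D(fake)                 # D(G(z)): probability each fake is judged "real"
```

Here p_real is a batch of values in [0, 1], matching the interpretation of D(x) above.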


2. During the training process, **the goal of the generator network G** is to generate images realistic enough to deceive the discriminator network D;

**the goal of D** is to tell G's generated images apart from real images.


Thus, G and D constitute a dynamic "game process".


3. The ideal result: G generates images G(z) that look "real" enough that D can no longer tell whether an image produced by G is real or fake, i.e. D(G(z)) = 0.5. At that point our goal is achieved: we obtain a generative model G that can be used to generate images.


4. Mathematical formulation: see the first GAN paper, "Generative Adversarial Networks" by Ian Goodfellow et al. (https://arxiv.org/abs/1406.2661).
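
For reference, the two-player minimax value function defined in that paper can be written as:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)]
  + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
```

D is trained to make both terms large (assign high probability to real data and low probability to G(z)), while G is trained to minimize the second term, i.e. to push D(G(z)) toward 1.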


5. Algorithm: D and G are trained alternately with stochastic gradient descent; the details are also in the paper above.
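
Below is a minimal, self-contained sketch of that alternating training loop in PyTorch. The toy fully connected G and D, batch size, learning rate, and the random stand-in "real" data are all illustrative assumptions, and the generator update uses the common non-saturating BCE form rather than the raw minimax loss:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

NOISE_DIM, IMG_DIM, BATCH = 100, 28 * 28, 64   # illustrative sizes (assumed)

# Toy G and D (same roles as above); random tensors stand in for real data
# here just so the loop runs end to end.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_label = torch.ones(BATCH, 1)
fake_label = torch.zeros(BATCH, 1)

for step in range(200):                            # a few illustrative steps
    # --- 1. Update D: push D(x) toward 1 and D(G(z)) toward 0 ---
    real_images = torch.rand(BATCH, IMG_DIM) * 2 - 1   # stand-in for a real batch
    z = torch.randn(BATCH, NOISE_DIM)
    fake_images = G(z).detach()                    # detach: do not update G here

    loss_D = bce(D(real_images), real_label) + bce(D(fake_images), fake_label)
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # --- 2. Update G: make D believe G(z) is real, i.e. push D(G(z)) toward 1 ---
    z = torch.randn(BATCH, NOISE_DIM)
    loss_G = bce(D(G(z)), real_label)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```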


6. DCGAN Principle Introduction

The most successful deep-learning model for image processing is the CNN, so how can CNN and GAN be combined? The answer is DCGAN.

The principle is the same as in GAN; the G and D above are simply replaced by two convolutional neural networks (CNNs). It is not a direct swap, however: DCGAN makes several changes to the CNN structure to improve sample quality and convergence speed. These changes are (see the sketch after this list):

A. Remove pooling layers. The G network uses transposed convolutions for upsampling, and the D network replaces pooling with strided convolutions.

B. Use batch normalization in both G and D.

C. Remove fully connected (FC) layers, turning the network into an all-convolutional network.

D. The G network uses ReLU as the activation function, except for the last layer, which uses Tanh.

E. The D network uses LeakyReLU as the activation function.
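
The following PyTorch sketch illustrates guidelines A-E. The 32x32 output resolution, channel counts, and layer depths are illustrative assumptions, not the exact architecture from the DCGAN paper:

```python
import torch
import torch.nn as nn

NZ = 100   # noise dimension (assumed)
NC = 3     # image channels (assumed)

# G: transposed convolutions for upsampling (A), batch norm (B),
#    no FC layers (C), ReLU everywhere except Tanh on the output layer (D).
G = nn.Sequential(
    nn.ConvTranspose2d(NZ, 256, 4, 1, 0, bias=False),   # 1x1  -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),  # 4x4  -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),   # 8x8  -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, NC, 4, 2, 1, bias=False),    # 16x16 -> 32x32
    nn.Tanh(),
)

# D: strided convolutions instead of pooling (A), batch norm (B),
#    no FC layers (C), LeakyReLU activations (E).
D = nn.Sequential(
    nn.Conv2d(NC, 64, 4, 2, 1, bias=False),             # 32x32 -> 16x16
    nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(64, 128, 4, 2, 1, bias=False),            # 16x16 -> 8x8
    nn.BatchNorm2d(128), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(128, 256, 4, 2, 1, bias=False),           # 8x8  -> 4x4
    nn.BatchNorm2d(256), nn.LeakyReLU(0.2, inplace=True),
    nn.Conv2d(256, 1, 4, 1, 0, bias=False),             # 4x4  -> 1x1
    nn.Sigmoid(),
)

z = torch.randn(8, NZ, 1, 1)      # noise reshaped to a 1x1 spatial map
fake = G(z)                       # shape: (8, NC, 32, 32)
prob = D(fake).view(-1)           # D(G(z)) as one probability per image
```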
