Deep Learning Note 1: Generative Adversarial Networks (Generative Adversarial Nets)


Article Link: http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

This is Goodfellow's Google Scholar homepage, for anyone who wants to pay their respects: https://scholar.google.ca/citations?user=iYN86KEAAAAJ

Recently recommended articles related to GANs:

Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks

Semi-supervised Learning with Generative Adversarial Networks

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data


In this paper, the authors propose a new framework for training generative models that sidesteps some of the difficulties other generative models face. The proposed generative adversarial network trains two models simultaneously: a generative model G and a discriminative model D. G is trained to maximize the probability that D makes a mistake, while D is trained to estimate the probability that a given sample came from the real data rather than from G. Since this is a little circular, the authors offer an analogy to help understanding. G has an adversary, the discriminative model D: G can be likened to a team of counterfeiters who try to produce fake currency and pass it without being noticed, while D can be likened to the police, who must detect the counterfeit notes. The competition drives both sides to improve until the counterfeiters (G) produce currency that the police (D) cannot distinguish from the genuine article. When both G and D are multilayer perceptrons, the framework is called adversarial nets.
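As a concrete illustration, here is a minimal sketch of G and D as multilayer perceptrons. This is not the paper's exact architecture; PyTorch, the layer sizes, and the noise/data dimensions are all assumptions chosen for readability.

```python
# A minimal sketch of G and D as multilayer perceptrons, assuming PyTorch.
# Layer sizes and dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

NOISE_DIM = 100   # dimension of the noise prior z (assumption)
DATA_DIM = 784    # dimension of a data sample x, e.g. a flattened 28x28 image

# Generator G: maps noise z to a fake sample G(z)
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, DATA_DIM),
    nn.Tanh(),        # outputs in [-1, 1], matching normalized data
)

# Discriminator D: maps a sample x to the probability that x is real
D = nn.Sequential(
    nn.Linear(DATA_DIM, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),     # D(x) in (0, 1)
)

z = torch.randn(16, NOISE_DIM)  # a batch of noise samples
fake = G(z)                     # G(z): generated samples
p_real = D(fake)                # D's estimate that each sample is real
```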



Figure 1: Understanding the training process



Figure 1 gives an intuitive picture of how an adversarial network is trained. The blue dashed line is the discriminator D, the green solid line is the distribution p_g of the generative model G, and the black dotted line is the real data-generating distribution p_x. The two horizontal lines at the bottom represent the mapping x = G(z), which carries the noise z into the space of x. From (a) to (d), you can see that as training iterates, the distribution of G moves ever closer to the real data distribution, until D is finally unable to distinguish G's samples from real data and outputs the constant value 1/2. The training objective is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$


The first part is the training objective for D: maximize the probability of classifying samples correctly, assigning the right label to both real examples and samples from G. The second part is the training objective for G: minimize log(1 - D(G(z))). In practice, G is instead trained to maximize log D(G(z)), because the original objective saturates early in learning, when D can confidently reject G's poor samples; the alternative objective provides a much stronger gradient.
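Putting the two objectives together, here is a sketch of one alternating training step, assuming PyTorch and the G, D, and NOISE_DIM definitions from the earlier sketch. The optimizer choice and learning rate are illustrative assumptions, not from the paper.

```python
# A sketch of one alternating GAN training step, assuming the G, D, and
# NOISE_DIM defined above. `real` is a batch of real data samples.
import torch
import torch.nn.functional as F

opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)

def train_step(real):
    batch = real.size(0)
    ones = torch.ones(batch, 1)    # label for "real"
    zeros = torch.zeros(batch, 1)  # label for "fake"

    # Discriminator step: maximize log D(x) + log(1 - D(G(z))),
    # i.e. minimize the binary cross-entropy on correct labels.
    z = torch.randn(batch, NOISE_DIM)
    fake = G(z).detach()           # do not backprop into G on this step
    loss_D = (F.binary_cross_entropy(D(real), ones)
              + F.binary_cross_entropy(D(fake), zeros))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step, non-saturating trick: instead of minimizing
    # log(1 - D(G(z))), which saturates early when D easily rejects
    # G's samples, maximize log D(G(z)) for a stronger gradient.
    z = torch.randn(batch, NOISE_DIM)
    loss_G = F.binary_cross_entropy(D(G(z)), ones)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```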

