Article Link: http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
This is Goodfellow's Google Scholar page, well worth a look: https://scholar.google.ca/citations?user=iYN86KEAAAAJ
Recent recommended articles related to GANs:
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
Semi-supervised Learning with Generative Adversarial Networks
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
In this paper, the authors propose a new procedure for estimating generative models that sidesteps several difficulties faced by earlier approaches. The proposed generative adversarial network trains two models simultaneously: a generative model G and a discriminative model D. G is trained to make D as likely as possible to err, while D is trained to judge whether a sample came from the model or from the real data. Since this is a little hard to follow, the authors offer a metaphor: the generative model G can be likened to a team of counterfeiters, who try to produce fake currency and use it without being caught, while its adversary, the discriminative model D, is like the police, who try to detect the counterfeit currency. The competition continues until the counterfeiters (G) produce fakes that the police (D) cannot distinguish from genuine currency. When both G and D are multilayer perceptrons, the framework is called adversarial nets.
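The alternating game between G and D can be sketched as a toy 1-D experiment. This is a minimal illustrative sketch, not the paper's code: the Gaussian "real" data, the linear generator G(z) = w_g·z + b_g, the logistic discriminator, and the hand-derived gradients are all assumptions made for this example.

```python
# Toy GAN in plain NumPy: G must learn to imitate real data drawn from N(4, 1.25).
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # "Real" data: a 1-D Gaussian the generator must imitate.
    return rng.normal(loc=4.0, scale=1.25, size=n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = w_g * z + b_g maps noise z ~ N(0, 1) into data space.
w_g, b_g = 1.0, 0.0
# Discriminator D(x) = sigmoid(w_d * x + b_d) outputs P(x is real).
w_d, b_d = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: ascend log D(x) + log(1 - D(G(z))) ---
    x = sample_real(batch)
    z = rng.normal(size=batch)
    g = w_g * z + b_g
    d_real = sigmoid(w_d * x + b_d)
    d_fake = sigmoid(w_d * g + b_d)
    # Closed-form gradients of the value function w.r.t. D's parameters.
    w_d += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    b_d += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # --- Generator update: ascend log D(G(z)) (the stronger-gradient form) ---
    z = rng.normal(size=batch)
    g = w_g * z + b_g
    d_fake = sigmoid(w_d * g + b_d)
    # d/dg log D(g) = (1 - D(g)) * w_d, then the chain rule through G.
    w_g += lr * np.mean((1 - d_fake) * w_d * z)
    b_g += lr * np.mean((1 - d_fake) * w_d)

print(f"generator mean ~ {b_g:.2f}, real data mean = 4.0")
```

With these settings the generator's offset b_g drifts toward the real mean of 4, exactly the behavior panels (a) through (d) of Figure 1 depict.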
Figure 1: Intuition for the training process
Figure 1 gives an intuitive picture of how an adversarial network is trained. The blue dashed line is the discriminative distribution D, the green solid line is the generator's distribution p_g, and the black dotted line is the real data distribution p_x. The two horizontal lines at the bottom represent the mapping x = G(z), i.e., how the noise z is mapped into x-space. From (a) to (d), as training iterates, the distribution of G moves closer and closer to the real data, until D can no longer distinguish G's samples from real data and settles at the constant value 0.5. The training objective is:
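The paper's objective is the two-player minimax game over the value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
+ \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

For a fixed G, the optimal discriminator is D^*_G(x) = p_data(x) / (p_data(x) + p_g(x)), which is exactly why D converges to 1/2 in Figure 1 once p_g = p_data.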
The first term is D's training objective: maximize the probability of assigning the correct label to both real samples and samples from G. The second term is G's training objective: minimize log(1 - D(G(z))). In practice, G is instead trained to maximize log D(G(z)), because the original objective saturates early in learning, when D can easily reject G's samples, and the alternative provides much stronger gradients.
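The saturation claim can be checked numerically. For a discriminator logit o on a generated sample, D(G(z)) = sigmoid(o), and the closed-form gradients are d/do log(1 - sigmoid(o)) = -sigmoid(o) for the original loss versus d/do log(sigmoid(o)) = 1 - sigmoid(o) for the alternative. This is an illustrative sketch (the specific logit value is an assumption), not code from the paper:

```python
# Why the original generator loss saturates early in training.
import math

def sigmoid(o):
    return 1.0 / (1.0 + math.exp(-o))

# Early on, the generator is poor and D confidently rejects its samples:
# D(G(z)) is near 0, i.e., the logit o is very negative (chosen here as -6).
o = -6.0
d = sigmoid(o)                   # ~0.0025: D is sure the sample is fake

grad_saturating = -d             # gradient of log(1 - D): nearly zero
grad_nonsaturating = 1.0 - d     # gradient of log D: close to 1

print(d, grad_saturating, grad_nonsaturating)
```

So exactly when G most needs a learning signal, the original loss gives almost none, while maximizing log D(G(z)) gives a near-maximal gradient; both objectives still share the same fixed point.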