A Summary of GAN Papers

Source: Internet
Author: User
Tags: generative nets, generative adversarial networks

The Original GAN

Goodfellow, Bengio, and their colleagues published "Generative Adversarial Nets" at NIPS 2014, the founding paper of generative adversarial networks. The idea is inspired by the two-player zero-sum game from game theory, in which the two players' total payoff is zero or a constant: whatever one side gains, the other side loses. The two players in the GAN framework are a generative model and a discriminative model. The generative model G captures the distribution of the sample data; the discriminative model D is a binary classifier that estimates the probability that a sample came from the training data rather than from the generator. G and D are generally nonlinear mapping functions, such as multilayer perceptrons or convolutional neural networks.
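Formally, the game in the original paper is the following minimax problem over the value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] +
  \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to assign high probability to real samples, while G is trained to minimize log(1 − D(G(z))); in practice the paper recommends having G maximize log D(G(z)) instead, which gives stronger gradients early in training.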

As shown in the figure, the left side is the discriminative model: when a real training sample x is fed in, D is expected to output a high probability (close to 1). The lower right is the generative model: its input is a random noise vector z drawn from a simple distribution (for example, a Gaussian), and its output is a generated image the same size as the training images. When a generated sample is fed to the discriminative model, D is expected to output a low probability (judging it to be a generated sample), while the generative model G tries to fool D into outputting a high probability (misjudging it as a real sample). This is what creates the competition and adversarial dynamic.
[Figure: Gan.png — GAN architecture, with the discriminator on the left and the generator on the right]
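The adversarial loop described above can be sketched in a toy 1-D setting. This is a minimal illustrative example, not code from the paper: the generator is a hypothetical linear map G(z) = a·z + b, the discriminator is a logistic unit D(x) = sigmoid(w·x + c), and both are updated with hand-derived gradients of the GAN losses (using the non-saturating generator loss).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Real data: 1-D Gaussian with mean 4.0 and std 0.5.
def sample_real(n):
    return rng.normal(4.0, 0.5, n)

a, b = 1.0, 0.0   # generator params: G(z) = a*z + b, starts near N(0, 1)
w, c = 0.0, 0.0   # discriminator params: D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    n = 64
    x_real = sample_real(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # --- discriminator step: maximize log D(real) + log(1 - D(fake)) ---
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((1 - d_real) * x_real) + np.mean(-d_fake * x_fake)
    grad_c = np.mean(1 - d_real) + np.mean(-d_fake)
    w += lr * grad_w
    c += lr * grad_c

    # --- generator step: maximize log D(G(z)) (non-saturating loss) ---
    d_fake = sigmoid(w * x_fake + c)
    g = (1 - d_fake) * w          # d/dx_fake of log D(x_fake)
    a += lr * np.mean(g * z)      # chain rule through x_fake = a*z + b
    b += lr * np.mean(g)

# E[G(z)] = b, so the generator's mean should drift toward the real mean.
print(f"generator mean after training: {b:.2f} (real data mean 4.0)")
```

Even in this tiny example the two updates pull against each other: D keeps re-separating real from fake, and G keeps chasing D, which is exactly the competition the figure depicts.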

GAN has many advantages: judging by actual results, it appears to produce better samples; GAN can train any kind of generator network; GAN does not require the model to follow any particular factorization, and any generator network paired with any discriminator will work; and GAN does not need Markov chains for repeated sampling, requires no inference during learning, and avoids the difficulty of approximating intractable probabilities.

GAN also has notable problems, chiefly that the network is difficult to train to convergence: current theory says GAN should perform well at the Nash equilibrium of its minimax game, but gradient descent is only guaranteed to reach that equilibrium when the objective is convex.

GAN Development
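The convergence difficulty mentioned above can be reproduced on the simplest possible minimax problem. A minimal sketch (illustrative, not from the original article): simultaneous gradient descent/ascent on the bilinear game min_x max_y x·y, whose unique equilibrium is (0, 0), makes the iterates spiral outward instead of converging.

```python
# Simultaneous gradient descent (on x) / ascent (on y) for f(x, y) = x * y.
# The equilibrium is (0, 0), yet the iterates move away from it.
x, y = 1.0, 1.0
lr = 0.1
radii = []
for step in range(100):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - lr * gx, y + lr * gy    # simultaneous update
    radii.append((x * x + y * y) ** 0.5)

print(f"distance from equilibrium after 100 steps: {radii[-1]:.3f}")
```

Each update is a scaled rotation that multiplies the distance to the equilibrium by sqrt(1 + lr²) > 1, so the trajectory diverges, a cartoon of why naive gradient updates on a GAN's non-convex game need not settle at the Nash equilibrium.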

GAN is developing very quickly; what follows is only a brief, rough grouping of related papers into several categories (feedback is welcome, and the list will be updated over time). Besides the recent ICLR 2017 Open Review papers, you can also follow the ICLR 2017 conference track; corresponding paper notes are shared in "ICLR 2017 | GAN Missing Modes and GAN".

GAN has developed very quickly from 2014 until now; in particular, there have been many GAN papers at ICLR 2016/2017, and GAN still has many open problems, so its potential is great. In general, existing GAN papers can be divided into the following categories: GAN theory; GAN in semi-supervised learning; multi-GAN; GAN with other generative models; GAN with RNN; and GAN in application.

GAN Theory

This category concerns the principles of unsupervised GAN: comparing the distance between two distributions, borrowing deep-learning techniques to make GAN converge quickly, and so on. Related papers:

GAN: Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014.
LAPGAN: Denton, Emily L., Soumith Chintala, and Rob Fergus. "Deep generative image models using a Laplacian pyramid of adversarial networks." Advances in Neural Information Processing Systems. 2015.
DCGAN: Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
Improved GAN: Salimans, Tim, et al. "Improved techniques for training GANs." arXiv preprint arXiv:1606.03498 (2016).
InfoGAN: Chen, Xi, et al. "InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets." arXiv preprint arXiv:1606.03657 (2016).
EnergyGAN: Zhao, Junbo, Michael Mathieu, and Yann LeCun. "Energy-based generative adversarial network." arXiv preprint (2016).
Creswell, Antonia, and Anil A. Bharath. "Task specific adversarial cost function." arXiv preprint arXiv:1609.08661 (2016).
f-GAN: Nowozin, Sebastian, Botond Cseke, and Ryota Tomioka. "f-GAN: Training generative neural samplers using variational divergence minimization." arXiv preprint (2016).
Unrolled Generative Adversarial Networks, ICLR 2017 Open Review.
Improving Generative Adversarial Networks with Denoising Feature Matching, ICLR 2017 Open Review.
Mode Regularized Generative Adversarial Networks, ICLR 2017 Open Review.
b-GAN: Unified Framework of Generative Adversarial Networks, ICLR 2017 Open Review.
Mohamed, Shakir, and Balaji Lakshminarayanan. "Learning in implicit generative models." arXiv preprint arXiv:1610.03483 (2016).

GAN in Semi-supervised Learning

This category of research applies GAN to semi-supervised learning. Related papers:

Springenberg, Jost Tobias. "Unsupervised and semi-supervised learning with categorical generative adversarial networks." arXiv preprint arXiv:1511.06390 (2015).
Odena, Augustus. "Semi-supervised learning with generative adversarial networks." arXiv preprint arXiv:1606.01583 (2016).

Multi-GAN

This category of research combines several GANs. Related papers:

CoupledGAN: Liu, Ming-Yu, and Oncel Tuzel. "Coupled generative adversarial networks." arXiv preprint (2016).
Wang, Xiaolong, and Abhinav Gupta. "Generative image modeling using style and structure adversarial networks." arXiv preprint arXiv:1603.05631 (2016).
Generative Adversarial Parallelization, ICLR 2017 Open Review.
LR-GAN: Layered Recursive Generative Adversarial Networks for Image Generation, ICLR 2017 Open Review.

GAN with Other Generative Models

This category of research combines GAN with other generative models. Related papers:

Dosovitskiy, Alexey, and Thomas Brox. "Generating images with perceptual similarity metrics based on deep networks." arXiv preprint arXiv:1602.02644 (2016).
Larsen, Anders Boesen Lindbo, Søren Kaae Sønderby, et al. "Autoencoding beyond pixels using a learned similarity metric." arXiv preprint arXiv:1512.09300 (2015).
Theis, Lucas, and Matthias Bethge. "Generative image modeling using spatial LSTMs." Advances in Neural Information Processing Systems. 2015.

GAN with RNN

This category of research combines GAN with RNN (see also Pixel RNN). Related papers:

Im, Daniel Jiwoong, et al. "Generating images with recurrent adversarial networks." arXiv preprint arXiv:1602.05110 (2016).
Kwak, Hanock, and Byoung-Tak Zhang. "Generating images part by part with composite generative adversarial networks." arXiv preprint arXiv:1607.05387 (2016).
Yu, Lantao, et al. "SeqGAN: Sequence generative adversarial nets with policy gradient." arXiv preprint arXiv:1609.05473 (2016).

GAN in Application

This category of research applies GAN to practical uses (excluding plain image generation). Related papers:

Zhu, Jun-Yan, et al. "Generative visual manipulation on the natural image manifold." European Conference on Computer Vision. Springer International Publishing, 2016.
Creswell, Antonia, and Anil Anthony Bharath. "Adversarial training for sketch retrieval." European Conference on Computer Vision. Springer International Publishing, 2016.
Reed, Scott, et al. "Generative adversarial text to image synthesis." arXiv preprint arXiv:1605.05396 (2016).
Ravanbakhsh, Siamak, et al. "Enabling dark energy science with deep generative models of galaxy images." arXiv preprint arXiv:1609.05796 (2016).
Abadi, Martín, and David G. Andersen. "Learning to protect communications with adversarial neural cryptography." arXiv preprint arXiv:1610.06918 (2016).
Odena, Augustus, Christopher Olah, and Jonathon Shlens. "Conditional image synthesis with auxiliary classifier GANs." arXiv preprint arXiv:1610.09585 (2016).
Ledig, Christian, et al. "Photo-realistic single image super-resolution using a generative adversarial network." arXiv preprint arXiv:1609.04802 (2016).
Nguyen, Anh, et al. "Synthesizing the preferred inputs for neurons in neural networks via deep generator networks." arXiv preprint arXiv:1605.09304 (2016).


Original address: http://www.jianshu.com/p/2acb804dd811
