50 Lines of Code to Implement a Generative Adversarial Network (GAN)

Source: Internet
Author: User
Tags: generator, rounds, PyTorch, generative adversarial networks, neural net

Meet the impressive Dev Nag: a former Google senior engineer and founder/CTO of the AI startup Wavefront. This article describes how he used fewer than 50 lines of code on the PyTorch platform to complete the training of a GAN.

In 2014, Ian Goodfellow and his colleagues at the University of Montreal published a stunning paper introducing the world to GANs, or generative adversarial networks. Through an innovative combination of computational graphs and game theory, they showed that, given enough modeling power, two models fighting against each other would be able to co-train through plain old backpropagation.

The models play two distinct (literally, adversarial) roles. Given some real data set R, G is the generator, trying to create fake data that looks just like the genuine data, while D is the discriminator, getting data from either the real set or G and labeling the difference. Goodfellow's metaphor (and a fine one it is) is that G is like a team of forgers trying to match real paintings with their output, while D is the team of detectives trying to tell the difference. (Except that the forgers G never get to see the original data, only the judgments of D. They're like blind forgers.)

In the ideal case, both D and G would get better over time until G had essentially become a "master forger" of the genuine article and D was at a loss, "unable to differentiate between the two distributions."

In practice, what Goodfellow had shown is that G would be able to perform a form of unsupervised learning on the original dataset, finding some way of representing that data in a (possibly) much lower-dimensional manner. And as Yann LeCun famously stated, unsupervised learning is the "cake" of true AI.

This powerful technique seems like it must require a metric ton of code just to get started, right? Nope. Using PyTorch, we can actually create a very simple GAN in under 50 lines of code. There are really only 5 components to think about:

R: the original, genuine data set
I: the random noise that goes into the generator as a source of entropy
G: the generator, which tries to copy/mimic the original data set
D: the discriminator, which tries to tell apart G's output from R
The actual "training" loop, where we teach G to trick D and D to beware of G.

1.) R: In our case, we'll start with the simplest possible R, a bell curve. This function takes a mean and a standard deviation and returns a function which provides the right shape of sample data from a Gaussian with those parameters. In our sample code, we'll use a mean of 4.0 and a standard deviation of 1.25.
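A minimal sketch of such a sampler, assuming the parameters above (the function name is illustrative, not necessarily the article's original):

```python
import torch

def get_distribution_sampler(mu, sigma):
    # Returns a function that draws n Gaussian samples as a 1 x n tensor
    return lambda n: torch.randn(1, n) * sigma + mu

d_sampler = get_distribution_sampler(4.0, 1.25)  # our "real" data R
```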

2.) I: The input into the generator is also random, but to make our job a little bit harder, let's use a uniform distribution rather than a normal one. This means that our model G can't simply shift/scale the input to copy R, but has to reshape the data in a non-linear way.
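Something like the following sketch would do (again, names are illustrative):

```python
def get_generator_input_sampler():
    # Returns a function that draws an m x n tensor of uniform noise in [0, 1)
    return lambda m, n: torch.rand(m, n)

gi_sampler = get_generator_input_sampler()  # the entropy source I
```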

3.) G: The generator is a standard feedforward graph: two hidden layers, three linear maps. We're using an ELU (exponential linear unit) activation because they're the new black, yo. G is going to take the uniformly distributed data samples from I and somehow mimic the normally distributed samples from R.
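A sketch of such a generator, following the two-hidden-layer, three-linear-map structure described above (the layer sizes are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))  # ELU activations, as noted above
        x = F.elu(self.map2(x))
        return self.map3(x)      # raw output, no final activation
```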

4.) D: The discriminator code is very similar to G's generator code: a feedforward graph with two hidden layers and three linear maps. It's going to get samples from either R or G and will output a single scalar between 0 and 1, interpreted as "fake" vs. "real". This is about as milquetoast as a neural net can get.
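Continuing the sketch above (imports as before), the discriminator differs mainly in the final sigmoid, which squashes its output into (0, 1):

```python
class Discriminator(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.map1 = nn.Linear(input_size, hidden_size)
        self.map2 = nn.Linear(hidden_size, hidden_size)
        self.map3 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = F.elu(self.map1(x))
        x = F.elu(self.map2(x))
        return torch.sigmoid(self.map3(x))  # scalar in (0, 1): fake vs. real
```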

5.) Finally, the training loop alternates between two modes: first training D on real data vs. fake data, with accurate labels (think Police Academy); and then training G to fool D, with inaccurate labels (this is more like those preparation montages from Ocean's Eleven). It's a fight between good and evil, people. A condensed sketch of that loop follows below.
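Here is a condensed sketch of the loop, building on the pieces above; the optimizer, learning rate, batch size, and round count are assumptions, not necessarily the article's exact values:

```python
import torch.optim as optim

G = Generator(input_size=1, hidden_size=50, output_size=1)
D = Discriminator(input_size=1, hidden_size=50, output_size=1)
criterion = nn.BCELoss()
d_optimizer = optim.SGD(D.parameters(), lr=1e-3)
g_optimizer = optim.SGD(G.parameters(), lr=1e-3)
batch_size = 100

for epoch in range(20000):
    # Mode 1: train D on real vs. fake data, with accurate labels
    D.zero_grad()
    real_data = d_sampler(batch_size).t()               # batch_size x 1
    d_real_loss = criterion(D(real_data), torch.ones(batch_size, 1))
    fake_data = G(gi_sampler(batch_size, 1)).detach()   # don't backprop into G
    d_fake_loss = criterion(D(fake_data), torch.zeros(batch_size, 1))
    (d_real_loss + d_fake_loss).backward()
    d_optimizer.step()

    # Mode 2: train G to fool D, labeling its fakes as "real"
    G.zero_grad()
    g_fake_data = G(gi_sampler(batch_size, 1))
    g_loss = criterion(D(g_fake_data), torch.ones(batch_size, 1))
    g_loss.backward()   # gradients flow through D, but...
    g_optimizer.step()  # ...only G's parameters are updated
```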

Even if you haven't seen PyTorch before, you can probably tell what's going on. In the first (discriminator) section, we push both types of data through D and apply a differentiable criterion to D's guesses vs. the actual labels. That pushing is the "forward" step; we then call backward() explicitly in order to calculate gradients, which are then used to update D's parameters in the d_optimizer.step() call. G is used here, but isn't trained here.

Then in the last (generator) section, we do the same thing for G. Note that we also run G's output through D (we're essentially giving the forger a detective to practice on), but we do not optimize or change D at this step; we don't want the detective D to learn the wrong labels. Hence, we only call g_optimizer.step().

And... that's all. There's some other boilerplate code, but the GAN-specific stuff is just those 5 components, nothing else.

After a few thousand rounds of this forbidden dance between D and G, what do we get? The discriminator D gets good very quickly (while G slowly moves up), but once it gets to a certain level of power, G has a worthy adversary and begins to improve. Really improve.

Over 20,000 training rounds, the mean of G's output overshoots 4.0 but then comes back in a fairly stable, correct range. Likewise, the standard deviation initially drifts in the wrong direction but then rises up to the desired 1.25 range, matching R.

OK, so the basic stats match R, eventually. How about the higher moments? Does the shape of the distribution look right? After all, you could certainly have a uniform distribution with a mean of 4.0 and a standard deviation of 1.25, but that wouldn't really match R. Let's look at the final distribution emitted by G, as sketched below.
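One quick way to inspect those moments, reusing G and gi_sampler from the sketches above (an illustrative check, not from the original article):

```python
with torch.no_grad():
    s = G(gi_sampler(10000, 1)).squeeze().numpy()

mean, std = s.mean(), s.std()
skew = ((s - mean) ** 3).mean() / std ** 3        # 0 for a Gaussian
kurt = ((s - mean) ** 4).mean() / std ** 4 - 3.0  # excess kurtosis, 0 for a Gaussian
print(f"mean={mean:.3f}  std={std:.3f}  skew={skew:.3f}  kurtosis={kurt:.3f}")
```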

Not bad. The left tail is a bit longer than the right, but the skew and kurtosis are, shall we say, evocative of the original Gaussian.

G recovers the original distribution R nearly perfectly, and D is left cowering in the corner, mumbling to itself, unable to tell fact from fiction. This is precisely the behavior we want (see Figure 1 in Goodfellow's paper). And all from fewer than 50 lines of code.

Goodfellow would go on to publish many other papers on GANs, including a 2016 gem describing some practical improvements, including the minibatch discrimination method adapted here. And there's a 2-hour tutorial he presented at NIPS 2016. For TensorFlow users, there's a parallel post from Aylien on GANs.

OK. Enough talk. Go look at the code.
