Generative Adversarial Network based on a Variational Autoencoder (VAE-GAN)
https://arxiv.org/abs/1512.09300
Motivation
The previous post, on the autoencoder-based GAN (AE-GAN), mentioned a problem: if the generated results are too realistic, diversity tends to be lost. This article mainly uses a VAE to address that diversity problem in the original AE-GAN.
VAE
For background on the VAE, refer to the previous post on the VAE.
We can think of the VAE decoder as a sampling process, except that the sample z is drawn from a distribution we specify (typically a standard normal prior).
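The sampling view above is usually implemented with the reparameterization trick: the encoder outputs a mean and a log-variance, and z is obtained by scaling and shifting standard normal noise. A minimal numpy sketch (the function name `sample_latent` and the example values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical encoder outputs for one example
mu = np.array([0.5, -0.2, 0.0, 1.0])
log_var = np.zeros(4)   # sigma = 1 in every dimension

z = sample_latent(mu, log_var, rng)
```

Because z is an explicit function of mu and log_var plus independent noise, gradients can flow through the sampling step during training.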
VAEGAN
I could not find a dedicated VAE-GAN figure, so I use the one from the original paper instead. Here z comes from a specified prior distribution, and a new image is generated from this z. This is unlike AE-GAN, where z follows whatever distribution the encoder happens to produce; in VAE-GAN the latent distribution is one we specify. As a result, samples of z can stay closer to the original data. Put another way, we have a known distribution whose samples, after a series of learned transformations, land close to real samples, rather than relying on raw encoded features; this gives larger coverage of the data and better results.
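The two paths described above (reconstruction through the encoder, and pure generation from the prior) can be sketched with toy numpy "networks". This is only a structural sketch under assumed dimensions; the paper uses convolutional networks on images, and every weight matrix and name here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

x_dim, z_dim = 8, 2  # toy sizes, for illustration only

# Random linear layers standing in for Enc, Dec/Gen, and Dis
W_enc_mu = rng.standard_normal((z_dim, x_dim)) * 0.1
W_enc_lv = rng.standard_normal((z_dim, x_dim)) * 0.1
W_dec    = rng.standard_normal((x_dim, z_dim)) * 0.1
w_dis    = rng.standard_normal(x_dim) * 0.1

def encode(x):
    return W_enc_mu @ x, W_enc_lv @ x

def decode(z):
    return np.tanh(W_dec @ z)

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(w_dis @ x)))  # probability "real"

x = rng.standard_normal(x_dim)

# Reconstruction path: x -> Enc -> z -> Dec -> x_tilde
mu, log_var = encode(x)
z = mu + np.exp(0.5 * log_var) * rng.standard_normal(z_dim)
x_tilde = decode(z)

# Generation path: z_p ~ N(0, I) (the specified prior) -> Dec -> x_p
z_p = rng.standard_normal(z_dim)
x_p = decode(z_p)

# The discriminator scores real, reconstructed, and sampled images
scores = [discriminate(v) for v in (x, x_tilde, x_p)]
```

The key point is that the decoder doubles as the GAN generator: the same network serves both the reconstruction path and the prior-sampling path, which is what ties the VAE and the GAN together.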