Reference website: https://github.com/znxlwm/pytorch-generative-model-collections
Development history and related papers of GAN: http://blog.csdn.net/u013369277/article/details/60954170
1, CGAN (Conditional GAN)
2, ACGAN (Auxiliary Classifier GAN)
3, GAN applied to semantic segmentation
4, Semi-supervised GAN
Reference Address: https://blog.csdn.net/shenxiaolu1984/article/details/75736407
z is a 100-dimensional random noise vector. G is a convolutional network whose output is a fake image x_fake of size 512*512*6 (the size is a free design choice). The discriminator D is the segmentation model we ultimately need: the segmentation task has K classes, but D outputs K+1 classes, where the (K+1)-th class is the fake-sample class. D receives three types of input data:
1, labeled samples x_label
2, unlabeled samples x_unlabel
3, fake samples x_fake produced by the generator
The three types of samples correspond to three kinds of errors, so the whole system involves three kinds of errors.
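The key structural point is that D's output is a (K+1)-way softmax, and p(K+1|x) is the probability that the input is a generated sample. A minimal sketch of that idea, using a plain-Python softmax over hypothetical logits (K = 3 and the logit values are illustrative assumptions, not values from the paper):

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical logits from D for one sample: K = 3 real segmentation
# classes plus one extra "fake" class at index K (the (K+1)-th class).
K = 3
logits = [2.0, 0.5, 1.0, -1.0]           # length K + 1
probs = softmax(logits)

p_fake = probs[K]        # p(K+1 | x): probability the sample is generated
p_real = 1.0 - p_fake    # probability mass spread over the K real classes
```

All three losses below are expressed in terms of this single (K+1)-way distribution, which is what lets one network serve as both a segmenter and a real/fake discriminator.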
For a labeled sample from the training set, check whether the estimated label is correct, i.e. compute the probability of it being classified as the correct class:
L_label = -E[ln p(y|x)]
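For a single sample the expectation drops out and this is ordinary cross-entropy on the true class. A sketch with a hypothetical softmax output of D (the probability values and the label y = 1 are illustrative assumptions):

```python
import math

# Hypothetical (K+1)-way softmax output of D for one labeled sample;
# K = 3 real classes, index 3 is the fake class, true label is class 1.
probs = [0.1, 0.7, 0.15, 0.05]
y = 1

# L_label = -E[ln p(y|x)]; per-sample, this is -ln p(y|x).
l_label = -math.log(probs[y])
```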
For unlabeled samples in the training set, check whether they are estimated as "true", i.e. compute the probability of NOT being estimated as the (K+1)-th class:
L_unlabel = -E[ln(1 - p(K+1|x))]
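Per-sample, this penalizes D for putting probability mass on the fake class when the input is a real (if unlabeled) image. A sketch with assumed probability values:

```python
import math

# Hypothetical D output for one unlabeled sample; index K = 3 is the
# fake class, so 1 - probs[K] is the probability the sample is "true".
probs = [0.3, 0.4, 0.2, 0.1]
K = 3

# L_unlabel = -E[ln(1 - p(K+1|x))]; per-sample, -ln(1 - p(K+1|x)).
l_unlabel = -math.log(1.0 - probs[K])
```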
For fake samples produced by the generator, check whether they are estimated as "fake", i.e. compute the probability of being estimated as the (K+1)-th class:
L_fake = -E[ln p(K+1|x)]
Training G: minimize -E[ln(1 - p(K+1|G(z)))], so that the x_fake produced by G is judged by D to belong to a non-(K+1) class.
Training D: minimize the sum of the three errors, L_D = L_label + L_unlabel + L_fake.
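The two objectives above can be sketched as plain functions of D's softmax outputs. This is a minimal single-sample illustration, assuming each of the three sample types contributes one (K+1)-way probability vector (in practice these would be per-pixel averages over minibatches):

```python
import math

K = 3  # number of real segmentation classes; index K is the fake class

def d_loss(p_label, y, p_unlabel, p_fake):
    """Discriminator loss: sum of the three errors, one sample of each type.

    p_label, p_unlabel, p_fake are (K+1)-way softmax outputs of D;
    y is the true class index of the labeled sample.
    """
    l_label = -math.log(p_label[y])            # labeled: classify correctly
    l_unlabel = -math.log(1.0 - p_unlabel[K])  # unlabeled: judged "true"
    l_fake = -math.log(p_fake[K])              # generated: judged "fake"
    return l_label + l_unlabel + l_fake

def g_loss(p_fake):
    """Generator loss: push D to assign x_fake = G(z) to a non-(K+1) class."""
    return -math.log(1.0 - p_fake[K])
```

Note that d_loss and g_loss pull p_fake[K] in opposite directions, which is the usual adversarial tension: D wants the fake-class probability high on generated samples, G wants it low.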