TensorFlow implementation of a GAN (generative adversarial network)

Source: Internet
Author: User
Tags: generator

Recently, while reviewing material I studied in the past, I was reminded that generative adversarial networks are one of the main research directions. Today I came across some code I wrote a long time ago, so I'm taking it out to review the basic ideas behind generative adversarial networks, and to give students who want to study this direction a small reference.

A generative adversarial network is essentially a generator and a discriminator pitted against each other. Here is the code for the discriminator.

def discriminator(images, reuse=None):
    with tf.variable_scope(tf.get_variable_scope(), reuse=reuse) as scope:
        # convolution + activation + pooling
        d_w1 = tf.get_variable('d_w1', [5, 5, 1, 32], initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b1 = tf.get_variable('d_b1', [32], initializer=tf.constant_initializer(0))
        d1 = tf.nn.conv2d(input=images, filter=d_w1, strides=[1, 1, 1, 1], padding='SAME')
        d1 = d1 + d_b1
        d1 = tf.nn.relu(d1)
        d1 = tf.nn.avg_pool(d1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # convolution + activation + pooling
        d_w2 = tf.get_variable('d_w2', [5, 5, 32, 64], initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b2 = tf.get_variable('d_b2', [64], initializer=tf.constant_initializer(0))
        d2 = tf.nn.conv2d(input=d1, filter=d_w2, strides=[1, 1, 1, 1], padding='SAME')
        d2 = d2 + d_b2
        d2 = tf.nn.relu(d2)
        d2 = tf.nn.avg_pool(d2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

        # fully connected + activation
        d_w3 = tf.get_variable('d_w3', [7 * 7 * 64, 1024], initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b3 = tf.get_variable('d_b3', [1024], initializer=tf.constant_initializer(0))
        d3 = tf.reshape(d2, [-1, 7 * 7 * 64])
        d3 = tf.matmul(d3, d_w3)
        d3 = d3 + d_b3
        d3 = tf.nn.relu(d3)

        # fully connected
        d_w4 = tf.get_variable('d_w4', [1024, 1], initializer=tf.truncated_normal_initializer(stddev=0.02))
        d_b4 = tf.get_variable('d_b4', [1], initializer=tf.constant_initializer(0))
        d4 = tf.matmul(d3, d_w4) + d_b4

        # finally, output an unscaled score (logit)
        return d4
The discriminator written above consists of two convolutional layers followed by two fully connected layers. The activation function is ReLU, implemented with tf.nn.relu, and the pooling method is average pooling rather than max pooling, implemented with tf.nn.avg_pool.
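As a quick sanity check, something like the following can be used to confirm the output shape. This is a minimal sketch of my own, not part of the original post, and it assumes MNIST-style 28x28x1 inputs, which is what the 7 * 7 * 64 reshape above implies:

import tensorflow as tf

# Hypothetical placeholder (assumption) for a batch of 28x28 grayscale images.
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
logits = discriminator(images)
print(logits.get_shape())  # expect (?, 1): one unscaled score per image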

Also note the difference between the tf.truncated_normal_initializer function and the tf.random_normal function: the truncated initializer discards and re-draws any sample that falls more than two standard deviations from the mean, so the initial weights contain no extreme values, while the plain normal distribution is unbounded.
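A small illustration of that difference (my own sketch, not part of the original code): both calls below draw from a normal distribution with the same standard deviation, but only the truncated version limits the range of the samples.

# Truncated normal: samples outside (-2*stddev, 2*stddev) are discarded and re-drawn.
w_trunc = tf.get_variable('w_trunc', [5, 5, 1, 32],
                          initializer=tf.truncated_normal_initializer(stddev=0.02))

# Plain normal: the tails are unbounded, so occasional large weights are possible.
w_plain = tf.Variable(tf.random_normal([5, 5, 1, 32], stddev=0.02))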

def generator(z, batch_size, z_dim, reuse=False):
    """Receive a feature vector z and generate an image from it."""
    with tf.variable_scope(tf.get_variable_scope(), reuse=reuse):
        # fully connected + batch normalization + activation
        # z_dim -> 3136 -> 56*56*1
        g_w1 = tf.get_variable('g_w1', [z_dim, 3136], dtype=tf.float32,
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        g_b1 = tf.get_variable('g_b1', [3136], initializer=tf.truncated_normal_initializer(stddev=0.02))
        g1 = tf.matmul(z, g_w1) + g_b1
        g1 = tf.reshape(g1, [-1, 56, 56, 1])
        g1 = tf.contrib.layers.batch_norm(g1, epsilon=1e-5, scope='bn1')
        g1 = tf.nn.relu(g1)

        # convolution + batch normalization + activation
        # z_dim // 2 keeps the channel count an integer
        g_w2 = tf.get_variable('g_w2', [3, 3, 1, z_dim // 2], dtype=tf.float32,
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        g_b2 = tf.get_variable('g_b2', [z_dim // 2], initializer=tf.truncated_normal_initializer(stddev=0.02))
        g2 = tf.nn.conv2d(g1, g_w2, strides=[1, 2, 2, 1], padding='SAME')
        g2 = g2 + g_b2
        g2 = tf.contrib.layers.batch_norm(g2, epsilon=1e-5, scope='bn2')
        g2 = tf.nn.relu(g2)
        g2 = tf.image.resize_images(g2, [56, 56])

        # convolution + batch normalization + activation
        g_w3 = tf.get_variable('g_w3', [3, 3, z_dim // 2, z_dim // 4], dtype=tf.float32,
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        g_b3 = tf.get_variable('g_b3', [z_dim // 4], initializer=tf.truncated_normal_initializer(stddev=0.02))
        g3 = tf.nn.conv2d(g2, g_w3, strides=[1, 2, 2, 1], padding='SAME')
        g3 = g3 + g_b3
        g3 = tf.contrib.layers.batch_norm(g3, epsilon=1e-5, scope='bn3')
        g3 = tf.nn.relu(g3)
        g3 = tf.image.resize_images(g3, [56, 56])

        # convolution + activation
        g_w4 = tf.get_variable('g_w4', [1, 1, z_dim // 4, 1], dtype=tf.float32,
                               initializer=tf.truncated_normal_initializer(stddev=0.02))
        g_b4 = tf.get_variable('g_b4', [1], initializer=tf.truncated_normal_initializer(stddev=0.02))
        g4 = tf.nn.conv2d(g3, g_w4, strides=[1, 2, 2, 1], padding='SAME')
        g4 = g4 + g_b4
        g4 = tf.sigmoid(g4)

        # output dimensions of g4: batch_size x 28 x 28 x 1
        return g4
Next is the generator. The activation function of the intermediate layers is ReLU, while the final output layer uses the sigmoid function so that the generated pixel values fall in (0, 1).
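The post stops at the two network definitions. To make the "against each other" idea concrete, here is a minimal sketch, in the usual TF1 style, of how the two networks could be wired together and trained adversarially. This is my own addition, not the author's training script; the placeholder names, learning rates, and the 'd_'/'g_' variable-filtering convention are assumptions.

z_dim = 100
batch_size = 50

# Hypothetical placeholders (assumptions) for the noise vector and real MNIST images.
z_placeholder = tf.placeholder(tf.float32, [None, z_dim], name='z_placeholder')
x_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='x_placeholder')

Gz = generator(z_placeholder, batch_size, z_dim)   # fake images from noise
Dx = discriminator(x_placeholder)                  # logits for real images
Dg = discriminator(Gz, reuse=True)                 # logits for fake images, sharing weights

# The discriminator wants real images scored as 1 and generated images as 0.
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dx, labels=tf.ones_like(Dx)))
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.zeros_like(Dg)))
d_loss = d_loss_real + d_loss_fake

# The generator wants its images scored as 1, i.e. it tries to fool the discriminator.
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(logits=Dg, labels=tf.ones_like(Dg)))

# Each optimizer only updates its own network's variables.
tvars = tf.trainable_variables()
d_vars = [v for v in tvars if 'd_' in v.name]
g_vars = [v for v in tvars if 'g_' in v.name]

d_trainer = tf.train.AdamOptimizer(0.0003).minimize(d_loss, var_list=d_vars)
g_trainer = tf.train.AdamOptimizer(0.0001).minimize(g_loss, var_list=g_vars)

Training then alternates between running d_trainer and g_trainer on fresh batches, which is exactly the adversarial game described at the beginning.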
