convolution layer:
Ws = tf.get_variable('W', [5, 5, 3, 16], initializer=tf.truncated_normal_initializer(stddev=0.1))
bs = tf.get_variable('b', [16], initializer=tf.constant_initializer(0.1))
conv = tf.nn.conv2d(input, Ws, strides=[1, 1, 1, 1], padding='SAME')
b = tf.nn.bias_add(conv, bs)
now_conv = tf.nn.relu(b)
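To see what these three operations actually compute, here is a minimal NumPy sketch of my own (not TensorFlow's implementation): a stride-1 'VALID' convolution, followed by a bias add and ReLU.

```python
import numpy as np

def conv2d_valid(x, w, b):
    """Naive convolution: x is (H, W, C_in), w is (kH, kW, C_in, C_out),
    b is (C_out,). Stride 1, no padding ('VALID'), so the output is
    (H - kH + 1, W - kW + 1, C_out)."""
    kh, kw, cin, cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1, cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]              # (kH, kW, C_in) window
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return np.maximum(out, 0.0)                           # ReLU

x = np.random.randn(8, 8, 3)            # toy 8x8 RGB input
w = np.random.randn(5, 5, 3, 16) * 0.1  # 5x5 kernels, input depth 3, 16 kernels
b = np.full(16, 0.1)
y = conv2d_valid(x, w, b)
print(y.shape)  # (4, 4, 16)
```

Each output channel is one kernel's weighted sum over a 5x5x3 window, which is why the weight shape matches [5, 5, 3, 16] above.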
The tf.get_variable function is called here with three arguments: the first is the variable name, the second is the variable shape (for a convolution weight, the first two dimensions are the kernel size, the third is the depth of the input, and the fourth is the depth of the convolution kernel, i.e. the number of kernels), and the third is the variable's initializer. The main initializers are:
tf.constant_initializer: initializes to a constant value
tf.random_normal_initializer: normal distribution
tf.truncated_normal_initializer: truncated normal distribution
tf.random_uniform_initializer: uniform distribution
tf.zeros_initializer: all zeros
tf.ones_initializer: all ones
tf.uniform_unit_scaling_initializer: uniform random values scaled so as not to change the output's order of magnitude
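As a sketch of what the truncated normal initializer does, here is a NumPy version of my own (not TensorFlow's code): sample a normal distribution and resample any value that falls more than two standard deviations from the mean.

```python
import numpy as np

def truncated_normal(shape, stddev=0.1, mean=0.0, seed=None):
    """Normal samples with values beyond 2*stddev from the mean resampled,
    mirroring the rule tf.truncated_normal_initializer documents."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, stddev, size=shape)
    bad = np.abs(x - mean) > 2 * stddev
    while bad.any():
        x[bad] = rng.normal(mean, stddev, size=int(bad.sum()))
        bad = np.abs(x - mean) > 2 * stddev
    return x

w = truncated_normal((5, 5, 3, 16), stddev=0.1, seed=0)
print(np.abs(w).max() <= 0.2)  # True: nothing lies beyond two stddevs
```

Truncation keeps extreme initial weights out of the network, which is why it is a common choice for convolution kernels.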
The tf.nn.conv2d function takes four main arguments: the first is the input tensor, the second is the convolution layer weights, the third is the stride in each dimension (in a CNN, the first and fourth strides are fixed to 1), and the fourth is the padding ('SAME' means zero-padding so the spatial size is preserved, 'VALID' means no padding).
Depth of the layer: for example, an input image is 28*28*3, where 3 represents the depth, that is, the (R, G, B) channels.
The depth of the convolution kernel: my understanding is that it is the number of convolution kernels (I am not sure this is right).
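The padding mode determines the spatial output size. A quick sketch of the standard formulas (my own arithmetic, not a TensorFlow call): 'SAME' pads with zeros so only the stride shrinks the map, while 'VALID' requires the kernel to fit entirely inside the input.

```python
import math

def conv_output_size(in_size, kernel, stride, padding):
    """Spatial output size of conv2d (or max_pool) for the two padding modes."""
    if padding == 'SAME':    # zero-padded: ceil(in / stride)
        return math.ceil(in_size / stride)
    if padding == 'VALID':   # no padding: ceil((in - kernel + 1) / stride)
        return math.ceil((in_size - kernel + 1) / stride)
    raise ValueError(padding)

# 28x28 input with a 5x5 kernel and stride 1:
print(conv_output_size(28, 5, 1, 'SAME'))   # 28
print(conv_output_size(28, 5, 1, 'VALID'))  # 24
```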
Of course, there is a simpler way to implement a convolution layer; the code is as follows:
now_conv = slim.conv2d(input, 16, [3, 3])
The slim.conv2d function has three required arguments: the first is the input tensor, the second is the depth of the convolution kernel (the number of output channels), and the third is the kernel size.
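One way to check the meaning of "depth of the convolution kernel" is to count parameters: each of the 16 kernels spans 3*3*input_depth weights plus one bias. A quick check in plain Python (my own arithmetic, not a slim API call):

```python
def conv_param_count(kh, kw, in_depth, out_depth):
    """Weights (kh * kw * in_depth per kernel) plus one bias per kernel."""
    return kh * kw * in_depth * out_depth + out_depth

# slim.conv2d(input, 16, [3, 3]) on a depth-3 input:
print(conv_param_count(3, 3, 3, 16))  # 448 = 3*3*3*16 weights + 16 biases
```

The count grows linearly with the number of kernels, consistent with reading the fourth weight dimension as "how many kernels".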
Pool layer (sample layer):
p = tf.nn.max_pool(now_conv, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')
TensorFlow provides two pooling functions, tf.nn.max_pool and tf.nn.avg_pool; the former is max pooling and the latter is average (mean) pooling.
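The difference between the two is just the reduction applied to each window. A minimal NumPy sketch of my own (non-overlapping 2x2 windows for simplicity, unlike the 3x3/stride-2 pool above):

```python
import numpy as np

def pool2d(x, ksize=2, stride=2, mode='max'):
    """Pooling over a 2-D map; here stride == ksize, so windows don't overlap."""
    H, W = x.shape
    oh, ow = H // stride, W // stride
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = x[i * stride:i * stride + ksize,
                       j * stride:j * stride + ksize]
            out[i, j] = window.max() if mode == 'max' else window.mean()
    return out

x = np.array([[1., 2., 5., 6.],
              [3., 4., 7., 8.],
              [0., 0., 1., 1.],
              [0., 4., 1., 1.]])
print(pool2d(x, mode='max'))  # [[4. 8.] [4. 1.]]
print(pool2d(x, mode='avg'))  # [[2.5 6.5] [1.  1. ]]
```

Max pooling keeps the strongest activation in each window, while average pooling smooths over it; both halve the spatial size here.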