1. Dropout principle in brief:
Dropout randomly discards a subset of neurons on each training pass. That is, each neuron's activation is kept with probability p and set to zero with probability 1 - p; a dropped neuron temporarily stops working, so during that pass its weights are not updated and it does not participate in the network's computation. Its weights must still be preserved (they are only temporarily frozen), because the neuron may be active again when the next sample is fed in.
At test and validation time, however, every neuron participates in the computation, but its output is multiplied by the keep probability p.
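To make the principle concrete, here is a minimal NumPy sketch of this classic form of dropout (an illustration only; note that tf.nn.dropout, described in section 2 below, instead scales the surviving elements by 1/keep_prob at training time, so it needs no test-time scaling):

import numpy as np

def dropout_train(x, p=0.5):
    # Training pass: keep each activation with probability p,
    # zero it out otherwise (the dropped neuron "stops working").
    mask = np.random.rand(*x.shape) < p
    return x * mask

def dropout_test(x, p=0.5):
    # Test/validation pass: every neuron participates,
    # but its output is multiplied by the keep probability p.
    return x * p

x = np.ones(6)
print(dropout_train(x))   # e.g. [1. 0. 1. 1. 0. 1.]  (zero pattern is random)
print(dropout_test(x))    # [0.5 0.5 0.5 0.5 0.5 0.5]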
2. tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None) function description
Of these parameters, the first two are the most commonly used:
The first parameter x: the input tensor.
The second parameter keep_prob: sets the probability that each neuron is kept. keep_prob is typically created as a placeholder at graph-construction time, keep_prob = tf.placeholder(tf.float32), and TensorFlow is given its concrete value at run time through the feed dict, e.g. keep_prob: 0.5.
The third parameter noise_shape: a 1-D int32 tensor describing the shape of the randomly generated keep/drop flags.
seed: an integer, the random number seed.
name: an optional name for the operation (rarely needed).
Summary: the dropout() function sets some elements of the tensor to 0; the remaining non-zero elements are scaled to 1/keep_prob times their original value.
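A minimal runnable sketch of this behavior, assuming the TensorFlow 1.x API (tf.placeholder and tf.Session were removed from the default TF 2.x namespace):

import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.ones([2, 5])                     # input tensor of ones
keep_prob = tf.placeholder(tf.float32)  # keep probability, fed at run time
y = tf.nn.dropout(x, keep_prob)         # zero some elements, scale the rest

with tf.Session() as sess:
    # With keep_prob = 0.5, each element is either 0 or 1/0.5 = 2.0.
    print(sess.run(y, feed_dict={keep_prob: 0.5}))
    # Possible output (the zero pattern is random):
    # [[2. 0. 2. 2. 0.]
    #  [0. 2. 0. 2. 2.]]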