The "Xavier" initialization method is a very effective method of neural network initialization based on a 2010-year paper "Understanding the difficulty of training deep feedforward neural Networks ", unfortunately, until nearly two years, this method has gradually been more people's application and recognition.
For information to flow well through the network, the variance of each layer's output should be kept as equal as possible.
With this goal in mind, let us derive the condition that each layer's weights must satisfy.
The paper assumes an activation function that is linear around zero with derivative 1 at the origin, i.e. f'(0) = 1, so that f(x) ≈ x in the operating region.
Let us start by analyzing a single convolution layer, whose output can be written as

y = w_1 x_1 + w_2 x_2 + ... + w_{n_i} x_{n_i} + b,

where n_i denotes the number of inputs.
From basic probability theory, the variance of a product of two independent random variables satisfies:

Var(w_i x_i) = E[w_i]^2 Var(x_i) + E[x_i]^2 Var(w_i) + Var(w_i) Var(x_i).
In particular, if we assume that both the inputs and the weights have zero mean (which is easier to satisfy nowadays thanks to Batch Normalization), the formula simplifies to

Var(w_i x_i) = Var(w_i) Var(x_i).
Further assuming that the x_i and the w_i are each independent and identically distributed, we obtain

Var(y) = n_i Var(w_i) Var(x_i).
Thus, to keep the output variance equal to the input variance, the weights should satisfy

Var(w_i) = 1 / n_i.
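As a quick sanity check of this condition (this sketch is not from the paper or from Caffe; the names n_in and trials are made up for illustration), the following program simulates a single linear unit whose weights are drawn with variance 1/n_in and estimates the output variance, which should come out close to the unit input variance:

#include <cmath>
#include <cstdio>
#include <random>

int main() {
  const int n_in = 256;       // number of inputs feeding one linear unit
  const int trials = 100000;  // number of simulated output samples
  std::mt19937 gen(0);
  std::normal_distribution<double> x_dist(0.0, 1.0);                    // Var(x) = 1
  std::normal_distribution<double> w_dist(0.0, std::sqrt(1.0 / n_in));  // Var(w) = 1/n_in

  double sum = 0.0, sum_sq = 0.0;
  for (int t = 0; t < trials; ++t) {
    double y = 0.0;
    for (int i = 0; i < n_in; ++i) {
      y += w_dist(gen) * x_dist(gen);  // y = sum_i w_i * x_i
    }
    sum += y;
    sum_sq += y * y;
  }
  const double mean = sum / trials;
  const double var = sum_sq / trials - mean * mean;
  // Expect Var(y) = n_in * Var(w) * Var(x) = 1, matching the input variance.
  std::printf("estimated Var(y) = %.3f (target: 1.0)\n", var);
  return 0;
}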
For a multilayer network, applying the same reasoning layer by layer, the output variance of layer L can be expressed as a cumulative product:

Var(y_L) = Var(x) * Π_{l=1..L} n_l Var(W_l),

where n_l is the number of inputs to layer l.
The back-propagated gradient has a similar form, except that the product runs over the number of outputs (fan-out) of each layer:

Var(∂Cost/∂x_l) = Var(∂Cost/∂y_L) * Π_{l'=l..L} n'_{l'} Var(W_{l'}),

where n'_{l'} denotes the number of outputs of layer l'.
In conclusion, to keep the variance constant through both the forward and the backward pass, every layer should satisfy

n_i Var(W_i) = 1  and  n'_i Var(W_i) = 1,

where n_i and n'_i are the numbers of inputs and outputs of layer i.
However, in practice the numbers of inputs and outputs are usually not equal, so as a compromise the weight variance is chosen as:
———————————————————————————————————————
Var(W_i) = 2 / (n_i + n'_i)
———————————————————————————————————————
Anyone who has studied probability knows that the variance of the uniform distribution on [a, b] is

Var = (b - a)^2 / 12.

So, for a distribution that is uniform on a symmetric interval [-a, a], the variance is (2a)^2 / 12 = a^2 / 3. Setting a^2 / 3 = 2 / (n_i + n'_i) gives a = sqrt(6 / (n_i + n'_i)).
Xavier initialization is therefore implemented with the following uniform distribution:
——————————————————————————————————————————
W ~ U[ -sqrt(6 / (n_i + n'_i)), sqrt(6 / (n_i + n'_i)) ]
———————————————————————————————————————————
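Before looking at Caffe's version, here is a minimal stand-alone sketch of that formula (the function name xavier_fill, its signature, and the example layer sizes are hypothetical, not taken from any library):

#include <cmath>
#include <random>
#include <vector>

// Minimal sketch: fill `weights` with samples from U[-a, a],
// where a = sqrt(6 / (fan_in + fan_out)).
void xavier_fill(std::vector<float>& weights, int fan_in, int fan_out,
                 unsigned seed = 0) {
  const float a = std::sqrt(6.0f / static_cast<float>(fan_in + fan_out));
  std::mt19937 gen(seed);
  std::uniform_real_distribution<float> dist(-a, a);
  for (float& w : weights) {
    w = dist(gen);
  }
}

int main() {
  // e.g. a fully connected layer with 400 inputs and 200 outputs:
  // a = sqrt(6 / 600) = 0.1, so weights are drawn from U[-0.1, 0.1].
  std::vector<float> weights(400 * 200);
  xavier_fill(weights, 400, 200);
  return 0;
}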
Let's take a look at how this is implemented in Caffe; the code is in the include/caffe/filler.hpp file.
template <typename Dtype>
class XavierFiller : public Filler<Dtype> {
 public:
  explicit XavierFiller(const FillerParameter& param)
      : Filler<Dtype>(param) {}
  virtual void Fill(Blob<Dtype>* blob) {
    CHECK(blob->count());
    // fan_in: inputs per output unit; fan_out: outputs per input unit.
    int fan_in = blob->count() / blob->num();
    int fan_out = blob->count() / blob->channels();
    Dtype n = fan_in;  // default to fan_in
    if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_AVERAGE) {
      n = (fan_in + fan_out) / Dtype(2);
    } else if (this->filler_param_.variance_norm() ==
        FillerParameter_VarianceNorm_FAN_OUT) {
      n = fan_out;
    }
    // A uniform distribution on [-scale, scale] has variance scale^2 / 3 = 1 / n.
    Dtype scale = sqrt(Dtype(3) / n);
    caffe_rng_uniform<Dtype>(blob->count(), -scale, scale,
                             blob->mutable_cpu_data());
    CHECK_EQ(this->filler_param_.sparse(), -1)
        << "Sparsity not supported by this Filler.";
  }
};
As can be seen above, Caffe's Xavier implementation offers three choices for the normalization term:
(1) By default (FAN_IN), the variance considers only the number of inputs: Var(W) = 1 / fan_in, i.e. W ~ U[-sqrt(3 / fan_in), sqrt(3 / fan_in)].
(2) FillerParameter_VarianceNorm_FAN_OUT: the variance considers only the number of outputs: Var(W) = 1 / fan_out, i.e. W ~ U[-sqrt(3 / fan_out), sqrt(3 / fan_out)].
(3) FillerParameter_VarianceNorm_AVERAGE: the variance considers both the number of inputs and the number of outputs: Var(W) = 2 / (fan_in + fan_out), i.e. W ~ U[-sqrt(6 / (fan_in + fan_out)), sqrt(6 / (fan_in + fan_out))], which is exactly the formula derived above.
My guess is that only the number of inputs is considered by default because the forward flow of information is regarded as more important.
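To make the three options concrete, here is a small stand-alone example (not Caffe code; the 64x32x3x3 blob shape is just an arbitrary illustration) that redoes the same fan_in/fan_out bookkeeping and prints the resulting uniform bound sqrt(3/n) for each variance_norm setting:

#include <cmath>
#include <cstdio>

int main() {
  // Example conv weight blob: 64 output channels, 32 input channels, 3x3 kernel.
  const int num = 64, channels = 32, kh = 3, kw = 3;
  const int count = num * channels * kh * kw;
  const int fan_in = count / num;        // 32 * 3 * 3 = 288
  const int fan_out = count / channels;  // 64 * 3 * 3 = 576

  const double scale_fan_in = std::sqrt(3.0 / fan_in);                      // default
  const double scale_fan_out = std::sqrt(3.0 / fan_out);                    // FAN_OUT
  const double scale_average = std::sqrt(3.0 / ((fan_in + fan_out) / 2.0)); // AVERAGE

  std::printf("fan_in=%d fan_out=%d\n", fan_in, fan_out);
  std::printf("scale: FAN_IN=%.4f FAN_OUT=%.4f AVERAGE=%.4f\n",
              scale_fan_in, scale_fan_out, scale_average);
  return 0;
}

Note that the AVERAGE bound sqrt(3 / ((fan_in + fan_out) / 2)) is the same as sqrt(6 / (fan_in + fan_out)) from the derivation above.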
-- Xavier initialization method of deep learning