The purpose of normalization is to bring all samples onto a common statistical scale. Normalizing to [0, 1] treats the data as a probability-like distribution, while normalizing to [-1, +1] treats it as a coordinate distribution; in both cases the underlying idea is the same: a uniform unit of measurement. A neural network trains and predicts on statistical differences between samples, and the sigmoid activation produces values between 0 and 1, so the output nodes of the network also fall in that range; the sample targets therefore usually need to be normalized to match.

There are further reasons to normalize the inputs. When all input signals of all samples are positive, the weights feeding the first hidden layer can only increase together or decrease together during an update, which slows learning. Data sets also often contain singular (outlier) samples, which lengthen training time and can even prevent the network from converging. To avoid these problems, simplify data processing, and speed up learning, the input signals should be normalized so that the mean of all samples is close to 0 and the variance is small.
In MATLAB, there are three common ways to perform normalization:
First, program the transformations directly in the MATLAB language. The commonly used conversions are the following:
1. Linear function conversion, the expression is as follows:
y = (x - MinValue) / (MaxValue - MinValue)                        (maps x into [0, 1])
y = 0.1 + (x - MinValue) / (MaxValue - MinValue) * (0.9 - 0.1)    (maps x into [0.1, 0.9])
Description: x and y are the values before and after the conversion, and MaxValue and MinValue are the maximum and minimum values of the sample, respectively.
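As an illustration, here is a numpy sketch of both linear conversions (the sample vector is hypothetical):

```python
import numpy as np

# Hypothetical sample vector used only for illustration.
x = np.array([2.0, 5.0, 8.0, 11.0])

min_value, max_value = x.min(), x.max()

# Map into [0, 1]: y = (x - min) / (max - min)
y01 = (x - min_value) / (max_value - min_value)

# Map into [0.1, 0.9]: y = 0.1 + (x - min) / (max - min) * (0.9 - 0.1)
y19 = 0.1 + (x - min_value) / (max_value - min_value) * (0.9 - 0.1)
```

The second form is often preferred with sigmoid outputs, since it keeps targets away from the saturated ends of the activation.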
2. Logarithmic function conversion, the expression is as follows:
y = log10(x)
Description: a base-10 logarithmic conversion; it requires x > 0 and compresses data with a large dynamic range.
3. Arctangent function conversion, the expression is as follows:
y = atan(x) * 2 / pi
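Both nonlinear conversions are one-liners in numpy; a small sketch with hypothetical positive samples:

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])  # hypothetical positive samples

# Logarithmic conversion: y = log10(x); compresses a wide dynamic range.
y_log = np.log10(x)

# Arctangent conversion: y = atan(x) * 2 / pi; maps (-inf, inf) into (-1, 1),
# and positive inputs into (0, 1).
y_atan = np.arctan(x) * 2 / np.pi
```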
Second, the toolbox functions premnmx, tramnmx, postmnmx, and mapminmax
The premnmx function normalizes the input or target data of the network; the normalized data are distributed within the interval [-1, 1].
The syntax of premnmx is: [pn, minp, maxp, tn, mint, maxt] = premnmx(p, t), where p and t are the raw input and target data, respectively.
If the network was trained on normalized sample data, then any new data fed to the network later must receive the same preprocessing as the training samples. This is done with the tramnmx function:
The syntax of tramnmx is: [pn] = tramnmx(p, minp, maxp)
where p and pn are the input data before and after the transformation, and minp and maxp are the minimum and maximum values previously found by premnmx.
The output of the network must be transformed back to restore the original units; the function for this is postmnmx.
The syntax of postmnmx is: [p] = postmnmx(pn, minp, maxp)
where pn and p are the data before and after the inverse transformation, and minp and maxp are again the values found by premnmx.
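The arithmetic behind these three functions is plain min-max scaling into [-1, 1]. A numpy sketch of the forward, apply-to-new-data, and inverse steps (sample values are hypothetical):

```python
import numpy as np

p = np.array([0.0, 5.0, 10.0])       # hypothetical training inputs
minp, maxp = p.min(), p.max()

# premnmx-style forward transform: scale p into [-1, 1].
pn = 2 * (p - minp) / (maxp - minp) - 1

# tramnmx-style transform: apply the SAME minp/maxp to new data.
p_new = np.array([2.5, 7.5])
pn_new = 2 * (p_new - minp) / (maxp - minp) - 1

# postmnmx-style inverse transform: recover the original units.
p_back = (pn + 1) * (maxp - minp) / 2 + minp
```

Reusing the training minp/maxp for new data is the whole point of tramnmx: recomputing the range on new data would put it on a different scale than the one the network was trained on.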
A newer function is mapminmax, which maps each row of a matrix into [-1, 1].
The syntax of mapminmax is: [y1, ps] = mapminmax(x1)
where x1 is the matrix to be normalized, y1 is the result, and ps is a settings structure that records the mapping.
To normalize a second data set with the same mapping, use:
y2 = mapminmax('apply', x2, ps)
To restore normalized data to its original units, use the following command: x1_again = mapminmax('reverse', y1, ps)
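The same fit/apply/reverse pattern can be sketched in Python. The helper names below are hypothetical, and a plain dict stands in for mapminmax's settings structure ps:

```python
import numpy as np

def fit_minmax(x, ymin=-1.0, ymax=1.0):
    """Map each row of x into [ymin, ymax]; return result and settings."""
    ps = {"xmin": x.min(axis=1, keepdims=True),
          "xmax": x.max(axis=1, keepdims=True),
          "ymin": ymin, "ymax": ymax}
    return apply_minmax(x, ps), ps

def apply_minmax(x, ps):
    """Analogue of mapminmax('apply', x, ps): reuse stored row ranges."""
    scale = (ps["ymax"] - ps["ymin"]) / (ps["xmax"] - ps["xmin"])
    return ps["ymin"] + (x - ps["xmin"]) * scale

def reverse_minmax(y, ps):
    """Analogue of mapminmax('reverse', y, ps): undo the mapping."""
    scale = (ps["xmax"] - ps["xmin"]) / (ps["ymax"] - ps["ymin"])
    return ps["xmin"] + (y - ps["ymin"]) * scale

x1 = np.array([[1.0, 2.0, 3.0],
               [10.0, 20.0, 30.0]])   # hypothetical data, one row per feature
y1, ps = fit_minmax(x1)
x1_again = reverse_minmax(y1, ps)
```

Carrying the settings object through fit, apply, and reverse is what lets the same mapping be replayed on test data and undone on network outputs.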
Third, prestd, poststd, trastd
The prestd function normalizes data to zero mean and unit variance. Its syntax is: [pn, meanp, stdp, tn, meant, stdt] = prestd(p, t), where meanp and stdp are the mean and standard deviation of p, and meant and stdt are the mean and standard deviation of t. As with the min-max family, trastd applies the stored mean and standard deviation to new data, and poststd converts the network output back to the original units.
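The standardization and its inverse can be sketched in numpy. Note that MATLAB's std uses the N-1 (sample) convention, matched here with ddof=1; the sample values are hypothetical:

```python
import numpy as np

p = np.array([2.0, 4.0, 6.0, 8.0])   # hypothetical raw samples

# prestd-style transform: zero mean, unit variance (sample std, N-1).
meanp = p.mean()
stdp = p.std(ddof=1)
pn = (p - meanp) / stdp

# poststd-style inverse: recover the original data.
p_back = pn * stdp + meanp
```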