Recently, while reading the code released with Yang's sparse representation paper, I noticed that many of the operations perform column normalization on a matrix. Here I want to talk about how column normalization is implemented and what its benefits are.
Column normalization of a matrix divides every entry of each column by the square root of the sum of the squares of all the elements in that column (the column's L2 norm). As a result, the sum of the squares of the elements in every column of the matrix is 1.
For example, normalizing the column vector [1, 2, 3]' gives [0.2673, 0.5345, 0.8018]', whose squared entries sum to 1.
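A minimal NumPy sketch of this column normalization (the example matrix here is my own, chosen so the first column matches the [1, 2, 3]' example above):

```python
import numpy as np

# Example matrix: each column will be scaled to have unit sum of squares.
X = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])

# L2 norm of each column: square root of the sum of squared entries.
col_norms = np.sqrt((X ** 2).sum(axis=0))

# Divide each column by its norm; afterwards (X_n ** 2).sum(axis=0) is all ones.
X_n = X / col_norms

print(X_n[:, 0])  # [0.2673 0.5345 0.8018], matching the [1, 2, 3]' example
```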
In Yang's code, column vectors whose sum of squares is zero or very small are eliminated and excluded from training. In the final training-sample matrix, each column is one training image patch, and the number of rows corresponds to the size of the patch.
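I don't have Yang's exact MATLAB in front of me, but the filtering step amounts to something like the sketch below; the function name and the threshold `eps_sq` are my own illustrative choices, not values from the paper:

```python
import numpy as np

def drop_flat_patches(X, eps_sq=1e-6):
    """Keep only columns whose sum of squares exceeds eps_sq.

    X: matrix whose columns are vectorized image patches
       (number of rows = patch size, e.g. 25 for 5x5 patches).
    """
    sq_sums = (X ** 2).sum(axis=0)
    keep = sq_sums > eps_sq  # discard all-zero or nearly flat patches
    return X[:, keep]
```

Filtering these columns out before normalization also avoids dividing by a zero (or nearly zero) norm.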
I didn't understand why all this normalization was necessary until I thought of symmetric matrices (forgive my shaky math; I stumbled my way toward understanding this).
Suppose the sum of squares of each column of X is 1, and that X is a 25*1000 matrix, so X' is a 1000*25 matrix. Following Yang et al.'s method, compute A = X'*X, a 1000*1000 matrix. After the normalization above, every column of X has a sum of squares equal to 1, so every diagonal element of A equals 1, and A is symmetric about its diagonal. A is therefore a real symmetric matrix whose diagonal entries are all 1, and real symmetric matrices have a number of well-known properties (for instance, their eigenvalues are real and their eigenvectors can be chosen to form an orthonormal basis).
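A short sketch checking this claim numerically; the 25*1000 shape follows the example above, and the random data is just a stand-in for real training patches:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((25, 1000))

# Column-normalize so each column has unit sum of squares.
X = X / np.sqrt((X ** 2).sum(axis=0))

# A = X' * X is a 1000x1000 matrix.
A = X.T @ X

print(np.allclose(np.diag(A), 1.0))  # True: diagonal entries are all 1
print(np.allclose(A, A.T))           # True: A is symmetric
```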
This lays the groundwork for subsequent processing.