PCA
PCA (principal component analysis) is a common method of dimensionality reduction.
PCA recombines the original indicators, many of which are correlated with one another, into a new set of uncorrelated composite indicators that replace the originals: the n-dimensional features are mapped onto k new, mutually orthogonal features.
There are two common implementations of PCA: eigenvalue decomposition and SVD (singular value decomposition).
Principle
The goal is to find a set of orthogonal axes in the original space. First find the axis F1 (a linear combination of the data features) along which the variance of the data samples is largest; F1 carries the first principal component. Next find a second axis that is orthogonal to the first (so it contains none of the information already captured by the first principal component) and along which the sample variance is largest, and so on, until n axes have been found. Because the axes found later carry little variance, the space can be approximated by keeping only the first k axes, which achieves the dimensionality reduction.
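The variance-maximization idea above can be illustrated with a minimal NumPy sketch (the synthetic data and variable names are illustrative, not from the source): the eigenvector of the covariance matrix with the largest eigenvalue is the axis F1, and projecting onto it yields more variance than projecting onto the orthogonal axis.

```python
import numpy as np

# Synthetic correlated 2-D data (illustrative assumption, not from the text).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
X = X - X.mean(axis=0)                  # center the data first

# Eigen-decomposition of the covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order

f1 = eigvecs[:, -1]  # axis with the largest variance (F1)
f2 = eigvecs[:, 0]   # orthogonal axis carrying the remaining variance

# Variance of the samples projected onto each axis.
var_f1 = np.var(X @ f1)
var_f2 = np.var(X @ f2)
print(var_f1 > var_f2)       # F1 captures the most variance
print(abs(f1 @ f2) < 1e-10)  # the two axes are orthogonal
```

Dropping f2 and keeping only the projection `X @ f1` is exactly the "keep the first k axes" approximation described above, here with k = 1.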
Implementation method
1. Eigenvalue decomposition (based on courseware by teacher Li Ge of Peking University)
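A minimal sketch of both implementations mentioned above, assuming NumPy and illustrative function names (`pca_eig`, `pca_svd` are not from the source): the eigenvalue-decomposition route diagonalizes the covariance matrix, while the SVD route factorizes the centered data matrix directly; the two recover the same principal axes up to sign.

```python
import numpy as np

def pca_eig(X, k):
    """PCA via eigen-decomposition of the covariance matrix (sketch)."""
    Xc = X - X.mean(axis=0)                 # 1. center the data
    cov = np.cov(Xc, rowvar=False)          # 2. covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # 3. eigen-decomposition
    order = np.argsort(eigvals)[::-1]       # 4. sort axes by decreasing variance
    components = eigvecs[:, order[:k]]      # 5. keep the top-k axes
    return Xc @ components, components      # 6. project onto them

def pca_svd(X, k):
    """The same projection via SVD of the centered data matrix (sketch)."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k].T  # top right singular vectors = principal axes
    return Xc @ components, components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z_eig, W_eig = pca_eig(X, 2)
Z_svd, W_svd = pca_svd(X, 2)
# The two methods agree up to the sign of each axis.
print(np.allclose(np.abs(W_eig.T @ W_svd), np.eye(2), atol=1e-6))
```

In practice the SVD route is usually preferred because it avoids forming the covariance matrix explicitly and is numerically more stable.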
Resources
http://www.cnblogs.com/LeftNotEasy/archive/2011/01/19/svd-and-applications.html
Summary of PCA and SVD