The necessity of dimensionality reduction
1. Multicollinearity: predictor variables are correlated with one another. Multicollinearity makes the solution space unstable, which can lead to incoherent results.
2. High-dimensional space is inherently sparse. For a one-dimensional standard normal distribution, about 68% of the probability mass lies within one standard deviation of the mean; for a 10-dimensional standard normal, only about 0.02% of the mass lies within the unit ball of radius one standard deviation (see the numerical check after this list).
3. Too many variables can hinder the search for patterns in the data.
4. Analyzing variables one at a time may miss latent relationships between them. For example, several predictor variables may belong to a single group that reflects only one particular aspect of the data.
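The sparsity claim in point 2 can be checked numerically: for a d-dimensional standard normal, the squared norm follows a chi-squared distribution with d degrees of freedom, so the mass within one standard deviation of the origin is P(χ²_d ≤ 1). A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import chi2, norm

# 1-D: mass within +/- 1 standard deviation of the mean
p_1d = norm.cdf(1) - norm.cdf(-1)
print(f"1-D mass within one sigma: {p_1d:.2%}")        # ~68.27%

# d-D: ||x||^2 of a standard normal follows chi-squared with d dof,
# so the mass inside the unit ball (radius one sigma) is P(chi2_d <= 1).
for d in (2, 5, 10):
    print(f"{d:2d}-D mass inside unit ball: {chi2.cdf(1, df=d):.4%}")
# For d = 10 this gives ~0.017%, i.e. roughly the 0.02% quoted above.
```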
The principle of dimensionality reduction
Dimensionality reduction projects the original data into a new, lower-dimensional space. The coordinate directions of that low-dimensional space are derived from the original feature space according to the principle of "variance maximization".
PCA does not use the labels of the data;
LDA uses the labels: its criterion is "after projection, the within-class variance is smallest and the between-class variance is largest".
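As a concrete illustration of the "variance maximization" principle, here is a minimal NumPy sketch of PCA (not scikit-learn's implementation): center the data, eigendecompose the covariance matrix, and project onto the eigenvectors with the largest eigenvalues. The function name and data sizes are illustrative.

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project X onto the n_components directions of largest variance."""
    X_centered = X - X.mean(axis=0)            # center each feature
    cov = np.cov(X_centered, rowvar=False)     # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: cov is symmetric
    order = np.argsort(eigvals)[::-1]          # sort by variance, descending
    W = eigvecs[:, order[:n_components]]       # top principal directions
    return X_centered @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(pca_project(X).shape)                    # (100, 2)
```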
Comparing PCA and LDA (scikit-learn provides ready-made implementations of both), here is an example where LDA works better than PCA.
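A minimal sketch of such a comparison using scikit-learn's PCA and LinearDiscriminantAnalysis; the wine dataset and the logistic-regression classifier are illustrative choices, not from the original.

```python
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for name, reducer in [("PCA", PCA(n_components=2)),
                      ("LDA", LinearDiscriminantAnalysis(n_components=2))]:
    clf = make_pipeline(StandardScaler(), reducer,
                        LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy = {clf.score(X_test, y_test):.3f}")

# LDA typically scores higher here because it uses the class labels when
# choosing the 2-D projection, while PCA ignores them.
```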