from sklearn import decomposition
import numpy as np
import matplotlib.pyplot as plt

# Note: the covariance matrices below are copied from the original code; they are
# not symmetric positive semi-definite, so np.random.multivariate_normal will
# emit a RuntimeWarning but still draws samples.

# Class A: two Gaussian blobs of 50 points each.
a1_mean = [1, 1]
a1_cov = [[2, .99], [1, 1]]
a1 = np.random.multivariate_normal(a1_mean, a1_cov, 50)

a2_mean = [5, 5]
a2_cov = [[2, .99], [1, 1]]
a2 = np.random.multivariate_normal(a2_mean, a2_cov, 50)

a = np.vstack((a1, a2))  # a1: 50x2, a2: 50x2 -> stacked vertically into 100x2

# Class B: one blob of 100 points.
b_mean = [5, 0]
b_cov = [[.5, -1], [-0.9, .5]]
b = np.random.multivariate_normal(b_mean, b_cov, 100)

plt.scatter(a[:, 0], a[:, 1], c='r', marker='o')
plt.scatter(b[:, 0], b[:, 1], c='g', marker='*')
plt.show()

# Naive idea: merge A and B, project to one dimension, and see whether the
# two classes can still be separated. Plotting the 1-D values against
# themselves just lays them out along the diagonal.
kpca = decomposition.KernelPCA(kernel='cosine', n_components=1)
ab = np.vstack((a, b))
ab_transformed = kpca.fit_transform(ab)
plt.scatter(ab_transformed, ab_transformed, c='b', marker='*')
plt.show()

# Same projection with the default linear kernel, for comparison.
kpca = decomposition.KernelPCA(n_components=1)
ab_transformed = kpca.fit_transform(ab)
plt.scatter(ab_transformed, ab_transformed, c='b', marker='*')
plt.show()
Note 1: The book says cosine-kernel PCA works better here than the default linear PCA, but isn't the cosine projection actually more compact? The data does not spread out more.
I still don't understand when to use a non-linear kernel and when not to.
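One way to make the comparison less subjective (my own illustration, not from the book) is to score each 1-D projection by how well it separates the two known classes. The separation_score helper below is hypothetical; it uses the gap between class means relative to the within-class spread, on the stacked ab array from the code above:

import numpy as np
from sklearn import decomposition

def separation_score(z, n_a):
    # Hypothetical helper: ratio of the gap between class means to the
    # pooled within-class standard deviation. z is the 1-D projection of
    # all points; the first n_a rows belong to class A.
    za, zb = z[:n_a].ravel(), z[n_a:].ravel()
    return abs(za.mean() - zb.mean()) / (za.std() + zb.std())

# ab is the stacked 200x2 array from above; class A occupies rows 0..99.
for kernel in ('linear', 'cosine', 'rbf'):
    kpca = decomposition.KernelPCA(kernel=kernel, n_components=1)
    z = kpca.fit_transform(ab)
    print(kernel, separation_score(z, 100))

A higher score means the two classes end up farther apart in the 1-D projection, which gives a concrete way to decide between kernels on this data rather than judging the scatter plots by eye.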
fit(X, y=None)
    Fit the model from data in X.
    Parameters:
        X : array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples and n_features is the number of features.

fit_transform(X, y=None, **params)
    Fit the model from data in X and transform X.
    Parameters:
        X : array-like, shape (n_samples, n_features)
            Training vector, where n_samples is the number of samples and n_features is the number of features.
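A minimal sketch of the two calls, using the stacked ab array from the example above: fit learns the kernel principal components, while fit_transform fits and returns the projected data in one call, equivalent in result to fit(X) followed by transform(X):

from sklearn import decomposition

kpca = decomposition.KernelPCA(kernel='cosine', n_components=1)

# fit only: learns the eigenvectors of the centered kernel matrix
kpca.fit(ab)
z1 = kpca.transform(ab)

# fit_transform: fit and projection in a single call
z2 = decomposition.KernelPCA(kernel='cosine', n_components=1).fit_transform(ab)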
A brief introduction to sklearn.decomposition.KernelPCA
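For reference, a sketch of the commonly used constructor arguments (the parameter names and defaults are from the scikit-learn API; the values shown are only illustrative):

from sklearn.decomposition import KernelPCA

kpca = KernelPCA(
    n_components=2,  # number of components to keep (None keeps all)
    kernel='rbf',    # 'linear' (the default), 'poly', 'rbf', 'sigmoid', 'cosine'
    gamma=None,      # kernel coefficient for rbf/poly/sigmoid kernels
    degree=3,        # degree of the polynomial kernel
    coef0=1,         # independent term in poly/sigmoid kernels
)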