Machine Learning (11): Spectral Clustering Algorithm


Original address: http://blog.csdn.net/hjimce/article/details/45749757

Author: HJIMCE

1. Overview of the Algorithm

The spectral clustering algorithm is based on spectral graph theory. Compared with traditional clustering algorithms, it can cluster sample spaces of arbitrary shape and converges to the globally optimal solution. Among the many formulations of spectral clustering, the normalized-cut variant is relatively simple and commonly used. The algorithm flow is as follows:

1. Use kNN search to find the K nearest neighbors of each sample, then construct the sparse similarity matrix W of size (n, n). (If you do not use kNN, you must build a fully connected graph rather than a sparse matrix; when there are many samples, solving then becomes very slow.) The similarity between two samples can be measured with the Gaussian kernel:

    w_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))

Here we first define the diagonal elements of W to be 0 (w_ii = 0), and then normalize each row of W so that it sums to 1.
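Step 1 can be sketched with scikit-learn's `kneighbors_graph`; note that the values of k, the bandwidth sigma, and the symmetrization rule below are illustrative assumptions, not choices fixed by the article:

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # n = 200 samples in 2-D

k = 10
# Sparse matrix of Euclidean distances to the k nearest neighbors
# (self-edges are excluded, so the diagonal stays 0, i.e. w_ii = 0).
D = kneighbors_graph(X, n_neighbors=k, mode='distance')
D.data = D.data ** 2                     # squared distances

sigma = np.sqrt(D.data.mean())           # heuristic bandwidth (an assumption)
W = D.copy()
W.data = np.exp(-W.data / (2 * sigma ** 2))
W = 0.5 * (W + W.T)                      # symmetrize: keep an edge if either endpoint has it

print(W.shape)                           # (200, 200)
print(W.nnz <= 2 * k * 200)              # sparse: at most ~2k entries per row
```

Each row of W then holds at most about 2k nonzero similarities, which is what makes the eigen-solve tractable for large n.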

2. After W has been normalized in this way, the normalized Laplacian matrix L (whose diagonal elements are 1 and whose rows each sum to 0) is simply:

    L = I - W
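The two properties claimed in step 2 (diagonal elements 1, each row summing to 0) can be checked on a small hand-made affinity matrix; the 4-node graph here is a toy illustration, not the article's data:

```python
import numpy as np

# Toy symmetric affinity matrix with zero diagonal (w_ii = 0).
W = np.array([[0., 1., 1., 0.],
              [1., 0., 1., 0.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

P = W / W.sum(axis=1, keepdims=True)   # row-normalize: each row of P sums to 1
L = np.eye(len(W)) - P                 # L = I - W with W row-normalized

print(np.allclose(L.sum(axis=1), 0))   # each row of L sums to 0 -> True
print(np.allclose(np.diag(L), 1))      # diagonal elements are 1 -> True
```

Because the diagonal of the row-normalized W is 0 and each of its rows sums to 1, subtracting it from the identity gives exactly the structure the article describes.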

3. Solve for the eigenvectors corresponding to the k smallest eigenvalues of L (where k is the number of clusters), and place these k eigenvectors side by side to form a new feature matrix E of size (n, k). Each row of E corresponds to one sample of the original data, so we run k-means on these n rows (other clustering methods can also be used); the resulting clustering is the spectral clustering result.
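As a cross-check on the three steps above, the whole pipeline is also available off the shelf: scikit-learn's `SpectralClustering` builds the affinity graph, computes the spectral embedding, and runs k-means internally. The parameters in this sketch are illustrative assumptions, not the article's choices:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Two well-separated Gaussian blobs, 100 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([3, 3], 1.0, size=(100, 2))])

sc = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
                        n_neighbors=10, random_state=0)
labels = sc.fit_predict(X)   # one cluster label (0 or 1) per sample

print(labels.shape)          # (200,)
```

This is convenient for validating a hand-rolled implementation like the one in the next section against a reference result.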

2. Source Code Practice

# coding=utf-8
import random
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

# Generate two Gaussian-distributed sample classes for testing.
# First sample class
mean1 = [0, 0]
cov1 = [[1, 0], [0, 1]]  # covariance matrix
x1, y1 = np.random.multivariate_normal(mean1, cov1, 100).T
data = []
for x, y in zip(x1, y1):
    data.append([x, y])
# Second sample class
mean2 = [3, 3]
cov2 = [[1, 0], [0, 1]]  # covariance matrix
x2, y2 = np.random.multivariate_normal(mean2, cov2, 100).T
for x, y in zip(x2, y2):
    data.append([x, y])
random.shuffle(data)  # shuffle the data
data = np.asarray(data, dtype=np.float32)

# Algorithm starts here.
# Compute the pairwise weight matrix. In a real use case with many
# samples, compute only the weights between neighboring vertices.
m, n = data.shape
distance = np.zeros((m, m), dtype=np.float32)
for i in range(m):
    for j in range(m):
        if i == j:
            continue
        distance[i, j] = np.sum((data[i] - data[j]) ** 2)

# Build the normalized Laplacian matrix.
similarity = np.exp(-1. * distance / distance.std())
for i in range(m):
    similarity[i, i] = 0
for i in range(m):
    similarity[i] = -similarity[i] / np.sum(similarity[i])  # row normalization
    similarity[i, i] = 1  # each row of the Laplacian sums to 0, diagonal is 1

# Compute the first k smallest eigenvalues of the Laplacian.
q, v = np.linalg.eig(similarity)
idx = np.argsort(q.real)
v = v[:, idx].real  # eig may return complex values; keep the real part

# Eigenvectors of the 3 smallest eigenvalues.
num_clusters = 3
newd = v[:, :num_clusters]

# k-means clustering on the spectral embedding.
clf = KMeans(n_clusters=num_clusters)
clf.fit(newd)

# Display the results.
colors = ['go', 'ro', 'yo', 'bo']
for i in range(data.shape[0]):
    plt.plot(data[i, 0], data[i, 1], colors[clf.labels_[i]])
plt.show()
Clustering results:



