Spectral Clustering Algorithm

Original address: http://blog.csdn.net/hjimce/article/details/45749757

Author: HJIMCE

First, Algorithm Overview

The spectral clustering algorithm is based on spectral graph theory. Compared with traditional clustering algorithms, it can cluster sample spaces of arbitrary shape and converges to the globally optimal solution. There are many variants of spectral clustering, among which the normalized cut is one of the simplest and most commonly used. The algorithm flow is as follows:

1. Use KNN to find each sample's K nearest neighbors and build the sparse similarity matrix W (n, n). (If you do not use KNN, you construct a fully connected graph rather than a sparse matrix, and when there are many samples the computation becomes very slow.) The similarity between two samples can be measured by the Gaussian kernel:

w_ij = exp(-||x_i - x_j||^2 / (2σ^2))

Here we first define the diagonal elements of W as 0 (w_ii = 0), and then normalize each row of W so that it sums to 1. (A runnable sketch of the whole flow follows step 3 below.)

2. After normalizing W, form the normalized Laplacian matrix L. Since each row of W sums to 1 and w_ii = 0, the diagonal elements of L are 1 and each of its rows sums to 0, namely:

L = I - W

3. Compute the eigenvectors corresponding to the k smallest eigenvalues of L (k is the number of clusters), and stack these k eigenvectors side by side to form a new feature matrix E (n, k). Each row of E corresponds to one sample of the original data, so we run k-means on these n rows (other clustering methods also work); the resulting clustering is the spectral clustering result.
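
Below is a minimal end-to-end sketch of steps 1-3, assuming scikit-learn and SciPy are available; the function name knn_spectral_cluster and the parameters k_neighbors and sigma are illustrative choices, not from the original post:

[Python]
import numpy as np
import scipy.sparse as sp
from sklearn.neighbors import kneighbors_graph
from sklearn.cluster import KMeans

def knn_spectral_cluster(X, n_clusters, k_neighbors=10, sigma=1.0):
    # Step 1: sparse KNN graph holding Euclidean distances; diagonal stays 0
    dist = kneighbors_graph(X, n_neighbors=k_neighbors, mode='distance',
                            include_self=False)
    W = dist.copy()
    W.data = np.exp(-W.data ** 2 / (2 * sigma ** 2))  # Gaussian similarity
    W = 0.5 * (W + W.T)  # symmetrize: KNN relations are not mutual
    row_sums = np.asarray(W.sum(axis=1)).ravel()
    W = sp.diags(1.0 / row_sums) @ W  # normalize each row to sum to 1

    # Step 2: normalized Laplacian L = I - W (dense here for simplicity)
    L = np.identity(X.shape[0]) - W.toarray()

    # Step 3: eigenvectors of the n_clusters smallest eigenvalues, then k-means.
    # L is not symmetric after row normalization, so use eig and keep real parts.
    eigvals, eigvecs = np.linalg.eig(L)
    order = np.argsort(eigvals.real)
    E = eigvecs[:, order[:n_clusters]].real  # row i of E represents sample i
    return KMeans(n_clusters=n_clusters).fit_predict(E)

For two well-separated Gaussian blobs like the test data in the next section, knn_spectral_cluster(data, 2) should recover the two groups.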

Second, Source Code Practice

[Python]
#coding=utf-8
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import random

# Generate two Gaussian-distributed sample classes for testing
# First sample class
mean1 = [0, 0]
cov1 = [[1, 0], [0, 1]]  # covariance matrix
x1, y1 = np.random.multivariate_normal(mean1, cov1, 100).T
data = []
for x, y in zip(x1, y1):
    data.append([x, y])
# Second sample class
mean2 = [3, 3]
cov2 = [[1, 0], [0, 1]]  # covariance matrix
x2, y2 = np.random.multivariate_normal(mean2, cov2, 100).T
for x, y in zip(x2, y2):
    data.append([x, y])
random.shuffle(data)  # shuffle the data
data = np.asarray(data, dtype=np.float32)

# The algorithm starts here
# Compute the pairwise weight matrix; in a real scenario with many samples,
# you would only compute the weights between adjacent vertices
m, n = data.shape
distance = np.zeros((m, m), dtype=np.float32)
for i in range(m):
    for j in range(m):
        if i == j:
            continue
        dis = sum((data[i] - data[j]) ** 2)
        distance[i, j] = dis
# Build the normalized Laplacian matrix
similarity = np.exp(-1. * distance / distance.std())
for i in range(m):
    similarity[i, i] = 0
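
The listing above breaks off after zeroing the diagonal of W. A hedged sketch of the remaining steps, following the outline in section First (the variable names beyond those already in the listing are ours):

[Python]
# Normalize each row of W so that it sums to 1
similarity = similarity / similarity.sum(axis=1, keepdims=True)
# Step 2: L = I - W
laplacian = np.identity(m) - similarity
# Step 3: eigenvectors of the 2 smallest eigenvalues (2 clusters), then k-means.
# L is not symmetric after row normalization, so keep the real parts.
eigvals, eigvecs = np.linalg.eig(laplacian)
order = np.argsort(eigvals.real)
features = eigvecs[:, order[:2]].real
labels = KMeans(n_clusters=2).fit_predict(features)
# Color the points by cluster; the two Gaussian blobs should be recovered
plt.scatter(data[:, 0], data[:, 1], c=labels)
plt.show()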
