<span style= "font-family:arial, Helvetica, Sans-serif; Background-color:rgb (255, 255, 255); " > Today I want to share dynamic clustering algorithms and their implementation in R. In the previous section we covered the distance between points, the distance between classes, and the classic hierarchical clustering method; today we look at several dynamic clustering algorithms. </span>
First up is K-means, rated as one of the top ten data mining algorithms (K is the number of clusters and "means" refers to the average; the main difficulty of the algorithm lies in choosing K).
STEP1: Select K points as the initial centroids;
STEP2: Assign each remaining point to the nearest centroid, forming K clusters;
STEP3: Recalculate the centroid of each cluster (the coordinate mean);
STEP4: Repeat steps 2-3 until the centroids no longer change.
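The four steps above can be sketched directly in R. This is a minimal illustration of the classic Lloyd-style iteration, not the built-in kmeans(); the function name my_kmeans and the convergence tolerance are my own choices:

```r
# Minimal K-means sketch (my_kmeans is an illustrative name, not a standard function).
my_kmeans <- function(x, k, max_iter = 100) {
  x <- as.matrix(x)
  # STEP1: pick k data points as the initial centroids
  centers <- x[sample(nrow(x), k), , drop = FALSE]
  for (i in seq_len(max_iter)) {
    # STEP2: assign each point to the nearest centroid (Euclidean distance)
    d <- as.matrix(dist(rbind(centers, x)))[-(1:k), 1:k]
    cluster <- max.col(-d)
    # STEP3: recompute each centroid as the coordinate mean of its cluster
    new_centers <- t(sapply(seq_len(k), function(j)
      colMeans(x[cluster == j, , drop = FALSE])))
    # STEP4: stop when the centroids no longer change
    if (all(abs(new_centers - centers) < 1e-8)) break
    centers <- new_centers
  }
  list(cluster = cluster, centers = centers)
}
```

On two well-separated point clouds this recovers the obvious grouping; the real kmeans() adds better initialization and empty-cluster handling.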
Next, let's see how K-means is done in R:
x <- iris[, 1:4]
km <- kmeans(x, 3)  # data + number of clusters
This shows the power of R, but choosing K does take some skill: one approach is to run hierarchical clustering first, which suggests a suitable value for K.
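That idea can be sketched as follows; the ward.D2 linkage and the table() comparison are my own choices, not prescribed by the text:

```r
# Use hierarchical clustering to suggest K, then run K-means with that K.
x <- iris[, 1:4]
hc <- hclust(dist(x), method = "ward.D2")  # Ward linkage is one common choice
plot(hc)                       # the dendrogram hints at a natural number of clusters
k <- 3                         # for iris, the tree suggests about 3 groups
grp <- cutree(hc, k)           # labels from the hierarchical tree
km <- kmeans(x, centers = k)   # K-means run with the suggested K
table(grp, km$cluster)         # compare the two partitions
```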
K-means Advantages:
Efficient (running time is roughly linear in the number of points), though the result can be sensitive to the initial centroid selection;
Disadvantages:
1. Cannot handle non-spherical clusters;
2. Cannot handle clusters of different scales or densities;
3. Outliers can strongly distort the result (so remove them first).
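For the different-scales part of disadvantage 2, a common precaution (my suggestion, not from the original text) is to standardize the columns before clustering; the nstart reruns also guard against an unlucky initial centroid selection:

```r
# Standardize each column to mean 0, sd 1 so no single
# variable dominates the Euclidean distance computation.
x <- scale(iris[, 1:4])
km <- kmeans(x, centers = 3, nstart = 25)  # 25 random starts, best result kept
```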
Density-based approach: DBSCAN (Density-Based Spatial Clustering of Applications with Noise)
As noted above, K-means suits roughly spherical distributions and is ineffective on irregularly shaped point groups; next we introduce another clustering method that solves this kind of problem.
Start by explaining the basic concepts:
R-neighborhood: the area within radius R of a given point;
Core point: if the R-neighborhood of a point contains at least M points (a given minimum number), the point is called a core point;
Directly density-reachable: if a point P lies in the R-neighborhood of a core point Q, then P is directly density-reachable from Q;
Density-reachable: if there is a chain of points p1, p2, ..., pn with p1 = Q and pn = P, where each p(i+1) is directly density-reachable from p(i) with respect to R and M, then P is density-reachable from Q with respect to R and M (see figure). Note that density-reachability is one-way: it is not symmetric.
PS: points that are density-reachable from a core point are put into the same cluster;
Density-connected: if there exists a point O in the sample set D such that both P and Q are density-reachable from O with respect to R and M, then P and Q are density-connected with respect to R and M (see figure).
Basic DBSCAN algorithm:
STEP1: Choose appropriate values of R and M;
STEP2: Scan all sample points; if the R-neighborhood of a point P contains more than M points, create a new cluster with P as a core point;
STEP3: Repeatedly find the points that are directly density-reachable (and hence density-reachable) from the core points and add them to the corresponding clusters; merge clusters whose core points are density-connected;
STEP4: When no new point can be added to any cluster, the algorithm ends.
PS: DBSCAN is sensitive to the user-defined parameters; slightly different values can produce very different results, and there is no general rule for choosing them: it comes down to experience.
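The four steps can be sketched as a toy R implementation (illustrative only, not the fpc package's dbscan(); the name toy_dbscan, the queue-based expansion, and labelling noise as 0 are my choices):

```r
# Toy DBSCAN sketch: eps plays the role of R, min_pts the role of M.
# Returned labels: 0 = noise, positive integers = cluster ids.
toy_dbscan <- function(x, eps, min_pts) {
  x <- as.matrix(x)
  n <- nrow(x)
  d <- as.matrix(dist(x))            # all pairwise distances
  labels <- rep(0L, n)
  visited <- rep(FALSE, n)
  cluster_id <- 0L
  for (p in seq_len(n)) {
    if (visited[p]) next
    visited[p] <- TRUE
    neighbors <- which(d[p, ] <= eps)        # R-neighborhood (includes p)
    if (length(neighbors) < min_pts) next    # not a core point (STEP2)
    cluster_id <- cluster_id + 1L            # new cluster around core point p
    labels[p] <- cluster_id
    queue <- setdiff(neighbors, p)
    # STEP3: expand via directly density-reachable points
    while (length(queue) > 0) {
      q <- queue[1]; queue <- queue[-1]
      if (labels[q] == 0L) labels[q] <- cluster_id
      if (!visited[q]) {
        visited[q] <- TRUE
        q_neighbors <- which(d[q, ] <= eps)
        if (length(q_neighbors) >= min_pts)  # q is itself a core point
          queue <- union(queue, q_neighbors)
      }
    }
  }
  labels  # STEP4: finished once no point can be added to any cluster
}
```

On two dense, well-separated blobs plus a lone far-away point, this labels the blobs as two clusters and the lone point as noise.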
In R it is implemented as follows:
install.packages("fpc")
library(fpc)
iris.data <- iris[, -5]
ds <- dbscan(iris.data, eps = 1.5, MinPts = 30, scale = TRUE, showplot = TRUE, method = "raw")
The result looks like this:
R Language and Data Analysis (4): Clustering Algorithms, Part 2