Clustering analysis is an important area of unsupervised learning. In unsupervised learning, the data carry no category labels, and the algorithm must extract regularities by exploring the raw data itself. Clustering attempts to partition the samples in a dataset into several disjoint subsets, each of which is called a "cluster". What follows is a comparison of the various clustering algorithms in Sklearn.
Kmeans
Given a number k, the Kmeans algorithm divides the data set into k "clusters" \mathcal C = \{C_1, C_2, \cdots, C_k\}, regardless of whether that partition is reasonable or meaningful. The algorithm minimizes the squared error:
E = \sum_{i=1}^k \sum_{x \in C_i} \Vert x - \mu_i \Vert^2 \quad \quad \quad (1)
where \mu_i = \frac{1}{\vert C_i \vert} \sum_{x \in C_i} x is the mean vector, or centroid, of the cluster C_i, and \Vert x - \mu_i \Vert^2 is the squared distance (in fact, a norm) from each sample point to the centroid. A brief note on distance measures follows.
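The objective in (1) can be computed directly from data and cluster assignments; a minimal numpy sketch (the toy points and labels below are invented for illustration):

```python
import numpy as np

# Toy data: six 2-D points with an assumed assignment into k = 2 clusters.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])
labels = np.array([0, 0, 0, 1, 1, 1])

E = 0.0
for i in np.unique(labels):
    cluster = X[labels == i]
    mu = cluster.mean(axis=0)          # centroid mu_i of cluster C_i
    E += ((cluster - mu) ** 2).sum()   # sum of squared distances to mu_i

print(E)
```

A smaller E means the points within each cluster sit closer to their centroid.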
The most common distance metric is the Minkowski distance (i.e. the p-norm):
dist_{mk}(x_i, x_j) = \Big( \sum_{u=1}^n \vert x_{iu} - x_{ju} \vert^p \Big)^{1/p} \quad \quad \quad (2)
When p = 2, the Minkowski distance is the Euclidean distance (the 2-norm).
When p = 1, the Minkowski distance is the Manhattan distance (the 1-norm, also called the cityblock distance).
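Formula (2) and its two special cases can be sketched in a few lines of numpy (the helper name `minkowski` and the sample vectors are chosen here for illustration):

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance (p-norm) between two vectors, as in formula (2)."""
    return (np.abs(x - y) ** p).sum() ** (1.0 / p)

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])

print(minkowski(a, b, 2))  # Euclidean distance: 5.0
print(minkowski(a, b, 1))  # Manhattan distance: 7.0
```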
The above applies to numeric attributes; for discrete attributes there are corresponding distance definitions as well. Finally, if you want to determine an appropriate distance function for real data, this can be achieved through "distance metric learning".
That is to say, formula (1) asks for k clusters such that, within each cluster, all the sample points lie as close as possible to the cluster's centroid.
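Since the article surveys Sklearn's clustering algorithms, here is a minimal sketch of minimizing (1) with `sklearn.cluster.KMeans` (the toy data are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated groups of points.
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [5.0, 5.0], [5.1, 4.9], [4.8, 5.2]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster assignment of each sample
print(km.cluster_centers_)  # centroids mu_i
print(km.inertia_)          # the squared error E from formula (1)
```

`inertia_` is exactly the objective E that Kmeans tries to minimize, so it can be used to compare runs with different k.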
The basic flow of the Kmeans algorithm is described below.
Input: Sample Data Set D