Original Address http://blog.sina.com.cn/s/blog_62186b460101ard2.html
Only the more important parts have been translated here.
In addition, there is an article on hierarchical clustering at http://blog.csdn.net/jwh_bupt/article/details/7685809.
Cluster analysis groups data objects based only on the information found in the data that describes the objects and their relationships. The goal is for objects within a group to be similar to each other and for objects in different groups to be different. The greater the similarity within a group and the greater the difference between groups, the better the clustering.
First, the different types of clustering are introduced; they usually fall into the following categories:
(1) Hierarchical versus partitional: if clusters are allowed to have sub-clusters, we get a hierarchical clustering. A hierarchical clustering is a family of nested clusters organized as a tree. A partitional clustering simply divides the data objects into non-overlapping subsets (clusters), so that each data object belongs to exactly one subset.
(2) Exclusive, overlapping, and fuzzy: exclusive means that each object is assigned to a single cluster. Overlapping or fuzzy clustering reflects the fact that an object can belong to more than one group at a time. In fuzzy clustering, each data object belongs to every cluster with a membership weight between 0 and 1, and the membership weights of each object across all clusters often sum to 1.
(3) Complete versus partial: a complete clustering assigns every object to some cluster. In a partial clustering, some objects may not belong to any group, such as noisy objects.
...
Basic K-means
Following the algorithm, an implementation is available at:
https://github.com/intergret/snippet/blob/master/Kmeans.py
or http://www.oschina.net/code/snippet_176897_14731.
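As a rough illustration of the algorithm described above (not the linked implementation), a minimal pure-Python K-means sketch might look like the following; the function names and the convergence check are my own choices:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iterations=100):
    """Basic K-means: pick k initial centroids, then alternate
    assignment and centroid-update steps until convergence."""
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid).
        new_centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:  # converged: assignments stopped changing
            break
        centroids = new_centroids
    return centroids, clusters
```

For example, on two well-separated groups of three points each, `kmeans(points, 2)` recovers the two groups.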
Agglomerative hierarchical clustering
Following the algorithm, an implementation is available below. It first computes the distance between every pair of points, then repeatedly merges the closest pairs. In addition, to prevent excessive merging, the defined exit condition is that 90% of the clusters have been merged, i.e., the current number of clusters is 10% of the initial number:
https://github.com/intergret/snippet/blob/master/HAC.py
or http://www.oschina.net/code/snippet_176897_14732.
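A minimal sketch of this idea (assuming single linkage, i.e., the distance between two clusters is the minimum point-to-point distance; the `target_fraction=0.1` default encodes the 10% exit condition mentioned above):

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def hac(points, target_fraction=0.1):
    """Agglomerative clustering: start with each point as its own
    cluster; repeatedly merge the closest pair of clusters until only
    target_fraction of the initial clusters remain."""
    clusters = [[p] for p in points]
    target = max(1, int(len(clusters) * target_fraction))
    while len(clusters) > target:
        # Find the pair of clusters with the smallest single-linkage
        # distance (minimum over all cross-cluster point pairs).
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist2(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))  # merge j into i
    return clusters
```

This naive version recomputes all pairwise distances on every merge; the linked implementation's approach of computing the distance matrix once up front is much more efficient.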
DBSCAN
Following the algorithm, an implementation is available at:
https://github.com/intergret/snippet/blob/master/Dbscan.py
or http://www.oschina.net/code/snippet_176897_14734.