Improving the clustering accuracy of K-means with simulated annealing

1 K-means algorithm principle
K-means algorithm. Input: the number of clusters K and a data set containing N data objects. Output: K clusters that minimize the squared-error (minimum-variance) criterion.
Processing flow:
(1) Select K objects from the N data objects as the initial cluster centers.
(2) Repeat steps (3) and (4) until no cluster assignment changes.
(3) Compute the distance from each object to each cluster center (the mean of the objects in that cluster) and assign the object to the nearest center.
(4) Recompute the mean (center) of every cluster that has changed. A minimal C++ sketch of these steps follows.
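
Below is a minimal C++ sketch of steps (1) to (4). The Point struct, the function names, and the choice of the first K data points as default seeds are illustrative assumptions of mine, not details from the original post.

#include <cstddef>
#include <limits>
#include <vector>

// A two-dimensional data point; all names below are illustrative.
struct Point { double x = 0.0, y = 0.0; };

// Squared Euclidean distance between two points.
inline double sqDist(const Point& a, const Point& b) {
    const double dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Plain K-means following steps (1)-(4) above.
// If `centers` already holds k seed points, they are used as the initial centers;
// otherwise the first k data points are taken as seeds (step 1).
// Returns the cluster label of every point; `centers` ends up holding the final means.
std::vector<int> kmeans(const std::vector<Point>& data, int k, std::vector<Point>& centers) {
    if (static_cast<int>(centers.size()) != k)
        centers.assign(data.begin(), data.begin() + k);          // (1) initial centers

    std::vector<int> label(data.size(), -1);
    bool changed = true;
    while (changed) {                                            // (2) repeat until stable
        changed = false;
        for (std::size_t i = 0; i < data.size(); ++i) {          // (3) nearest-center assignment
            int best = 0;
            double bestDist = std::numeric_limits<double>::max();
            for (int c = 0; c < k; ++c) {
                const double d = sqDist(data[i], centers[c]);
                if (d < bestDist) { bestDist = d; best = c; }
            }
            if (label[i] != best) { label[i] = best; changed = true; }
        }
        std::vector<Point> sum(k);                               // (4) recompute the means
        std::vector<int> count(k, 0);
        for (std::size_t i = 0; i < data.size(); ++i) {
            sum[label[i]].x += data[i].x;
            sum[label[i]].y += data[i].y;
            ++count[label[i]];
        }
        for (int c = 0; c < k; ++c)
            if (count[c] > 0)
                centers[c] = { sum[c].x / count[c], sum[c].y / count[c] };
    }
    return label;
}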

2 An introduction to the improvement of K-means

The result of K-means depends strongly on the choice of the initial centers, and the algorithm is often trapped in a local optimum.

2.1 Examples

The 3 center points are first initialized at random; no data point has been assigned to a cluster yet, so all points are marked red by default (figure omitted).

The iteration then converges to a final clustering (figure omitted).

If the initial centers are chosen differently, the algorithm eventually converges to a different result (figures omitted), which illustrates how strongly the outcome depends on the initialization.

3 Workarounds

Many remedies have been proposed, but none of them solves the problem fundamentally. In practice, the initial seed points are usually either chosen at random or picked by inspection, that is, the clustering is run several times and the relatively best result is kept; this approach, however, is not automatic. More recent research combines heuristic algorithms such as simulated annealing and genetic algorithms with K-means clustering, which greatly reduces the risk of getting stuck in a local optimum. (The flowchart of the simulated annealing algorithm is omitted here.)
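
As a rough illustration of how the two can be combined (a sketch, not the implementation used later in this article), the code below wraps K-means in a simulated annealing loop: each step perturbs the current centers, refines them with K-means, and accepts the candidate by the Metropolis criterion. It reuses Point, sqDist() and kmeans() from the sketch in section 1; the temperature schedule, perturbation scale and random seed are assumed values.

#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Sum-of-squared-error criterion J: total squared distance of every point to its center.
double sse(const std::vector<Point>& data, const std::vector<int>& label,
           const std::vector<Point>& centers) {
    double j = 0.0;
    for (std::size_t i = 0; i < data.size(); ++i)
        j += sqDist(data[i], centers[label[i]]);
    return j;
}

// K-means wrapped in a simulated annealing loop. Each step jitters the current
// centers, refines them with K-means, and accepts the candidate if it lowers J,
// or with probability exp(-(Jnew - Jold) / T) otherwise (Metropolis criterion).
std::vector<Point> annealKMeans(const std::vector<Point>& data, int k) {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> u01(0.0, 1.0);
    std::normal_distribution<double> jitter(0.0, 5.0);   // how far centers are nudged

    std::vector<Point> current;                          // start from a plain K-means run
    std::vector<int> label = kmeans(data, k, current);
    double currentJ = sse(data, label, current);

    std::vector<Point> best = current;
    double bestJ = currentJ;

    for (double T = 1000.0; T > 1e-3; T *= 0.97) {       // geometric cooling schedule
        std::vector<Point> candidate = current;
        for (Point& c : candidate) { c.x += jitter(rng); c.y += jitter(rng); }
        std::vector<int> candLabel = kmeans(data, k, candidate);  // refine the jittered seeds
        const double candJ = sse(data, candLabel, candidate);

        // Always accept improvements; accept some worse states while T is still high.
        if (candJ < currentJ || u01(rng) < std::exp((currentJ - candJ) / T)) {
            current = candidate;
            currentJ = candJ;
            if (currentJ < bestJ) { best = current; bestJ = currentJ; }
        }
    }
    return best;                                         // centers with the lowest J seen
}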

4 Hands-on practice

"On paper to get the final light, I know this matter to preach", only know the principle and not to practice is never deep grasp a certain knowledge. I use C + + to realize the Kmeans algorithm based on simulated annealing and the common Kmeans algorithm, to carry on the comparative analysis.

4.1 Experimental Steps

1) First, we randomly generate two-dimensional data points to be clustered.

2) We then cluster them with the ordinary K-means algorithm and record the result.

3) Finally, we cluster them with the simulated-annealing-based K-means algorithm and record the result. A sketch of such an experiment driver follows this list.
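
A possible driver for these three steps is sketched below, reusing Point, kmeans(), sse() and annealKMeans() from the earlier sketches. The number of points, the coordinate range and K = 3 are arbitrary assumptions for illustration; the actual experiment in the original post may have used different settings.

#include <cstdio>
#include <random>
#include <vector>

int main() {
    // 1) Randomly generate two-dimensional data points.
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> coord(0.0, 100.0);
    std::vector<Point> data(300);
    for (Point& p : data) { p.x = coord(rng); p.y = coord(rng); }

    const int k = 3;

    // 2) Ordinary K-means.
    std::vector<Point> centersNor;
    std::vector<int> labelNor = kmeans(data, k, centersNor);
    const double jNor = sse(data, labelNor, centersNor);

    // 3) Simulated-annealing-based K-means.
    std::vector<Point> centersSa = annealKMeans(data, k);
    std::vector<int> labelSa = kmeans(data, k, centersSa);   // final assignment for those centers
    const double jSa = sse(data, labelSa, centersSa);

    std::printf("Jnor = %.1f  Jsa = %.1f\n", jNor, jSa);
    return 0;
}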

4.2 Conclusion

The experimental results show that the squared-error criterion J (the total squared distance from every point to its cluster center) achieved by the simulated-annealing-based K-means is Jsa = 19309.9.

The same criterion for the ordinary K-means is Jnor = 23678.8.

The simulated-annealing-based K-means therefore produces the better clustering; the trade-off is that the algorithm is more complex and takes longer to converge.
