Implementing the k-means algorithm in Python
This article shares Python code implementing the k-means algorithm, for your reference. The details are as follows.
This is also exercise 9.4 from Zhou Zhihua's Machine Learning.
The dataset is watermelon dataset 4.0, as shown below:
Serial number, density, sugar content
1, 0.697, 0.46
2, 0.774, 0.376
3, 0.634, 0.264
4, 0.608, 0.318
5, 0.556, 0.215
6, 0.403, 0.237
7, 0.481, 0.149
8, 0.437, 0.211
9, 0.666, 0.091
10, 0.243, 0.267
11, 0.245, 0.057
12, 0.343, 0.099
13, 0.639, 0.161
14, 0.657, 0.198
15, 0.36, 0.37
16, 0.593, 0.042
17, 0.719, 0.103
18, 0.359, 0.188
19, 0.339, 0.241
20, 0.282, 0.257
21, 0.784, 0.232
22, 0.714, 0.346
23, 0.483, 0.312
24, 0.478, 0.437
25, 0.525, 0.369
26, 0.751, 0.489
27, 0.532, 0.472
28, 0.473, 0.376
29, 0.725, 0.445
30, 0.446, 0.459
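If you do not have the CSV file handy, the dataset above can also be hard-coded directly as a NumPy array (each row is density, then sugar content), and the `read_csv` call in the script below replaced with it:

```python
import numpy as np

# Watermelon dataset 4.0: each row is (density, sugar content)
data = np.array([
    [0.697, 0.46],  [0.774, 0.376], [0.634, 0.264], [0.608, 0.318],
    [0.556, 0.215], [0.403, 0.237], [0.481, 0.149], [0.437, 0.211],
    [0.666, 0.091], [0.243, 0.267], [0.245, 0.057], [0.343, 0.099],
    [0.639, 0.161], [0.657, 0.198], [0.36,  0.37],  [0.593, 0.042],
    [0.719, 0.103], [0.359, 0.188], [0.339, 0.241], [0.282, 0.257],
    [0.784, 0.232], [0.714, 0.346], [0.483, 0.312], [0.478, 0.437],
    [0.525, 0.369], [0.751, 0.489], [0.532, 0.472], [0.473, 0.376],
    [0.725, 0.445], [0.446, 0.459],
])
print(data.shape)  # (30, 2)
```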
The algorithm itself is simple, so it is not explained here, and the code is not complicated either. Here it is:
```python
# -*- coding: utf-8 -*-
"""Exercise 9.4"""
import sys
import random

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv(filepath_or_buffer='../dataset/watermelon4.0.csv',
                   sep=',')[["density", "sugar content"]].values

############################## k-means ##############################

k = int(sys.argv[1])

# Randomly choose k samples from data as the initial mean vectors
mean_vectors = random.sample(list(data), k)

def dist(p1, p2):
    return np.sqrt(sum((p1 - p2) * (p1 - p2)))

while True:
    print(mean_vectors)
    clusters = list(map(lambda x: [x], mean_vectors))
    # Assign each sample to the cluster of its nearest mean vector
    for sample in data:
        distances = list(map(lambda m: dist(sample, m), mean_vectors))
        min_index = distances.index(min(distances))
        clusters[min_index].append(sample)
    new_mean_vectors = []
    for c, v in zip(clusters, mean_vectors):
        new_mean_vector = sum(c) / len(c)
        # If the relative change between the new mean vector and the old
        # one is less than 0.0001, do not update the mean vector
        if all(np.divide(new_mean_vector - v, v) < np.array([0.0001, 0.0001])):
            new_mean_vectors.append(v)
        else:
            new_mean_vectors.append(new_mean_vector)
    if np.array_equal(mean_vectors, new_mean_vectors):
        break
    else:
        mean_vectors = new_mean_vectors

# Show the clustering result
total_colors = ['r', 'y', 'g', 'b', 'c', 'm', 'k']
colors = random.sample(total_colors, k)
for cluster, color in zip(clusters, colors):
    density = list(map(lambda arr: arr[0], cluster))
    sugar_content = list(map(lambda arr: arr[1], cluster))
    plt.scatter(density, sugar_content, c=color)
plt.show()
```
Running mode: enter python k_means.py 4 at the command line, where 4 is the value of k.
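The core of each loop pass in the script is an assignment step followed by an update step. A minimal self-contained NumPy sketch of one such iteration, using a tiny made-up sample set and hypothetical initial means (not the random ones the script draws):

```python
import numpy as np

# Four (density, sugar content) samples taken from the dataset above
samples = np.array([[0.697, 0.460], [0.774, 0.376],
                    [0.243, 0.267], [0.245, 0.057]])
means = np.array([[0.70, 0.40], [0.25, 0.20]])  # hypothetical initial means

# Assignment step: index of the nearest mean vector for each sample
dists = np.linalg.norm(samples[:, None, :] - means[None, :, :], axis=2)
labels = dists.argmin(axis=1)

# Update step: each mean becomes the average of its assigned samples
new_means = np.array([samples[labels == j].mean(axis=0)
                      for j in range(len(means))])
print(labels)  # [0 0 1 1]
print(new_means)
```

Repeating these two steps until the means stop moving (the 0.0001 threshold in the script) is all k-means does.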
Below are the running results for k equal to 3, 4, and 5. Because the initial mean vectors are chosen at random, the result differs from run to run.
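To sanity-check the hand-written script, the same clustering can be produced with scikit-learn's KMeans, a separate library implementation rather than the code above. A sketch, assuming scikit-learn is installed; the four-point array is a stand-in for the full watermelon 4.0 data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in sample in the same (density, sugar content) format;
# replace with the full 30-row watermelon 4.0 array.
data = np.array([[0.697, 0.460], [0.774, 0.376],
                 [0.243, 0.267], [0.245, 0.057]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)           # cluster index of each sample
print(kmeans.cluster_centers_)  # final mean vectors
```

Unlike the script above, scikit-learn restarts from n_init different random initializations and keeps the best run, which makes the result less sensitive to the random starting means.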
That is all the content of this article. I hope it is helpful for your learning.