These are notes on the k-means clustering algorithm.
In short, k-means assigns each point to its nearest cluster center and repeatedly updates the centers so as to minimize the sum of squared distances from the points to their assigned centers.
The results look good, and it is a worthwhile algorithm to implement yourself.
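As a rough sketch of what this means, here is a bare-bones k-means loop in plain NumPy. This is an illustrative re-implementation of Lloyd's algorithm (the iteration that cv2.kmeans runs internally), not the OpenCV routine used below; the function name and defaults are my own.

```python
import numpy as np

def kmeans(points, k, iters=10, seed=0):
    """Minimal sketch of Lloyd's algorithm: assign, then re-center, repeatedly."""
    rng = np.random.default_rng(seed)
    # pick k distinct data points as the initial centers
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # distance from every point to every center, shape (N, k)
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        # assign each point to its nearest center
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned points
        # (keep the old center if a cluster ends up empty)
        centers = np.array([points[labels == i].mean(axis=0)
                            if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return labels, centers
```

On well-separated data this converges in a handful of iterations; production code would also check for convergence instead of running a fixed number of loops.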
I. Picking Random Points
import numpy as np
import cv2
from matplotlib import pyplot as plt

X = np.random.randint(25, 50, (25, 2))
Y = np.random.randint(60, 85, (25, 2))
Z = np.vstack((X, Y))

# convert to np.float32
Z = np.float32(Z)

plt.hist(Z, 100, [0, 100]), plt.show()
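One thing worth noting: cv2.kmeans expects its input samples as a float32 array of shape (N, features). The stacking and conversion above produce exactly that, which can be checked without OpenCV installed:

```python
import numpy as np

# Same construction as above: two 25x2 blocks of random "height/weight" pairs
X = np.random.randint(25, 50, (25, 2))
Y = np.random.randint(60, 85, (25, 2))
Z = np.float32(np.vstack((X, Y)))

print(Z.shape, Z.dtype)  # (50, 2) float32
```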
II. The k-means Part
Call cv2.kmeans from the OpenCV (cv2) library, splitting the points into two classes, A and B.
# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(Z, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Now separate the data, note the flatten()
A = Z[label.ravel() == 0]
B = Z[label.ravel() == 1]
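The first return value, ret, is the "compactness": the sum of squared distances from every point to its assigned center. With small made-up stand-ins for the points, labels, and centers (hypothetical values, just for illustration), it can be reproduced by hand in NumPy:

```python
import numpy as np

# Hypothetical stand-ins for Z, label, and center returned by cv2.kmeans
Z = np.float32([[25, 30], [28, 35], [70, 75], [72, 80]])
label = np.array([0, 0, 1, 1])
center = np.float32([[26.5, 32.5], [71.0, 77.5]])

# compactness = sum of squared distances from each point to its own center
compactness = ((Z - center[label]) ** 2).sum()
print(compactness)  # 31.5
```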
III. Clustering Results
Plot the two clusters and their centers.
# Plot the data
plt.scatter(A[:, 0], A[:, 1])
plt.scatter(B[:, 0], B[:, 1], c='r')
plt.scatter(center[:, 0], center[:, 1], s=80, c='y', marker='s')
plt.xlabel('Height'), plt.ylabel('Weight')
plt.show()
----------------------------------------
Finally
Code Summary
import numpy as np
import cv2
from matplotlib import pyplot as plt

X = np.random.randint(25, 50, (25, 2))
Y = np.random.randint(60, 85, (25, 2))
Z = np.vstack((X, Y))

# convert to np.float32
Z = np.float32(Z)
plt.hist(Z, 100, [0, 100]), plt.show()

# define criteria and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center = cv2.kmeans(Z, 2, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

# Now separate the data, note the flatten()
A = Z[label.ravel() == 0]
B = Z[label.ravel() == 1]

# Plot the data
plt.scatter(A[:, 0], A[:, 1])
plt.scatter(B[:, 0], B[:, 1], c='r')
plt.scatter(center[:, 0], center[:, 1], s=80, c='y', marker='s')
plt.xlabel('Height'), plt.ylabel('Weight')
plt.show()
Machine learning notes: implementing the k-means algorithm in Python.