KNN (k-Nearest Neighbors) is one of the simplest machine learning algorithms. It can be thought of as an algorithm with no model at all, or, put another way, the training data set itself serves as the model.
Its principle is very simple: first compute the distance between the point to be predicted and every point in the data set, then sort the distances from smallest to largest and take the k points with the smallest distances, count how often each label appears among those k points, and take the most frequent label as the predicted value.
The code sample is as follows:
First, import the required NumPy and Matplotlib libraries:
import numpy as np
import matplotlib.pyplot as plt
Create a small data set of your own:
data_x = [[0, 0], [0, 1], [1, 0], [1, 1], [5, 5], [5, 6], [6, 5], [6, 6]]
data_y = [0, 0, 0, 0, 1, 1, 1, 1]
Convert the data set to numpy.array format for easier processing:
x_train = np.array(data_x)
y_train = np.array(data_y)
Create a point to predict:
x_test = np.array([3.5, 3.5])
Draw a scatter plot of the data:
plt.scatter(x_train[y_train == 0, 0], x_train[y_train == 0, 1], color='g', marker='o')
plt.scatter(x_train[y_train == 1, 0], x_train[y_train == 1, 1], color='b', marker='+')
plt.scatter(x_test[0], x_test[1], color='r', marker='x')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
Calculate the Euclidean distance from the test point to every point in the training set:
distances = [np.sqrt(np.sum((x - x_test) ** 2)) for x in x_train]
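The same distances can also be computed in vectorized form. The following one-liner is a sketch of an equivalent alternative (not part of the original walkthrough); it relies on NumPy broadcasting and np.linalg.norm:
# Broadcasting subtracts x_test from every row of x_train;
# np.linalg.norm with axis=1 then gives the row-wise Euclidean distances.
distances = np.linalg.norm(x_train - x_test, axis=1)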
Sort the distances from smallest to largest and get the corresponding index values:
nearest = np.argsort(distances)
Take the labels of the k (here k = 4) nearest points:
k = 4
topk_y = [y_train[i] for i in nearest[:k]]
Count how many times each label appears among the k nearest points:
from collections import Counter
votes = Counter(topk_y)
Return the most frequent label as the predicted value:
predict_y = votes.most_common(1)[0][0]
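Putting the steps above together, the whole procedure can be wrapped in a single reusable function. The sketch below only reorganizes the code already shown; the function name knn_classify is an illustrative choice, not something defined in the original:
import numpy as np
from collections import Counter

def knn_classify(k, x_train, y_train, x_test):
    # Distance from the test point to every training point
    distances = [np.sqrt(np.sum((x - x_test) ** 2)) for x in x_train]
    # Indices of the training points sorted by distance, smallest first
    nearest = np.argsort(distances)
    # Labels of the k nearest training points
    topk_y = [y_train[i] for i in nearest[:k]]
    # The most frequent label among them is the prediction
    votes = Counter(topk_y)
    return votes.most_common(1)[0][0]

predict_y = knn_classify(4, x_train, y_train, x_test)
With the data set above, the test point [3.5, 3.5] sits closest to the cluster labeled 1, so predict_y should come out as 1.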