Original address: https://www.jiqizhixin.com/articles/2018-04-03-5
The k-nearest neighbors algorithm, abbreviated k-NN, is a classic machine learning algorithm that is often overlooked in today's deep-learning era. This tutorial will walk you through building a k-NN classifier with Scikit-learn and applying it to the MNIST dataset. Then, the author will guide you through building your own k-NN implementation, developing one that is both more accurate and faster than Scikit-learn's k-NN.
1. The k-Nearest Neighbor Classification Model
The k-nearest neighbor algorithm is an easy-to-implement supervised machine learning algorithm with robust classification performance. One of its biggest advantages is that it is a lazy algorithm: the model can classify data without a training phase, unlike ML algorithms such as SVMs, regression models, and multilayer perceptrons, which must be trained first.
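To see what "no training phase" means in practice, here is a minimal sketch using Scikit-learn's `KNeighborsClassifier` on the small built-in digits dataset (chosen here as a stand-in for full MNIST; the dataset choice and variable names are illustrative, not the pipeline built later in the article):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Small built-in digits dataset as a stand-in for full MNIST.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Fitting" a k-NN model essentially just stores the training data;
# the distance computations all happen at prediction time.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

print(knn.score(X_test, y_test))  # accuracy on the held-out split
```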
2. How K-NN Works
To classify a given data point P, a k-NN model first uses a distance metric to compare P with every other point in its database.
A distance metric, such as Euclidean distance, is simply a function that takes two points as input and returns the distance between them.
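As a concrete example, a bare-bones Euclidean distance function in NumPy (the function name is illustrative, not the metric implementation the article uses later) might look like this:

```python
import numpy as np

def euclidean_distance(a, b):
    """Return the Euclidean (L2) distance between points a and b."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

print(euclidean_distance([0, 0], [3, 4]))  # 5.0
```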
Therefore, it is reasonable to assume that two points separated by a smaller distance are more similar than two points separated by a greater distance. This is the core idea of k-NN.
This comparison returns an unsorted array in which each entry holds the distance between P and one of the N data points in the model's database, so the returned array has size N.
The "k" in k-nearest neighbors is an arbitrary value (usually between 3 and 11) indicating how many of the most similar points the model should consider when classifying P. The model then takes the k most similar values and uses a voting procedure to decide which class P belongs to, as the sketch below illustrates.
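Putting the pieces together, here is a minimal sketch of the whole procedure (compute all N distances, keep the k smallest, take a majority vote). The function and variable names are hypothetical, not the implementation the author builds later:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, p, k=3):
    """Classify point p by majority vote among its k nearest training points."""
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    # Distances from p to each of the N points in the model's database
    # (the size-N array described above).
    distances = np.sqrt(np.sum((X_train - np.asarray(p)) ** 2, axis=1))
    # Indices of the k smallest distances, i.e. the k most similar points.
    nearest = np.argsort(distances)[:k]
    # Majority vote over the labels of those k neighbors.
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Tiny example: p is closer to the two class-0 points than to the class-1 point.
print(knn_predict([[0, 0], [1, 1], [5, 5]], [0, 0, 1], [0.5, 0.5], k=3))  # 0
```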
"Reprint" using Scikit-learn to construct K-nearest neighbor algorithm, classify mnist data set