In an earlier blog post on radial basis function neural networks (RBF NNs) I already described the nearest neighbor method, but in that RBF-focused discussion it did not get enough attention. So here is a concise, self-contained summary of the basic ideas behind the nearest neighbor and K-nearest neighbor methods.
The basic idea of the nearest neighbor
Store all the observed labeled samples. For a new test sample, find the labeled sample closest to it in the stored set, and output that sample's label as the prediction. This is a typical form of supervised learning, with many important applications in machine learning. For the nearest neighbor method, though, "training" almost loses its meaning: the algorithm barely trains at all, it simply stores the observed samples and their labels and learns no hypothesis. It therefore counts as a very lazy learning algorithm. Lazy at training time, it works hard at test time, because it must compute the similarity between every stored labeled sample and the input test sample, which makes the computational cost high. This reflects the difference between sharpening the axe and chopping the firewood: if you sharpen the axe first, the chopping takes little effort; if you skip the sharpening, you pay for it when it is time to chop.
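A minimal sketch of this idea in Python (the class and variable names are illustrative, not from the original post; Euclidean distance stands in for the similarity measure):

```python
import numpy as np

class NearestNeighbor:
    """Nearest neighbor: 'training' only stores the labeled samples."""
    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)  # stored samples
        self.y = np.asarray(y)               # stored labels
        return self

    def predict(self, x):
        # All the real work happens at test time: compute the distance
        # from the query to every stored sample ...
        dists = np.linalg.norm(self.X - np.asarray(x, dtype=float), axis=1)
        # ... and output the label of the single closest one.
        return self.y[np.argmin(dists)]

clf = NearestNeighbor().fit([[0, 0], [1, 1], [4, 4]], ["a", "b", "c"])
print(clf.predict([0.9, 1.2]))  # -> "b"
```

Note the asymmetry: `fit` does almost nothing beyond copying the data, while every call to `predict` scans the entire training set.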
Expanding this a little: select the k most similar neighbors and produce the output by a majority vote among them, or by a linear (similarity-weighted) fusion of their labels; such a model is called the K-nearest neighbor (KNN) model. In practical applications, the robustness of K-nearest neighbor is much better than that of the plain nearest neighbor. In fact, when the neighbors' similarities are fused in as voting weights, the scheme is quite similar in spirit to the Monte Carlo method in statistical analysis.
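Here is a sketch of the weighted-vote variant (the inverse-distance weighting and the small epsilon are my own illustrative choices; other weighting schemes work equally well):

```python
import numpy as np
from collections import Counter

def knn_predict(X, y, x, k=3):
    """Distance-weighted k-NN: closer neighbors cast heavier votes."""
    X = np.asarray(X, dtype=float)
    dists = np.linalg.norm(X - np.asarray(x, dtype=float), axis=1)
    idx = np.argsort(dists)[:k]            # the k closest stored samples
    weights = 1.0 / (dists[idx] + 1e-12)   # epsilon guards against division by zero
    votes = Counter()
    for i, w in zip(idx, weights):
        votes[y[i]] += w                   # accumulate weighted votes per label
    return votes.most_common(1)[0][0]

points = [[0, 0], [0, 1], [3, 3], [3, 4]]
labels = ["a", "a", "b", "b"]
print(knn_predict(points, labels, [0.5, 0.5], k=3))  # -> "a"
```

Setting all the weights to 1 recovers the plain majority vote.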
***********************************
2015-8-7
Copyright notice: This is an original article by the blog author; please do not reproduce it without the author's permission.
Nearest neighbor and K-nearest neighbor: the basic algorithm ideas