KNN is one of the simplest classification methods. The idea is: if the majority of the k most similar samples in feature space (that is, the k nearest neighbors) belong to a certain category, then the sample belongs to that category as well. In KNN, the selected neighbors are all objects that have already been correctly classified; the method decides the category of a sample to be classified based only on the classes of its one or more nearest neighbors. References:
http://www.cnblogs.com/biyeymyhjob/archive/2012/07/24/2607011.html
http://blog.csdn.net/aladdina/article/details/4141127
(Most write-ups of KNN found online are essentially the same as these two.)
Since the KNN method relies mainly on a limited number of neighboring samples, rather than on discriminating class domains, to determine the category, it is better suited than other methods to sample sets whose class domains intersect or overlap heavily.
The main disadvantage of this algorithm for classification is that when the samples are unbalanced, for example when one class is very large while the others are small, the large class may dominate the k neighbors of a new sample simply by outnumbering the rest. This can be mitigated by weighting the vote, giving neighbors that are closer to the sample a larger weight. Another disadvantage is the computational cost: for every sample to be classified, the distances to all known samples must be computed before its k nearest neighbors can be found. A common remedy is to edit the known sample points in advance, removing samples that contribute little to classification. The algorithm is therefore better suited to automatic classification of class domains with large sample sizes; on class domains with small sample sizes it is prone to misclassification.
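The distance-weighted vote mentioned above can be sketched as follows. This is a minimal illustration, not the post's own code; the function name and the small epsilon guard are my own choices.

```python
from collections import defaultdict

def weighted_knn_vote(neighbors):
    """Vote among the k nearest neighbors, weighting each neighbor by
    the inverse of its distance so that closer points count more.

    `neighbors` is a list of (distance, label) pairs."""
    scores = defaultdict(float)
    for dist, label in neighbors:
        scores[label] += 1.0 / (dist + 1e-9)  # epsilon avoids division by zero
    return max(scores, key=scores.get)

# A minority class that sits very close to the query can still win:
print(weighted_knn_vote([(0.5, "A"), (3.0, "B"), (3.5, "B")]))  # prints A
```

With plain majority voting the two "B" neighbors would win; the inverse-distance weights let the single nearby "A" neighbor outweigh them.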
Now consider the implementation.
KNN classifies a new input point based on the existing classified points: compute the distances to all known points and keep the k smallest.
To track the k smallest distances, use a max-heap.
Compare each new distance with the heap top; if it is smaller than the heap top, pop the top and push the new element into the heap.
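The max-heap bookkeeping above can be sketched like this. Python's `heapq` is a min-heap, so a common trick (used here, and an assumption of this sketch rather than anything from the original post) is to store negated distances so the heap top is the largest of the k candidates.

```python
import heapq

def k_nearest(points, query, k):
    """Return the k points nearest to `query` as (distance, point) pairs,
    using a max-heap of size k (distances stored negated for heapq)."""
    heap = []  # entries are (-distance, point)
    for p in points:
        d = sum((a - b) ** 2 for a, b in zip(p, query)) ** 0.5
        if len(heap) < k:
            heapq.heappush(heap, (-d, p))
        elif d < -heap[0][0]:              # closer than the current k-th nearest
            heapq.heapreplace(heap, (-d, p))  # pop top, push new element
    return sorted((-nd, p) for nd, p in heap)

pts = [(0, 0), (1, 1), (5, 5), (2, 2), (9, 9)]
print(k_nearest(pts, (0, 0), 3))  # the three points closest to the origin
```

This keeps the memory cost at O(k) and the total cost at O(n log k), instead of sorting all n distances.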
K-means and k-nearest neighbors are among the simplest machine learning algorithms: once you know the procedure, you can write them without any complex data structures.
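For completeness, the k-means procedure can be sketched in a few lines as well. This is a minimal version under the usual assumptions (random initialization, Euclidean distance, a fixed iteration count); the function name and defaults are mine, not the post's.

```python
import random

def kmeans(points, k, iters=20):
    """Minimal k-means: pick k random points as initial centroids, then
    alternate between assigning each point to its nearest centroid and
    moving each centroid to the mean of its cluster."""
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the nearest centroid (squared distance suffices)
            i = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                            + (p[1] - centroids[i][1]) ** 2)
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster goes empty
                centroids[i] = (sum(x for x, _ in c) / len(c),
                                sum(y for _, y in c) / len(c))
    return centroids, clusters
```

A fixed iteration count is the simplest stopping rule; a production version would stop when the assignments no longer change.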
Both algorithms are now implemented; the test results are as follows.
The solid points in the figure are randomly generated data points; after clustering by k-means, different clusters are marked with different colors. The large circles are randomly generated test points, which are assigned to the existing clusters by the k-nearest-neighbor algorithm. As the figure shows, the results are good.
This is only the simplest implementation. For more complex situations these algorithms need some modifications, but those are not studied or implemented here.
Next, we will again start from something simple.