K Nearest Neighbor Algorithm--KNN


The core idea of the KNN (K-Nearest Neighbor) algorithm is simple: if the majority of the K samples nearest to a new sample in feature space belong to a particular category, then the new sample is assigned to that category and is assumed to share the characteristics of that category. The classification decision therefore depends only on the categories of one or more neighboring samples. A distinctive property of the method is that it requires no explicit training phase, which makes it easy to understand and easy to implement.

In KNN, the distance between objects is used as the measure of similarity between them; the Euclidean distance or the Manhattan distance is generally used:
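For feature vectors $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_n)$, these two distances are:

$$d_{\text{Euclidean}}(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}, \qquad d_{\text{Manhattan}}(x, y) = \sum_{i=1}^{n} |x_i - y_i|$$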

The whole KNN process can be described as follows: given a test sample, compare its features with the corresponding features of every sample in the training set, find the K training samples most similar to it, and assign the test sample to the category that occurs most often among those K samples. In more detail, the algorithm is:

1. Compute the distance between the test sample and every training sample.
2. Select the K training samples with the smallest distances.
3. Count the frequency of each category among these K samples.
4. Return the category with the highest frequency as the predicted classification of the test sample.
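A minimal sketch of these steps in Python, using Euclidean distance and majority voting (the function and variable names are illustrative, not from the original text):

```python
import numpy as np
from collections import Counter

def knn_classify(test_point, train_data, train_labels, k=3):
    """Classify test_point by majority vote among its k nearest
    training samples, using Euclidean distance."""
    # Step 1: distance from the test point to every training sample.
    distances = np.linalg.norm(train_data - test_point, axis=1)
    # Step 2: indices of the k smallest distances.
    nearest = np.argsort(distances)[:k]
    # Steps 3-4: majority vote over the labels of the k nearest neighbors.
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Example usage with toy 2-D data (hypothetical values).
train_data = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
train_labels = ["A", "A", "B", "B"]
print(knn_classify(np.array([0.1, 0.2]), train_data, train_labels, k=3))  # -> "B"
```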

From the way KNN approaches classification, the main problem arises when the samples are imbalanced: if one class has a very large sample capacity while the other classes are small, then when a new sample is entered, its K nearest neighbors are likely to be dominated by the large-capacity class. Yet the algorithm is only supposed to be judging by the "nearest" neighbors: whether a class has many samples or few, those samples are either close to the target sample or they are not, so quantity by itself ought not to decide the result. In this experiment the flaw was visible because a few categories contained very little text. Another disadvantage is the large computational cost: each text to be classified requires computing its distance to all known samples in order to find its K nearest neighbors.
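The original text does not spell out a remedy, but a common mitigation for the imbalance problem is to weight each neighbor's vote by the inverse of its distance, so that near neighbors count more than members of a merely numerous class. A minimal sketch building on the function above (the helper name and the epsilon constant are my own choices):

```python
def knn_classify_weighted(test_point, train_data, train_labels, k=3):
    """Like knn_classify, but each neighbor's vote is weighted by the
    inverse of its distance, so close neighbors dominate the decision."""
    distances = np.linalg.norm(train_data - test_point, axis=1)
    nearest = np.argsort(distances)[:k]
    weights = Counter()
    for i in nearest:
        # Small epsilon avoids division by zero when the test point
        # coincides exactly with a training sample.
        weights[train_labels[i]] += 1.0 / (distances[i] + 1e-9)
    return weights.most_common(1)[0][0]
```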
