Python realization of the KNN classification algorithm

1. KNN algorithm analysis

The k-nearest neighbor (k-Nearest Neighbor, KNN) classification algorithm is arguably the simplest machine learning algorithm. It classifies by measuring the distance between feature values. Its idea is simple: if most of the K samples most similar to a given sample in feature space (that is, its nearest neighbors in feature space) belong to a certain category, then the sample also belongs to that category.
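As a small illustration of "measuring the distance between feature values", the snippet below computes a Euclidean distance between two feature vectors with NumPy; the function name and the sample vectors are illustrative assumptions, not code from the original article.

import numpy as np

def euclidean_distance(a, b):
    # Euclidean distance between two feature vectors (illustrative helper).
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sqrt(np.sum((a - b) ** 2))

print(euclidean_distance([1.0, 1.1], [0.9, 1.0]))  # about 0.141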

In the KNN algorithm, the selected neighbors are all objects that have already been correctly classified: the method assigns a sample to a category based only on the category of the nearest one or a few samples in the classification decision. Because the KNN method relies mainly on a limited number of surrounding samples rather than on discriminating class domains, it is better suited than other methods to sample sets whose class domains cross or overlap heavily.

The main disadvantage of the algorithm appears when the samples are unbalanced, that is, when one class has a very large sample size and another a very small one: when a new sample is entered, the large-capacity class may dominate among its K nearest neighbors. This can be improved by weighting, giving larger weights to neighbors that are closer to the sample. Another disadvantage is the large amount of computation, because the distance from each sample to be classified to every known sample must be computed in order to find its K nearest neighbors. A common remedy is to edit the known sample points in advance, removing those that contribute little to classification. The algorithm is therefore best suited to automatic classification of class domains with large sample sizes, while class domains with small sample sizes are more prone to misclassification [see the "top ten machine learning algorithms"].
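One way to realize the weighting just mentioned is to weight each neighbor's vote by the inverse of its distance, so that closer neighbors count for more. The sketch below assumes this inverse-distance weight and uses illustrative function and variable names, neither of which comes from the original article.

import numpy as np
from collections import defaultdict

def weighted_knn_predict(train_x, train_labels, x, k=3):
    # Distance-weighted KNN vote: nearer neighbors get larger weights,
    # which dampens the influence of an over-represented class.
    train_x = np.asarray(train_x, dtype=float)
    dists = np.sqrt(((train_x - np.asarray(x, dtype=float)) ** 2).sum(axis=1))
    nearest = dists.argsort()[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[train_labels[i]] += 1.0 / (dists[i] + 1e-8)  # inverse-distance weight
    return max(votes, key=votes.get)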

In short, we already have a labeled sample data set; when new, unlabeled data comes in, we compare each feature of the new data with the corresponding feature of every sample in the set, and the algorithm extracts the category labels of the most similar (nearest-neighbor) samples. In general, only the first K most similar samples in the data set are considered, and the category that occurs most frequently among those K samples is chosen. The algorithm can be described as follows:

1) Calculate the distance between each point in the data set of known categories and the current point;

2) Sort the points in order of increasing distance;

3) Select the K points closest to the current point;

4) Determine the frequency of each category among these K points;

5) Return the category with the highest frequency among these K points as the predicted classification of the current point.
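A minimal Python realization of these five steps might look like the following; the function name, the tiny sample data set, and the labels are illustrative assumptions rather than code taken from the original article.

import numpy as np
from collections import Counter

def knn_classify(x, data_set, labels, k):
    # Classify one point x against a labeled data set, following the five steps above.
    data_set = np.asarray(data_set, dtype=float)
    x = np.asarray(x, dtype=float)
    # 1) Distances from the current point to every known point (Euclidean).
    distances = np.sqrt(((data_set - x) ** 2).sum(axis=1))
    # 2) Sort by increasing distance and 3) take the K closest points.
    k_nearest = distances.argsort()[:k]
    # 4) Count how often each category appears among those K points.
    votes = Counter(labels[i] for i in k_nearest)
    # 5) Return the most frequent category as the predicted classification.
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    data = [[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]]
    labels = ["A", "A", "B", "B"]
    print(knn_classify([0.1, 0.1], data, labels, k=3))  # expected: B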
