KNN algorithm, and a simple comparison with Kmeans

Source: Internet
Author: User

KNN and Kmeans have little to do with each other, but their names are similar, so it is worth summarizing them together.

A beginner's summary.

KNN is supervised learning, Kmeans is unsupervised learning.

KNN is used for classification and Kmeans for clustering.

First, KNN:

For KNN, we start with a batch of training samples, each labeled with its class. Each sample in this batch is converted to a vector representation, and then a method of measuring vector distance is chosen, such as Euclidean distance, Manhattan distance, or cosine similarity. Call this batch of samples W.
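The three distance measures mentioned above can be written in a few lines of plain Python; the two sample vectors here are invented for illustration.

```python
# Three common ways to measure distance/similarity between vectors.
import math

def euclidean(a, b):
    # Straight-line distance: sqrt of the sum of squared differences.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences ("city block" distance).
    return sum(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    # Cosine of the angle between the vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

a, b = [1.0, 2.0], [4.0, 6.0]
print(euclidean(a, b))   # 5.0
print(manhattan(a, b))   # 7.0
print(cosine_similarity(a, b))
```

Note that cosine measures similarity (larger means more alike), whereas the other two measure distance (smaller means more alike).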

Then, for a sample s to be classified, select the k samples in W that are nearest to s. Whichever class appears most often among these k samples becomes the predicted class of s.
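The procedure above can be sketched directly: rank the samples in W by distance to s, then take a majority vote among the k nearest. The toy data and function name here are invented for illustration.

```python
# Minimal KNN classifier: Euclidean distance + majority vote among k nearest.
import math
from collections import Counter

def knn_classify(s, W, labels, k=3):
    # Indices of training samples, ordered by distance to the query s.
    order = sorted(range(len(W)), key=lambda i: math.dist(s, W[i]))
    # Count the class labels of the k nearest neighbours.
    votes = Counter(labels[i] for i in order[:k])
    # The most common class among them wins.
    return votes.most_common(1)[0][0]

W = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_classify((2, 2), W, labels, k=3))  # A
print(knn_classify((9, 9), W, labels, k=3))  # B
```

Choosing an odd k avoids ties in two-class problems.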

The advantages and disadvantages of KNN:

The advantages of KNN:

1. No assumptions about the input data; for example, it does not assume the data follows a particular distribution.

2. The algorithm is simple, intuitive, and easy to implement.

3. Not sensitive to outliers.

4. Can be used for numerical data as well as discrete data.

The disadvantages of KNN:

1. High computational complexity, though this can be improved with structures such as the KD-tree or ball tree.

2. Heavily dependent on the training sample set. There seems to be no fix for this other than obtaining the best training set possible.

3. The choice of distance metric and of the value of k has a relatively large impact. The KNN algorithm must specify k; if k is chosen poorly, classification precision cannot be guaranteed.

4. Compared with decision-tree induction and neural-network methods, the traditional nearest-neighbor classifier treats every attribute as equally important (it gives them all the same weight). The distance between samples is computed over all features (attributes) of the sample. Among these features, some are strongly correlated with the class, some are weakly correlated, and some (perhaps most) are irrelevant. Computing similarity over all features equally can therefore be misleading.

The improvement direction of KNN:

Improvements to the KNN classification algorithm fall into four directions: speeding up classification, maintaining the training sample library, optimizing the distance formula used for similarity, and determining the value of k. So far I only know about speeding up classification, via the KD-tree, ball tree, and so on. The book Machine Learning in Action suggests that k should not exceed 20.
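To give a flavor of the KD-tree speedup, here is a minimal 2-D KD-tree with a single nearest-neighbor query; the point set is invented for illustration. Instead of comparing the query against every sample, the tree splits space by alternating axes and prunes subtrees that cannot contain a closer point.

```python
# Minimal 2-D KD-tree: build by alternating split axes, then search with pruning.

def dist2(a, b):
    # Squared Euclidean distance (no sqrt needed for comparisons).
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build(points, depth=0):
    if not points:
        return None
    axis = depth % 2  # alternate x/y splitting axis for 2-D data
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist2(point, target) < dist2(best, target):
        best = point
    diff = target[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff <= 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    # Only search the far side if the splitting plane is closer than the best so far.
    if diff * diff < dist2(best, target):
        best = nearest(far, target, best)
    return best

points = [(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)]
tree = build(points)
print(nearest(tree, (9, 2)))  # (8, 1)
```

For k > 1 neighbors, the same idea applies with a bounded priority queue of the k best candidates.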

