Principle
The KNN algorithm, also known as the K-nearest neighbor algorithm, works as follows: given a training set whose data and labels are known, a test point is classified by comparing its features with the corresponding features of the training set, finding the K training samples most similar to it, and assigning the test point the category that occurs most often among those K samples. The algorithm is described as:
- 1) Calculate the distance between the test data and each training data point;
- 2) Sort by increasing distance;
- 3) Select the K points with the smallest distances;
- 4) Count the frequency of each category among these K points;
- 5) Return the most frequent category among the K points as the predicted classification of the test data.
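The five steps above can be sketched in plain Python (the function and variable names here are my own, not from the text):

```python
import math
from collections import Counter

def knn_classify(test_point, train_data, train_labels, k=3):
    """Classify test_point by majority vote among its k nearest training points."""
    # 1) Euclidean distance from the test point to every training point
    distances = [
        (math.dist(test_point, x), label)
        for x, label in zip(train_data, train_labels)
    ]
    # 2) sort by increasing distance; 3) keep the k closest
    k_nearest = sorted(distances, key=lambda d: d[0])[:k]
    # 4) count category frequencies among the k neighbors
    votes = Counter(label for _, label in k_nearest)
    # 5) return the most frequent category
    return votes.most_common(1)[0][0]
```

For example, with two well-separated clusters labeled `'a'` and `'b'`, a test point near the `'a'` cluster is voted into class `'a'` by its three nearest neighbors.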
Three elements:
- Selection of K-values
- Distance metric (common metrics include Euclidean distance, Manhattan distance, etc.)
- Classification decision rule (usually majority voting)
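The two common distance metrics can be computed directly from their definitions; a minimal example on two 2-D points:

```python
import math

p, q = (1.0, 2.0), (4.0, 6.0)

# Euclidean (L2) distance: straight-line distance
euclidean = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))  # 5.0

# Manhattan (L1) distance: sum of absolute coordinate differences
manhattan = sum(abs(a - b) for a, b in zip(p, q))  # 7.0
```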
Selection of K-values
- A smaller K value means a more complex model that is more prone to overfitting
- A larger K value means a simpler model; in the extreme case k = n, every test point is simply assigned the class that is most common in the training set
So in practice K is usually taken to be a fairly small value, determined by cross-validation.
Here cross-validation means splitting the sample into a training part and a validation part, e.g. 95% for training and 5% for validation, then trying k = 1, 2, 3, 4, 5 and so on, measuring the classification error for each, and selecting the K with the smallest error.
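The hold-out procedure described above can be sketched as follows (a simplified single-split validation, as in the text; all names are illustrative):

```python
import math
from collections import Counter

def knn_classify(test_point, train_data, train_labels, k):
    """Majority vote among the k nearest training points."""
    dists = sorted(
        (math.dist(test_point, x), y) for x, y in zip(train_data, train_labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

def choose_k(data, labels, candidate_ks, holdout_frac=0.05):
    """Hold out a fraction of the data, then pick the k with the lowest error."""
    n_holdout = max(1, int(len(data) * holdout_frac))
    train_x, holdout_x = data[:-n_holdout], data[-n_holdout:]
    train_y, holdout_y = labels[:-n_holdout], labels[-n_holdout:]
    best_k, best_err = None, float("inf")
    for k in candidate_ks:
        # count misclassifications on the held-out 5%
        errors = sum(
            knn_classify(x, train_x, train_y, k) != y
            for x, y in zip(holdout_x, holdout_y)
        )
        if errors < best_err:
            best_k, best_err = k, errors
    return best_k
```

Full k-fold cross-validation would repeat this over several different splits and average the errors, but the selection logic is the same.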
KNN for regression
After finding the nearest K instances, the average of their target values can be used as the prediction. Alternatively, each of the K instances can be given a weight inversely proportional to its distance (the closer the point, the larger its weight) and a weighted average taken.
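Both variants, plain and distance-weighted averaging, can be sketched in one small function (names are illustrative):

```python
import math

def knn_regress(test_point, train_data, train_values, k=3, weighted=True):
    """Predict by averaging the target values of the k nearest neighbors.

    With weighted=True each neighbor gets weight 1/distance, so closer
    points count more (a tiny epsilon avoids division by zero).
    """
    nearest = sorted(
        (math.dist(test_point, x), y) for x, y in zip(train_data, train_values)
    )[:k]
    if not weighted:
        return sum(y for _, y in nearest) / k
    weights = [1.0 / (d + 1e-9) for d, _ in nearest]
    return sum(w * y for w, (_, y) in zip(weights, nearest)) / sum(weights)
```

For instance, on training points lying on the line y = 2x, predicting at x = 2.5 with k = 2 averages the neighbors at x = 2 and x = 3 and yields 5.0 under either scheme (the two neighbors are equidistant).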
Advantages and Disadvantages
Advantages of the KNN algorithm:
- Simple idea and mature theory; can be used for both classification and regression;
- Can handle nonlinear classification;
- Training time complexity is O(n) (the training data is simply stored);
- High accuracy; makes no assumptions about the data; not sensitive to outliers;
Disadvantages:
- High computational cost at prediction time (distances to all training samples must be computed);
- Sensitive to sample imbalance (when some categories have many samples and others very few, the large categories dominate the vote);
- Requires a lot of memory (the entire training set must be kept);