Review summary of K-Nearest Neighbor (KNN)

Source: Internet
Author: User

Summary:

1. Algorithm overview

2. Algorithm derivation

3. Algorithm features, advantages, and disadvantages

4. Precautions

5. Implementation and specific examples

6. Applicable scenarios

Content:

1. Algorithm overview

The K-Nearest Neighbor (KNN) algorithm is a basic classification and regression method: given a new instance, it finds the K nearest training instances and predicts the category by majority voting (or, for regression, by averaging). In effect, the K-Nearest Neighbor method uses the training data set to partition the feature vector space, and this partition serves as the "model" for classification. (Cover and Hart, 1968) -- reference: Statistical Learning Methods
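To make the overview concrete, here is a minimal brute-force KNN classifier sketch in Python/NumPy; the function and variable names are illustrative, not taken from the referenced books.

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x, k=3):
        """Classify x by majority vote among its k nearest training points."""
        # Euclidean distance from x to every training point
        dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
        # Indices of the k smallest distances
        nearest = np.argsort(dists)[:k]
        # Majority vote over the neighbors' labels
        return Counter(y_train[nearest]).most_common(1)[0][0]

    # Toy example: two 2-D clusters
    X_train = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
    y_train = np.array(["A", "A", "B", "B"])
    print(knn_predict(X_train, y_train, np.array([0.9, 0.8]), k=3))  # -> "A"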

2. Algorithm derivation

2.1 KNN three elements

K value selection: when K is small, the prediction is very sensitive to the nearby instance points and the model is prone to overfitting; when K is too large, the model becomes overly simple and is prone to underfitting. In practice K is usually an integer not greater than 20 (see Machine Learning in Action).

Distance metric: different distance metrics (e.g., Euclidean, Manhattan, or the general Lp/Minkowski distance) can select different nearest neighbors and therefore change the prediction; the formulas are sketched after this list.

Classification decision rule: the majority voting rule; under the 0-1 loss function, majority voting is equivalent to empirical risk minimization.
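To formalize the last two elements (standard definitions, consistent with Statistical Learning Methods), in LaTeX:

    % Lp (Minkowski) distance between feature vectors x_i, x_j in R^n;
    % p = 2 gives the Euclidean distance, p = 1 the Manhattan distance.
    L_p(x_i, x_j) = \Bigl( \sum_{l=1}^{n} \bigl| x_i^{(l)} - x_j^{(l)} \bigr|^p \Bigr)^{1/p}

    % Majority voting minimizes the empirical 0-1 risk over the
    % neighborhood N_k(x): the misclassification rate is smallest
    % exactly when c is the most frequent label among the k neighbors.
    \arg\min_{c} \frac{1}{k} \sum_{x_i \in N_k(x)} I(y_i \neq c)
      \;=\; \arg\max_{c} \sum_{x_i \in N_k(x)} I(y_i = c)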

2.2 kd tree: a binary tree that supports fast K-nearest-neighbor search. Constructing a kd tree is equivalent to repeatedly splitting the k-dimensional space with hyperplanes perpendicular to the coordinate axes, producing a series of k-dimensional hyper-rectangular regions; each node corresponds to one such hyper-rectangular region. Typically the splitting axis is chosen cyclically and the split point is the median along that axis, which yields a balanced kd tree; note that a balanced tree is not necessarily optimal in search efficiency. -- reference: Statistical Learning Methods
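Below is a minimal sketch of the median-split construction just described (cyclic axis choice, split at the median); the class and function names are illustrative. The sample points are the classic kd-tree example data used in Statistical Learning Methods.

    class Node:
        def __init__(self, point, axis, left, right):
            self.point = point    # the splitting point stored at this node
            self.axis = axis      # coordinate axis this node splits on
            self.left = left      # subtree with smaller coordinates on axis
            self.right = right    # subtree with larger coordinates on axis

    def build_kdtree(points, depth=0):
        """Recursively build a kd tree: cycle through the axes and
        split at the median, which yields a balanced tree."""
        if not points:
            return None
        k = len(points[0])            # dimensionality of the space
        axis = depth % k              # cycle through the axes
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2        # median index -> balanced tree
        return Node(points[mid], axis,
                    build_kdtree(points[:mid], depth + 1),
                    build_kdtree(points[mid + 1:], depth + 1))

    tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
    print(tree.point)  # (7, 2) is the root split on the first axis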

3. Algorithm features, advantages, and disadvantages

Advantages: High accuracy, insensitive to outliers

Disadvantages: sensitive to the choice of K; high space complexity (all training data must be stored); high time complexity for brute-force search (O(M) per query, where M is the number of training samples; a kd tree reduces the average query cost to O(log M)).

4. Precautions

Normalization: KNN is based on distances between feature vectors, so features must be normalized; otherwise features with large numeric ranges dominate the distance calculation and distort the result.
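A minimal min-max normalization sketch (the helper name is hypothetical; it mirrors the autoNorm idea in Machine Learning in Action), with toy data:

    import numpy as np

    def min_max_normalize(X):
        """Rescale each feature column to [0, 1] so that no single
        feature dominates the distance calculation."""
        mins = X.min(axis=0)
        ranges = X.max(axis=0) - mins
        ranges[ranges == 0] = 1.0     # guard against constant columns
        return (X - mins) / ranges

    X = np.array([[40920.0, 8.3], [14488.0, 7.2], [26052.0, 1.4]])
    print(min_max_normalize(X))       # both columns now lie in [0, 1]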

5. Implementation and specific examples

kd tree for nearest-neighbor search (Statistical Learning Methods, Algorithm 3.3)
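The book states Algorithm 3.3 in pseudocode; as an off-the-shelf substitute (not the book's own implementation), SciPy's cKDTree performs the same kind of kd-tree nearest-neighbor search. The data below reuse the construction example above, and the query point follows the book's worked search example:

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.array([[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]])
    tree = cKDTree(points)

    # Query the single nearest neighbor of the target point (3, 4.5).
    dist, idx = tree.query([3, 4.5], k=1)
    print(points[idx], dist)  # nearest point and its Euclidean distance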

Examples from Machine Learning in Action: improving matches on a dating site and handwritten digit recognition (plain NumPy implementation, without a kd tree)

Implementation and specific examples in Scikit-learn
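A short scikit-learn sketch that also illustrates choosing K by cross-validation, in line with the advice in section 2.1; the dataset and parameter values are illustrative:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)

    # Try odd K values up to 20, per the rule of thumb in section 2.1,
    # and keep the one with the best cross-validated accuracy.
    best_k = max(
        range(1, 21, 2),
        key=lambda k: cross_val_score(
            KNeighborsClassifier(n_neighbors=k), X, y, cv=5
        ).mean(),
    )
    clf = KNeighborsClassifier(n_neighbors=best_k, algorithm="kd_tree").fit(X, y)
    print(best_k, clf.predict(X[:2]))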

6. Applicable scenarios

Support for large-scale data: on a single machine, time and space consumption is large, but distributed solutions exist (a Spark-based KNN implementation was found on GitHub; to be studied later)

Feature dimension

Whether there is an online algorithm: there should be one (to be determined)

Feature processing: numeric data is supported directly; categorical features require 0-1 (one-hot) encoding
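A minimal 0-1 (one-hot) encoding sketch with scikit-learn, matching the last point; the feature values are illustrative:

    from sklearn.preprocessing import OneHotEncoder

    # A single categorical feature with three categories.
    colors = [["red"], ["green"], ["blue"], ["green"]]
    enc = OneHotEncoder()
    print(enc.fit_transform(colors).toarray())
    # Each row becomes a 0-1 vector over the sorted categories
    # (blue, green, red), e.g. "red" -> [0., 0., 1.]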

  
