K Nearest Neighbor Algorithm: Weighted KNN


Weighted KNN

  The previous article mentioned weighting each point by its distance, so that closer points receive greater weight. This article describes how those weights are computed.

Inverse function

  The simplest method is to return the reciprocal of the distance: for a distance d, the weight is 1/d. However, when a sample is identical or extremely close to the query point, this weight becomes very large or even infinite. For this reason, a constant is added to the distance before taking the reciprocal:

Weight = 1/(distance + const)

The potential problem with this approach is that it assigns very large weights to the nearest neighbors, while the weight decays quickly for points only slightly farther away. Although this behavior may be desirable, it can sometimes make the algorithm overly sensitive to noisy data.
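
As a minimal sketch (the function name inverse_weight and the default constant are my own choices, not from the article), the inverse weighting could be written as:

def inverse_weight(dist, const=1.0):
    # Reciprocal of the distance; the added constant keeps the
    # weight finite when dist is 0 (an identical sample)
    return 1.0 / (dist + const)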

Gaussian function

The Gaussian function is more complex but overcomes the drawback of the preceding function. Its form is:

weight = a * e^(-(dist - b)^2 / (2 * c^2)), where a, b, c ∈ R

The graph of the Gaussian function is bell-shaped. Here a is the height of the curve's peak, b is the offset of the curve's center along the x-axis, and c controls the half-peak width (the width of the curve at half its peak height).

(Figure: a bell-shaped Gaussian curve annotated with its half-peak width.)

import math

def gaussian(dist, a=1, b=0, c=0.3):
    # Bell-shaped weight: equals a when dist == b, decays as dist moves away
    return a * math.e ** (-(dist - b) ** 2 / (2 * c ** 2))

With the Gaussian function above, the weight is 1 when the distance is 0, and the weight decreases as the distance increases, but it never reaches 0. This is the difference between the Gaussian function and the other functions, whose weights drop to 0 or below once the distance grows large enough.
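
As an illustrative comparison (using the inverse_weight and gaussian functions sketched above, with made-up distances), the Gaussian weight shrinks smoothly yet stays positive:

# Compare the two weighting schemes at a few illustrative distances
for d in [0.0, 0.1, 0.5, 1.0, 2.0]:
    print(d, inverse_weight(d), gaussian(d))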

Calculation process

Weighted KNN first sorts the samples by distance, then takes the nearest k elements.

1. When dealing with discrete (classification) data, the k nearest neighbors vote with their weights: each neighbor contributes its weight to its own label, and the label with the largest total weight is taken as the prediction.

2. When dealing with numerical (regression) data, the k values are not simply averaged but weighted-averaged: multiply each neighbor's value by its weight, sum the products, then divide by the sum of the weights (see the code sketch below).

f(x) = (w1*y1 + w2*y2 + ... + wk*yk) / (w1 + w2 + ... + wk)

where di is the distance between the i-th nearest neighbor and the query point x, wi is the weight computed from di, yi is that neighbor's value, and f(x) is the predicted value. Note that predicting the category of each new sample requires traversing the entire training set, so KNN is in fact quite inefficient.
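
To make both cases concrete, here is a minimal sketch assuming the distances to all training samples have already been computed; the helper names weighted_classify and weighted_regress, and the choice of the gaussian function defined earlier as the weight function, are illustrative rather than from the original article:

def weighted_classify(distances, labels, k=3, weight_fn=gaussian):
    # Sort neighbors by distance and keep the k nearest
    nearest = sorted(zip(distances, labels))[:k]
    votes = {}
    for dist, label in nearest:
        # Each neighbor votes for its own label with its weight
        votes[label] = votes.get(label, 0.0) + weight_fn(dist)
    # The label with the largest total weight is the prediction
    return max(votes, key=votes.get)

def weighted_regress(distances, values, k=3, weight_fn=gaussian):
    # Weighted average of the k nearest values
    nearest = sorted(zip(distances, values))[:k]
    total = sum(weight_fn(d) * v for d, v in nearest)
    weight_sum = sum(weight_fn(d) for d, _ in nearest)
    return total / weight_sum

For example, weighted_regress([0.1, 0.4, 0.9], [10, 20, 30], k=2) returns about 13, pulled toward the nearest neighbor's value of 10 rather than the plain average of 15.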


Source: http://www.cnblogs.com/bigmonkey

This article is intended for study, research, and sharing. If you need to reprint it, please contact me and credit the author and the source. Non-commercial use only!
