Outlier detection method based on nearest neighbor

Source: Internet
Author: User

Today I'll write up a neighbor-based outlier detection method. Because of the curse of dimensionality, this method should be used cautiously on high-dimensional data. It is also not well suited to data containing multiple clusters with large density differences.

The idea of this method is as follows:

1) For each sample, count the number of neighbors within a given radius.

2) If a sample's neighbor count is less than the specified minimum number of neighbors, minpts, mark it as an outlier.
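The two steps above can be sketched with a brute-force pairwise-distance computation. This is a minimal illustration, not the article's implementation; the function name `count_neighbors_brute` and the toy points are my own:

```python
import numpy as np

def count_neighbors_brute(X, radius):
    # Pairwise Euclidean distances between all samples.
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Count points within radius; subtract 1 so a point is not its own neighbor.
    return (dists <= radius).sum(axis=1) - 1

# A tight cluster of four points plus one far-away point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [5.0, 5.0]])
counts = count_neighbors_brute(X, radius=0.5)
outliers = counts < 3  # minpts = 3
```

Each cluster point has 3 neighbors within the radius, while the isolated point has none, so only it falls below minpts.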

Generally speaking, outliers are few in number and their values differ markedly from normal points; otherwise it is more of a clustering problem. So I set the default minpts to 3. What about radius? In sklearn.cluster there is a function estimate_bandwidth that estimates the bandwidth for MeanShift. Its main job is to compute, for each point, the maximum distance to its nearest neighbors (the nearest quantile fraction of the data), and then average that maximum nearest-neighbor distance over all points:

    nbrs = NearestNeighbors(n_neighbors=int(X.shape[0] * quantile))
    nbrs.fit(X)
    bandwidth = 0.
    for batch in gen_batches(len(X), 500):
        d, _ = nbrs.kneighbors(X[batch, :], return_distance=True)
        bandwidth += np.max(d, axis=1).sum()
    return bandwidth / X.shape[0]

The default quantile of estimate_bandwidth is 0.3, i.e. the average maximum distance to the nearest 30% of neighbors. With such a large fraction of neighbors, and taking the maximum distance among them, this should work as an estimate of our radius. Of course, the user can also supply radius directly.
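As a quick sanity check, estimate_bandwidth can be called directly on a sample matrix (the random data here is just for illustration):

```python
import numpy as np
from sklearn.cluster import estimate_bandwidth

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))

# Average maximum distance to the nearest 30% of neighbors.
radius = estimate_bandwidth(X, quantile=0.3)
```

For any non-degenerate data the estimated bandwidth is a positive scalar, which we can use as the default radius below.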

Then the radius-based neighbor counts are computed. Here we build on sklearn. Since sklearn's radius_neighbors includes the query point itself, we subtract 1 from each neighbor count (minus itself):

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def calNeighborsByRadius(X, radius):
        nbrs = NearestNeighbors(radius=radius)
        nbrs.fit(X)
        indices = nbrs.radius_neighbors(X, radius, return_distance=False)
        # Each point's neighborhood includes itself, so subtract 1.
        neighborCounts = np.array([len(i) - 1 for i in indices])
        return neighborCounts
   

Then, comparing the neighbor counts against minpts tells us which points are outliers:

    def detectOutliers(X, radius=None, minpts=3):
        if radius is None:
            radius = estimate_bandwidth(X)
        neighborCounts = calNeighborsByRadius(X, radius)
        return np.array(neighborCounts < minpts, dtype=int)

Finally, using the example from the previous article, let's see how the method performs. Green marks the outliers, and all of them are caught. Here the data was min-max normalized. In my personal experience, for outlier-detection scenarios, min-max normalization reflects outlier problems more faithfully than z-score standardization.
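Since the article's original dataset isn't reproduced here, the full pipeline can be exercised end to end on a toy dataset of my own making (one tight cluster plus a single far-away point), including the min-max normalization step:

```python
import numpy as np
from sklearn.cluster import estimate_bandwidth
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import MinMaxScaler

# Toy data (an assumption, not the article's dataset):
# a tight cluster plus one isolated point appended at the end.
rng = np.random.RandomState(42)
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)), [[8.0, 8.0]]])

# Min-max normalization, as recommended in the text.
X_scaled = MinMaxScaler().fit_transform(X)

# Estimate the radius, count radius neighbors (excluding self),
# and flag points with fewer than minpts neighbors.
radius = estimate_bandwidth(X_scaled)
nbrs = NearestNeighbors(radius=radius).fit(X_scaled)
indices = nbrs.radius_neighbors(X_scaled, radius, return_distance=False)
counts = np.array([len(i) - 1 for i in indices])
labels = (counts < 3).astype(int)  # minpts = 3
```

The isolated point sits far outside the estimated radius of any cluster point, so it ends up with zero neighbors and is labeled an outlier.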

The code is written for clarity of presentation, not for runtime efficiency.

