A Clustering Algorithm Based on Mercer Kernel Functions


 

Characteristics of the traditional algorithm and of this one

The traditional C-means clustering algorithm does not transform the sample features; it clusters the raw samples directly. As a result, its effectiveness depends largely on the distribution of the samples. The algorithm described here instead maps the samples into a high-dimensional feature space through a Mercer kernel and clusters them there.

 

 

Distance selection

Assume that a sample x is mapped into a high-dimensional feature space by a nonlinear function φ(x). The Euclidean distance in that space is:

dist(x, y) = sqrt(||φ(x) - φ(y)||^2) = sqrt(φ(x)·φ(x) + φ(y)·φ(y) - 2·φ(x)·φ(y))

Clearly, if we define K(x_i, x_j) = φ(x_i)·φ(x_j), then:

dist(x, y) = sqrt(K(x, x) - 2·K(x, y) + K(y, y))

 

In this way, the nonlinear map φ is replaced by K, a binary scalar (kernel) function, so distances in the feature space can be computed without ever evaluating φ explicitly.
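
To make the formula concrete, here is a minimal sketch in Python of the kernel-induced distance. The `kernel` argument is any Mercer kernel (the examples in the next section all qualify); the function name is just for illustration.

```python
import numpy as np

def kernel_distance(x, y, kernel):
    """Distance between x and y in the feature space induced by `kernel`.

    Uses dist(x, y)^2 = K(x, x) - 2*K(x, y) + K(y, y), so the nonlinear
    map phi never has to be evaluated explicitly.
    """
    d2 = kernel(x, x) - 2.0 * kernel(x, y) + kernel(y, y)
    return np.sqrt(max(d2, 0.0))  # clamp tiny negative values from round-off
```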

 

K function Selection

This part of the theory is deep, so I will give a few simple examples:

(1) Polynomial kernel: k(x, y) = (x·y + 1)^d, where d is a positive integer.

(2) Gaussian kernel: k(x, y) = exp(-a·||x - y||^2), with a > 0.

(3) Two-layer neural network (sigmoid) kernel: k(x, y) = tanh(b·(x·y) + c).
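
For reference, a sketch of the three kernels in Python (the parameter defaults d=2, a=1.0, b=1.0, c=0.0 are placeholders, not values prescribed by the article):

```python
import numpy as np

def polynomial_kernel(x, y, d=2):
    """k(x, y) = (x . y + 1)^d, with d a positive integer."""
    return (np.dot(x, y) + 1.0) ** d

def gaussian_kernel(x, y, a=1.0):
    """k(x, y) = exp(-a * ||x - y||^2), with a > 0."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-a * np.dot(diff, diff))

def sigmoid_kernel(x, y, b=1.0, c=0.0):
    """k(x, y) = tanh(b * (x . y) + c), the two-layer neural network kernel."""
    return np.tanh(b * np.dot(x, y) + c)
```

Any of these can be passed to kernel_distance above, e.g. kernel_distance(x, y, gaussian_kernel).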

 

Clustering Algorithm

(1) Determine the number of classes, num_class;

(2) Initialize the cluster centers center[k][i], the center of class i at iteration k, for i = 1...num_class;

(3) Determine the membership matrix matrix_class[j][i], which indicates whether sample j belongs to class i.

(4) Using the kernel matrix, compute two averages for each class i (a runnable sketch of both quantities appears after the algorithm steps):

avg_dis_between_center_sample(i, k) = sum_{j=1..num_sample} matrix_class[j][i] * K(center[k][i], x_j) / sum_{j=1..num_sample} matrix_class[j][i]

avg_dis_between_sample_sample(i) = sum_{l=1..num_sample} sum_{j=1..num_sample} matrix_class[l][i] * matrix_class[j][i] * K(x_l, x_j) / ( sum_{j=1..num_sample} matrix_class[j][i] )^2

 

(5) Compute the error.

For a given class, the relevant quantities can be pictured on a number line:

--------A----B--------C------->

Here A is the intra-class (sample-to-sample) average, B is the center-to-sample average at the previous iteration, and C is the center-to-sample average at the current iteration.

The per-class error is therefore:

e_i = [avg_dis_between_center_sample(i, k+1) - avg_dis_between_center_sample(i, k)] + [avg_dis_between_center_sample(i, k) - avg_dis_between_sample_sample(i)]

 

Here k is the iteration index.

 

The total error is: E = sum_i e_i.

(6) If the total error E < E_max, stop; otherwise go back to step (3).

 

The clustering result is then available in matrix_class.
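
The two averages in step (4) and the per-class error in step (5) translate directly into code. The sketch below follows the formulas as written above, assuming samples X, explicit class centers as in step (2), a 0/1 membership matrix matrix_class, and a kernel function k; names and shapes are illustrative, not prescribed by the article.

```python
import numpy as np

def class_kernel_averages(X, centers, matrix_class, kernel):
    """Step (4): the two kernel averages for every class.

    X            : list/array of samples
    centers      : list/array of class centers (one per class)
    matrix_class : (num_sample, num_class) 0/1 membership matrix
    kernel       : binary scalar kernel function k(x, y)
    """
    n, c = matrix_class.shape
    counts = matrix_class.sum(axis=0)  # samples per class (assumed non-empty)

    # avg_dis_between_center_sample(i): average kernel value between the
    # center of class i and the samples assigned to class i
    Kc = np.array([[kernel(centers[i], X[j]) for j in range(n)] for i in range(c)])
    avg_center_sample = (Kc * matrix_class.T).sum(axis=1) / counts

    # avg_dis_between_sample_sample(i): average kernel value among the
    # samples assigned to class i
    K = np.array([[kernel(X[j], X[l]) for l in range(n)] for j in range(n)])
    avg_sample_sample = np.einsum('ji,jl,li->i', matrix_class, K, matrix_class) / counts ** 2

    return avg_center_sample, avg_sample_sample

def class_errors(avg_cs_curr, avg_cs_prev, avg_ss):
    """Step (5): per-class error e_i from current/previous averages."""
    return (avg_cs_curr - avg_cs_prev) + (avg_cs_prev - avg_ss)
```

The total error for step (6) is then class_errors(...).sum(), to be compared against E_max.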

 

 

Code

The steps above are detailed enough that you can implement the algorithm yourself; a rough sketch follows.
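
Here is a compact, self-contained sketch in Python. The article does not spell out how the membership matrix is updated, so this sketch uses the standard kernel C-means rule (each center is the implicit feature-space mean of its class, and every sample is reassigned to the nearest center), and the stopping test compares the change in total distortion against E_max, a simplification of steps (5)-(6). Function and parameter names are illustrative.

```python
import numpy as np

def kernel_c_means(X, num_class, kernel=None, max_iter=100, e_max=1e-4, seed=0):
    """Kernel C-means clustering sketch; returns the 0/1 membership matrix."""
    X = np.asarray(X, dtype=float)
    n = len(X)
    if kernel is None:
        # default: Gaussian kernel with a = 1 (see "K function Selection")
        kernel = lambda u, v: np.exp(-np.sum((u - v) ** 2))

    # kernel (Gram) matrix K[j, l] = k(x_j, x_l)
    K = np.array([[kernel(X[j], X[l]) for l in range(n)] for j in range(n)])

    # (1)-(2) number of classes and an initial (random) assignment
    rng = np.random.default_rng(seed)
    labels = rng.integers(num_class, size=n)

    prev_err = None
    for _ in range(max_iter):
        # (3) membership matrix: matrix_class[j, i] = 1 if sample j is in class i
        matrix_class = np.eye(num_class)[labels]
        counts = np.maximum(matrix_class.sum(axis=0), 1)

        # (4) kernel averages: cross term and within-class term
        cross = (K @ matrix_class) / counts                              # (n, num_class)
        within = np.einsum('ji,jl,li->i', matrix_class, K, matrix_class) / counts ** 2

        # squared feature-space distance of each sample to each class mean:
        # ||phi(x_j) - m_i||^2 = K(j, j) - 2 * cross[j, i] + within[i]
        dist2 = K.diagonal()[:, None] - 2.0 * cross + within

        # reassign every sample to its nearest class mean
        labels = dist2.argmin(axis=1)

        # (5)-(6) total error: change in the within-class distortion
        err = dist2[np.arange(n), labels].sum()
        if prev_err is not None and abs(prev_err - err) < e_max:
            break
        prev_err = err

    return np.eye(num_class)[labels]

# Example: two well-separated 2-D blobs should split into two classes
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(6, 1, (20, 2))])
    print(kernel_c_means(X, num_class=2).argmax(axis=1))
```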
