Process analysis of SMO algorithm for support vector machine


1. The dual form of the final SVM optimization problem
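
In its standard form, the dual of the soft-margin SVM training problem is (a restatement of the usual result, consistent with the code below):

\[ \max_{\alpha} \; W(\alpha) = \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} y_i y_j \alpha_i \alpha_j K(x_i, x_j) \]
\[ \text{s.t.} \quad 0 \le \alpha_i \le C, \qquad \sum_{i=1}^{m} \alpha_i y_i = 0 \]

Writing Q_ij = y_i y_j K(x_i, x_j), maximizing W is equivalent to minimizing f(α) = (1/2) αᵀQα − eᵀα under the same constraints, which is the form the gradients below are computed in.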

2. Caching of kernel functions

Because the matrix Q (with Q_ij = y_i y_j K(x_i, x_j)) is symmetric, only its lower triangle needs to be stored, so the memory footprint is m(m+1)/2 entries instead of m².

The mapping from a pair (i, j) to the packed triangular index is:

[CPP]
    #define OFFSET(x, y) ((x) > (y) ? ((((x) + 1) * (x)) >> 1) + (y) : ((((y) + 1) * (y)) >> 1) + (x))
    // ...
    // fill the lower triangle of Q with Q[i][j] = y[i] * y[j] * K(x[i], x[j])
    for (unsigned i = 0; i < count; ++i)
        for (unsigned j = 0; j <= i; ++j)
            cache[OFFSET(i, j)] = y[i] * y[j] * kernel(x[i], x[j], dimision);
    // ...
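
For reference, a minimal allocation sketch for the packed cache (using the same count and cache names as above; error handling omitted):

[CPP]
    #include <stdlib.h>
    // the packed lower triangle holds count * (count + 1) / 2 doubles
    double *cache = malloc(sizeof(double) * ((size_t)count * (count + 1) / 2));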

3. Solving gradients

Since the α values are the variables of the dual problem, we differentiate the objective with respect to each α_i; the gradient then drives the choice of which α values to optimize.

Gradient:
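
With f(α) = (1/2) αᵀQα − eᵀα as above, the gradient with respect to each α_i is:

\[ G_i = \sum_{j=1}^{m} Q_{ij} \, \alpha_j - 1, \qquad Q_{ij} = y_i y_j K(x_i, x_j) \]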

[CPP]
    for (unsigned i = 0; i < count; ++i)
    {
        gradient[i] = -1;
        for (unsigned j = 0; j < count; ++j)
            gradient[i] += cache[OFFSET(i, j)] * alpha[j];
    }

Since the goal is to maximize W, when an α value is to be decreased, the larger the corresponding g the better; conversely, when an α value is to be increased, the smaller g the better.

4. Constraints of the sequential minimal optimization (SMO) method

Each iteration selects two α values to optimize while all other α values are held constant; the pair is linked by the constraint conditions:
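
With x0 and x1 denoting the two chosen indices:

\[ \sum_{i=1}^{m} \alpha_i y_i = 0 \;\Longrightarrow\; \alpha_{x_0} y_{x_0} + \alpha_{x_1} y_{x_1} = \zeta \quad (\text{a constant}), \qquad 0 \le \alpha_{x_0}, \alpha_{x_1} \le C \]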

After the optimization:
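
The pair must still satisfy the same equality, which ties the two changes together through λ = y_{x0} y_{x1}:

\[ \alpha_{x_0}^{new} y_{x_0} + \alpha_{x_1}^{new} y_{x_1} = \alpha_{x_0} y_{x_0} + \alpha_{x_1} y_{x_1} \;\Longrightarrow\; \Delta\alpha_{x_1} = -\lambda \, \Delta\alpha_{x_0}, \qquad \lambda = y_{x_0} y_{x_1} \]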

5. Pair selection rules

Because each α lies in the interval [0, C], how far the pair can move is limited by these box constraints, and the choice of which pair to optimize follows from the gradient.

If the two selected labels differ, i.e. λ = y_{x0}·y_{x1} = −1, then α_{x0} and α_{x1} must increase or decrease together.

If the two selected labels are the same, i.e. λ = 1, then α_{x0} and α_{x1} move in opposite directions: when one increases, the other decreases by the same amount.

Collating the two cases (for simplicity, they differ only in the sign in front of G, which is absorbed by the sign of y) gives a single selection rule: x0 is the index with the largest −y_i·G_i among those whose α can still move in the corresponding direction (α_i < C when y_i = +1, α_i > 0 when y_i = −1), and x1 is the index with the smallest −y_i·G_i among the opposite set, as shown below.
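
Formally, this is the standard maximal-violating-pair rule; the index sets below are exactly the conditions tested in the code:

\[ x_0 = \arg\max_{i \in I_{up}} \; -y_i G_i, \qquad x_1 = \arg\min_{i \in I_{low}} \; -y_i G_i \]
\[ I_{up} = \{ i : (\alpha_i < C \wedge y_i = +1) \vee (\alpha_i > 0 \wedge y_i = -1) \}, \qquad I_{low} = \{ i : (\alpha_i < C \wedge y_i = -1) \vee (\alpha_i > 0 \wedge y_i = +1) \} \]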

[CPP]
    unsigned x0 = 0, x1 = 1;
    // select the pair of alpha values to optimize, based on the gradient
    // (DBL_MAX comes from <float.h>)
    {
        double gmax = -DBL_MAX, gmin = DBL_MAX;
        for (unsigned i = 0; i < count; ++i)
        {
            if ((alpha[i] < C && y[i] == POS || alpha[i] > 0 && y[i] == NEG) && -y[i] * gradient[i] > gmax)
            {
                gmax = -y[i] * gradient[i];
                x0 = i;
            }
            else if ((alpha[i] < C && y[i] == NEG || alpha[i] > 0 && y[i] == POS) && -y[i] * gradient[i] < gmin)
            {
                gmin = -y[i] * gradient[i];
                x1 = i;
            }
        }
    }

6. Solving the subproblem

Each α must stay within the interval [0, C], so any value that leaves the box after the unconstrained update is clipped back, using the following adjustment rules.

There are two cases. If λ = −1, i.e. the two labels differ:

Substituting into the objective gives the update:
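
Reading the update off the code below, with G the gradient and Q the cached matrix:

\[ \delta = \frac{-G_{x_0} - G_{x_1}}{Q_{x_0 x_0} + Q_{x_1 x_1} + 2 Q_{x_0 x_1}}, \qquad \alpha_{x_0} \leftarrow \alpha_{x_0} + \delta, \quad \alpha_{x_1} \leftarrow \alpha_{x_1} + \delta \]

followed by clipping into [0, C] while preserving the difference α_{x0} − α_{x1}.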

[CPP]
    if (y[x0] != y[x1])
    {
        double coef = cache[OFFSET(x0, x0)] + cache[OFFSET(x1, x1)] + 2 * cache[OFFSET(x0, x1)];
        if (coef <= 0) coef = DBL_MIN;
        double delta = (-gradient[x0] - gradient[x1]) / coef;
        double diff = alpha[x0] - alpha[x1];
        alpha[x0] += delta;
        alpha[x1] += delta;
        unsigned max = x0, min = x1;
        if (diff < 0)
        {
            max = x1;
            min = x0;
            diff = -diff;
        }
        if (alpha[max] > C)
        {
            alpha[max] = C;
            alpha[min] = C - diff;
        }
        if (alpha[min] < 0)
        {
            alpha[min] = 0;
            alpha[max] = diff;
        }
    }

If λ = 1, i.e. the two labels are the same, the update becomes:
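
Again reading off the code below:

\[ \delta = \frac{-G_{x_0} + G_{x_1}}{Q_{x_0 x_0} + Q_{x_1 x_1} - 2 Q_{x_0 x_1}}, \qquad \alpha_{x_0} \leftarrow \alpha_{x_0} + \delta, \quad \alpha_{x_1} \leftarrow \alpha_{x_1} - \delta \]

followed by clipping into [0, C] while preserving the sum α_{x0} + α_{x1}.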

[CPP]
    else
    {
        double coef = cache[OFFSET(x0, x0)] + cache[OFFSET(x1, x1)] - 2 * cache[OFFSET(x0, x1)];
        if (coef <= 0) coef = DBL_MIN;
        double delta = (-gradient[x0] + gradient[x1]) / coef;
        double sum = alpha[x0] + alpha[x1];
        alpha[x0] += delta;
        alpha[x1] -= delta;
        unsigned max = x0, min = x1;
        if (alpha[x0] < alpha[x1])
        {
            max = x1;
            min = x0;
        }
        if (alpha[max] > C)
        {
            alpha[max] = C;
            alpha[min] = sum - C;
        }
        if (alpha[min] < 0)
        {
            alpha[min] = 0;
            alpha[max] = sum;
        }
    }



The gradient is then updated with the following formula:
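
\[ G_i \leftarrow G_i + Q_{i, x_0} \, \Delta\alpha_{x_0} + Q_{i, x_1} \, \Delta\alpha_{x_1} \]

where Δα_{x0} and Δα_{x1} (delta0 and delta1 in the code) are the actual changes applied to the two α values after clipping.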

[CPP]
    // delta0 and delta1 are the actual (post-clipping) changes to alpha[x0] and alpha[x1]
    for (unsigned i = 0; i < count; ++i)
        gradient[i] += cache[OFFSET(i, x0)] * delta0 + cache[OFFSET(i, x1)] * delta1;

7. Calculation of weights

The calculation formula is as follows:
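
Assuming an explicit weight vector (which the svm->weight field in the code below suggests), the standard formulas are:

\[ w = \sum_{i=1}^{m} \alpha_i y_i x_i, \qquad b = -\frac{\min_{y_i = +1} w \cdot x_i \; + \; \max_{y_i = -1} w \cdot x_i}{2} \]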

[CPP]
    double maxneg = -DBL_MAX, minpos = DBL_MAX;
    SVM *svm = &bundle->svm;
    for (unsigned i = 0; i < count; ++i)
    {
        double wx = kernel(svm->weight, data[i], dimision);
        if (y[i] == POS && minpos > wx)
            minpos = wx;
        else if (y[i] == NEG && maxneg < wx)
            maxneg = wx;
    }
    svm->bias = -(minpos + maxneg) / 2;
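
A minimal prediction sketch under the same assumptions (the SVM struct with weight and bias fields and the kernel signature follow the snippet above; predict itself is a hypothetical helper, not part of the original code):

[CPP]
    // hypothetical helper: classify one sample with the trained model
    int predict(const SVM *svm, const double *x, unsigned dimision)
    {
        return kernel(svm->weight, x, dimision) + svm->bias >= 0 ? POS : NEG;
    }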

Code Address: http://git.oschina.net/fanwenjie/SVM-iris/

