Can machine learning really work? (2) (using the two-dimensional PLA algorithm as an example)


One problem: in most cases M (the size of the hypothesis set) is infinitely large, as it is for the PLA algorithm, whose hypotheses are all the lines in the plane. Does that make our principle 1 impossible to use?
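To see the problem concretely, here is a minimal sketch (my own illustration, not from the original post), assuming that "principle 1" refers to the union-bound form of Hoeffding's inequality from part (1), P[|Ein(h) - Eout(h)| > eps for some h in H] <= 2*M*exp(-2*eps^2*N). The bound grows linearly with M, so it becomes vacuous long before M reaches infinity.

import math

def hoeffding_union_bound(M, N, eps):
    # Union-bound form of Hoeffding: P[|Ein(h) - Eout(h)| > eps for some h] <= 2*M*exp(-2*eps^2*N).
    # M: number of hypotheses, N: number of training examples, eps: tolerance.
    return 2 * M * math.exp(-2 * eps ** 2 * N)

# A small hypothesis set gives an informative bound ...
print(hoeffding_union_bound(M=100, N=1000, eps=0.1))    # about 4.1e-07

# ... but the bound scales with M, so for a huge (let alone infinite) M it exceeds 1 and says nothing.
print(hoeffding_union_bound(M=10**9, N=1000, eps=0.1))  # about 4.1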

Let's try to do something about it:

STEP 1: Find the effective number of hypotheses to replace M

What does that mean? The union-bound derivation adds up the bad-event probabilities of every single hypothesis. But in the PLA algorithm, for example, two hypotheses H1 and H2 that are almost the same line in the plane behave almost identically: if D looks good for H1, it looks good for H2 as well. The bad events of similar hypotheses overlap heavily, so the union bound badly over-estimates the probability.

Now let's change perspective and look at the problem from the dataset's point of view.

What does that mean? With only one point x1 in D, the whole of H splits into at most two classes: the hypotheses that classify x1 as +1 and the hypotheses that classify x1 as -1.

By now the idea should be clear: for each h, look at the pattern of labels it assigns to the points of D. Such a pattern is called a dichotomy. The effective number of hypotheses is then the number of distinct dichotomies H can produce on D.
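As a quick illustration (the dataset and the sampling scheme below are my own, not from the post), this sketch estimates how many dichotomies 2-D perceptrons h(x) = sign(w.x + b) actually realize on a small set of points by sampling random lines. Sampling only gives a lower bound on the true count, but it already shows that the effective number is far below "infinitely many lines".

import numpy as np

def count_dichotomies(points, n_lines=100_000, seed=0):
    # Estimate (from below) how many distinct dichotomies 2-D linear classifiers
    # h(x) = sign(w.x + b) realize on the given points, by sampling random lines
    # and collecting the distinct sign patterns they produce.
    rng = np.random.default_rng(seed)
    X = np.asarray(points, dtype=float)   # shape (N, 2)
    patterns = set()
    for _ in range(n_lines):
        w = rng.normal(size=2)
        b = rng.normal()
        labels = np.sign(X @ w + b)
        labels[labels == 0] = 1           # break ties toward +1
        patterns.add(tuple(labels.astype(int)))
    return len(patterns)

# 3 points in general position: all 2^3 = 8 dichotomies should appear.
print(count_dichotomies([(0, 0), (1, 0), (0, 1)]))
# 4 points: far fewer than 2^4 = 16 (14 for points in general position, because
# XOR-type labelings are not linearly separable).
print(count_dichotomies([(0, 0), (1, 0), (0, 1), (1, 1)]))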

However, this count still depends on the specific data in D, so we introduce the growth function to remove that dependency: m_H(N) = max over x1, x2, ..., xN of |H(x1, x2, ..., xN)|, i.e. the largest number of dichotomies H can produce on any N points.

If the growth function is polynomial in N rather than exponential, then we can substitute it for M and still use principle 1 to design a learning algorithm A.
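The rigorous substitution of m_H(N) for M needs the full VC argument with extra constants, which this post does not go into; the sketch below (my own, purely numerical) only illustrates why the replacement helps: a polynomial growth function times the exponentially decaying Hoeffding factor goes to zero, while an exponential growth function does not.

import math

def log_bound(log_mH, N, eps):
    # Natural log of 2 * mH(N) * exp(-2 * eps^2 * N), computed in log space so that
    # an exponentially large mH(N) does not overflow.
    return math.log(2) + log_mH - 2 * eps ** 2 * N

eps = 0.1
for N in (1000, 2000, 5000):
    poly_log_mH = 3 * math.log(N)   # a polynomial growth function, mH(N) ~ N^3
    expo_log_mH = N * math.log(2)   # an exponential growth function, mH(N) = 2^N
    print(N, log_bound(poly_log_mH, N, eps), log_bound(expo_log_mH, N, eps))
# With a polynomial growth function the log of the bound heads toward minus infinity
# (the probability of a bad event goes to 0); with 2^N it blows up, so the bound
# guarantees nothing.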

STEP 2: Show that the effective number is polynomial, not exponential

As noted above, we only need to prove that the effective number is polynomial rather than exponential, and we are done. To do that, we first introduce a few concepts that will assist the proof.

Concepts: break point and shatter

These two concepts are central to bounding the growth function.

With 2 input points, the hypothesis set H of PLA (lines in the plane) can realize all 2^2 = 4 classifications, so we say the 2 points are shattered by H.

With 3 input points (any 3 not on a line), the PLA hypothesis set can realize all 2^3 = 8 classifications, so those 3 points are shattered.

However, no set of 4 points can be shattered by the PLA hypothesis set.

In this case, 4 is the break point of H.
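One way to check these shattering claims mechanically (my own sketch, not part of the original argument) is to note that a labeling is realizable by a 2-D perceptron exactly when the linear feasibility problem y_i * (w.x_i + b) >= 1 has a solution. Here scipy.optimize.linprog is used as the feasibility solver, and the specific point sets are assumptions chosen for the example.

from itertools import product

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    # Is there a line sign(w.x + b) producing labels y on points X? Equivalent to the
    # feasibility of y_i * (w.x_i + b) >= 1 (strict separation), posed as a linear
    # program over the variables (w1, w2, b) with a zero objective.
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])   # rewritten as A_ub @ v <= -1
    b_ub = -np.ones(len(X))
    res = linprog(c=np.zeros(3), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
    return res.status == 0   # status 0: a feasible (hence optimal) point was found

def shattered(points):
    # True if 2-D perceptrons realize every one of the 2^N labelings of the points.
    return all(linearly_separable(points, labels)
               for labels in product([-1, 1], repeat=len(points)))

print(shattered([(0, 0), (1, 0), (0, 1)]))          # 3 non-collinear points: True
print(shattered([(0, 0), (1, 0), (0, 1), (1, 1)]))  # 4 points: False (the XOR labeling fails)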

Suppose k is the break point of H. The proof that follows is elementary but ingenious.

Define the bounding function B(N, k): the maximum number of dichotomies on N points such that no k of the points are shattered.

Then, whenever k is a break point of H, we have m_H(N) ≤ B(N, k).

How do we evaluate or bound the remaining quantity B(N, k)?

Take B(4, 3) as an example and see whether it can be reduced to quantities of the form B(3, ·).

B(4, 3) = 11, and these 11 dichotomies can be split into two groups: 2α dichotomies in which x4 appears in pairs (the same pattern on x1, x2, x3 occurs once with x4 = +1 and once with x4 = -1), and β dichotomies in which the pattern on x1, x2, x3 occurs with only a single value of x4. So 2α + β = 11.

Because k = 3, no 3 points may be shattered. Restricted to x1, x2, x3, the distinct patterns are the α paired ones plus the β unpaired ones, and they must not shatter these 3 points, so α + β ≤ B(3, 3).

And for the 2α dichotomies in which x4 appears in pairs, the α patterns on x1, x2, x3 must not shatter any 2 of those points; otherwise, adding x4 (which already takes both values) would give 3 shattered points. Hence α ≤ B(3, 2). Combining the two: B(4, 3) = 2α + β = (α + β) + α ≤ B(3, 3) + B(3, 2), and the same argument gives B(N, k) ≤ B(N-1, k) + B(N-1, k-1) in general.
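The value B(4, 3) = 11 can also be verified by brute force (a sketch of my own, only practical for tiny N): enumerate sets of dichotomies on 4 points and keep the largest one in which no 3 points are shattered.

from itertools import combinations, product

def shatters_some_k_points(dichotomies, k):
    # True if the given dichotomies (tuples of 0/1 over N points) shatter some k of the
    # N points, i.e. all 2^k patterns appear on those coordinates.
    N = len(dichotomies[0])
    for cols in combinations(range(N), k):
        projected = {tuple(d[c] for c in cols) for d in dichotomies}
        if len(projected) == 2 ** k:
            return True
    return False

def bounding_function_bruteforce(N, k):
    # B(N, k): the largest set of dichotomies on N points in which no k points are
    # shattered. Exponential-time search, only meant for tiny N like this example.
    all_patterns = list(product((0, 1), repeat=N))
    for size in range(2 ** N, 0, -1):
        if any(not shatters_some_k_points(subset, k)
               for subset in combinations(all_patterns, size)):
            return size
    return 0

print(bounding_function_bruteforce(4, 3))   # expected: 11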

From this recursion, mathematical induction gives the closed-form bound B(N, k) ≤ C(N, 0) + C(N, 1) + ... + C(N, k-1), a polynomial of degree k-1 in N; this right-hand side is then an upper bound on the growth function. (One can in fact show that the ≤ is an equality.)
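A small sketch (mine, mirroring the recursion above) computes the upper bound from B(N, k) ≤ B(N-1, k) + B(N-1, k-1) with the base cases B(N, 1) = 1 and B(1, k) = 2, and checks that it matches the binomial sum and stays polynomial while 2^N explodes.

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def B_upper(N, k):
    # Upper bound on the bounding function via the recursion from the text:
    # B(N, k) <= B(N-1, k) + B(N-1, k-1), with B(N, 1) = 1 and B(1, k) = 2 for k >= 2.
    if k == 1:
        return 1
    if N == 1:
        return 2
    return B_upper(N - 1, k) + B_upper(N - 1, k - 1)

def binomial_sum(N, k):
    # Closed form of the same bound: sum_{i=0}^{k-1} C(N, i), degree k-1 in N.
    return sum(comb(N, i) for i in range(k))

for N in (4, 10, 50):
    print(N, B_upper(N, 3), binomial_sum(N, 3), 2 ** N)
# N = 4 gives 11 in both columns, and both stay polynomial (far below 2^N) as N grows.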
