Machine Learning (III) -- Inductive Bias

Source: Internet
Author: User


Inductive bias:

Inductive bias: a learning algorithm's preference for certain types of hypotheses during the learning process.

Informally, it answers questions of the form "which model is better?"

How inductive bias shows up, in two kinds of problems:

As mentioned in the article on hypothesis spaces, several hypotheses consistent with the training samples may remain after training -- for example, these last three:

1. Good spouse = body size any + rich + orientation any

2. Good spouse = body size any + wealth any + heterosexual

3. Good spouse = body size any + rich + heterosexual

(I do have to poke fun at the definition I gave last time.)

Even being "as general as possible", the training samples alone cannot eliminate two of these three hypotheses, yet the algorithm must commit to one. The preference it uses to favor a certain kind of hypothesis is its inductive bias.

In practice, the weights given to these hypotheses are not necessarily equal, and this is something the algorithm must weigh carefully.
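The situation above can be reproduced with a brute-force version-space search. The attribute values and training samples below are hypothetical stand-ins for the ones in the previous article, chosen so that exactly the three hypotheses listed survive -- a sketch, not the book's actual data.

```python
from itertools import product

# Hypothetical attribute values; '*' means "any". These are made up for
# illustration -- the real training samples are in the previous article.
BODY = ['slim', 'plump', '*']
WEALTH = ['rich', 'poor', '*']
ORIENT = ['hetero', 'homo', '*']

def covers(hyp, sample):
    """A hypothesis covers a sample if every attribute matches or is '*'."""
    return all(h == s or h == '*' for h, s in zip(hyp, sample))

def version_space(samples):
    """All hypotheses consistent with every (sample, label) pair."""
    return [h for h in product(BODY, WEALTH, ORIENT)
            if all(covers(h, x) == y for x, y in samples)]

# Two positive samples and one negative, chosen so that exactly the three
# hypotheses from the text remain consistent.
train = [(('slim', 'rich', 'hetero'), True),
         (('plump', 'rich', 'hetero'), True),
         (('slim', 'poor', 'homo'), False)]

for h in version_space(train):
    print(h)  # prints the three surviving hypotheses
```

With this data the training samples cannot distinguish the three survivors; any further choice among them is pure inductive bias.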

Inductive bias is even more visible in regression. The points in the figure are training samples, and in theory every curve passing through them satisfies the "conditions" of the hypothesis, but we intuitively prefer the smooth blue dashed curve over the red one. If that preference is the algorithm's inductive bias, it will output the smooth blue curve as the model we consider "right".
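The "many curves through the same points" situation is easy to construct. In the sketch below the training points and both hypotheses are my own invention: the points lie on the line y = x, the smooth hypothesis is that line, and the wiggly one adds a polynomial that vanishes at every training x, so both fit the training data exactly yet disagree everywhere else.

```python
# Hypothetical training points lying on the line y = x.
xs = [0.0, 1.0, 2.0, 3.0]

def smooth(x):
    """The 'blue dashed' hypothesis: the straight line y = x."""
    return x

def wiggly(x):
    """The 'red' hypothesis: agrees with smooth() on every training
    point (the product term vanishes there) but oscillates in between."""
    prod = 1.0
    for xi in xs:
        prod *= (x - xi)
    return x + prod

# Both hypotheses fit the training samples exactly...
assert all(abs(smooth(xi) - wiggly(xi)) < 1e-9 for xi in xs)
# ...but disagree off the training set:
print(smooth(1.5), wiggly(1.5))  # prints 1.5 2.0625
```

The training data alone cannot separate the two; preferring the smooth one is a bias we impose from outside.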

-------------------------Selection Methods-------------------------

Occam's razor: if multiple hypotheses are consistent with the observations, choose the simplest one.

Ten thousand people have ten thousand notions of "simple" (lol) -- even the word "simple" is not simple.

Sometimes Occam's razor does not apply: for hypotheses 1 and 2 above we cannot say which is "simpler", so some other mechanism is needed to break the tie.

Nor is the razor absolute truth: it is only a useful rule of thumb, and it is entirely possible that the test samples agree better with the red curve. It is a matter of perspective -- as the Marxist-Leninist saying goes, analyze specific problems specifically (lol).

Reluctant as I am to admit it, the red and blue curves essentially have the same expected error // infuriating, I know (lol).

(The mathematical proof is on pp. 8-9.) This is the No Free Lunch (NFL) theorem: no matter how clever learning algorithm 1 is and how clumsy algorithm 2 is, their expected performance is the same.

But the NFL theorem rests on a premise: all "problems" occur with equal probability -- which is quite detached from any specific problem.
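The equal-probability premise can be checked on a toy case. The sketch below is my own construction, not the book's proof: it enumerates every possible target function on a four-point domain and shows that a "dumb" algorithm and a seemingly cleverer one have the same average off-training-set error.

```python
from itertools import product

domain = range(4)      # a tiny finite input space
train_x = [0, 1]       # observed points
test_x = [2, 3]        # off-training-set points

def alg_zero(target):
    """'Dumb' learner: always predicts 0, ignoring the data."""
    return lambda x: 0

def alg_majority(target):
    """'Clever' learner: predicts the majority training label
    (it only looks at the labels of the training points)."""
    maj = 1 if sum(target[i] for i in train_x) >= 1 else 0
    return lambda x: maj

def mean_ots_error(alg):
    """Average off-training-set error over ALL possible target functions,
    i.e. assuming every labeling of the domain is equally likely."""
    errs = []
    for target in product([0, 1], repeat=len(domain)):
        h = alg(target)
        errs.append(sum(h(x) != target[x] for x in test_x) / len(test_x))
    return sum(errs) / len(errs)

print(mean_ots_error(alg_zero), mean_ots_error(alg_majority))  # 0.5 0.5
```

Averaged over every possible "problem", both algorithms score exactly 0.5 off the training set -- which is precisely why the uniform-over-problems premise matters: real problems are not uniformly distributed.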


Finally, I wish you a happy study ~
