Machine Learning Foundations, Lecture 9: Linear Regression

This blog has migrated to Marcovaldo's blog (http://marcovaldong.github.io/).

Lecture 9 of Machine Learning Foundations introduces the linear regression problem; starting from this lecture, the course presents concrete machine learning algorithms. The blogger has already studied most of the later material, so the notes may be abbreviated in places.

Linear Regression Problem

Linear regression is again introduced with the credit-card example, but this time the task is no longer classification: the algorithm has to output a credit limit based on the customer's information. The algorithm assigns a weight to each feature and computes the weighted sum of the feature values directly to obtain the credit limit.
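Written out in standard notation (with the constant feature $x_0 = 1$ included in $x$), the hypothesis is just a weighted sum of the features:

$$h(x) = w^T x = \sum_{i=0}^{d} w_i x_i$$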

The training data set can be drawn in a coordinate system as in the figure below, with each data point shown as a circle (x is drawn as one- or two-dimensional here purely for visualization; in practice x is high-dimensional). What the linear regression algorithm has to do is find a line or hyperplane that fits the training data as well as possible, i.e. the shorter the red residual segments in the figure, the better.

The error measure used here is the squared error: $\mathrm{err}(\hat{y}, y) = (\hat{y} - y)^2$, so the in-sample error is $E_{in}(w) = \frac{1}{N}\sum_{n=1}^{N}(w^T x_n - y_n)^2$.

This section ends with a quiz (figure omitted).

Writing $E_{in}(w)$ in matrix form: $E_{in}(w) = \frac{1}{N}\sum_{n=1}^{N}(w^T x_n - y_n)^2 = \frac{1}{N}\lVert Xw - y\rVert^2$, where the $n$-th row of $X$ is $x_n^T$ and $y$ stacks the labels $y_n$.

$E_{in}(w)$ is a continuous, differentiable, convex function, so the goal now is to find a $w_{lin}$ such that $\nabla E_{in}(w_{lin}) = 0$.
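For reference, the gradient and its zero (the standard result the lecture arrives at) are

$$\nabla E_{in}(w) = \frac{2}{N}\left(X^T X w - X^T y\right) = 0 \;\Rightarrow\; w_{lin} = (X^T X)^{-1} X^T y = X^{\dagger} y,$$

where $X^{\dagger} = (X^T X)^{-1} X^T$ is the pseudo-inverse of $X$ when $X^T X$ is invertible; in the singular case $X^{\dagger}$ is still defined, and the same formula $w_{lin} = X^{\dagger} y$ applies.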

Since the blogger had already gone through Andrew Ng's machine learning course, the step-by-step derivation is omitted here. Summing up, the linear regression algorithm does the following (figure omitted): build the data matrix $X$ and label vector $y$ from the training set, compute the pseudo-inverse $X^{\dagger}$, and return $w_{lin} = X^{\dagger} y$.
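A minimal NumPy sketch of this one-step algorithm (an illustration of the steps above, not code from the lecture):

```python
import numpy as np

def linear_regression(X, y):
    """One-step linear regression: w_lin = pseudo-inverse(X) @ y.

    X has shape (N, d+1) with the constant feature x_0 = 1 in the first column;
    y has shape (N,).
    """
    X_dagger = np.linalg.pinv(X)   # equals (X^T X)^{-1} X^T when X^T X is invertible
    return X_dagger @ y

# Toy usage: recover y ~ 1 + 2x from noisy samples.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=50)
y = 1 + 2 * x + 0.1 * rng.standard_normal(50)
X = np.column_stack([np.ones_like(x), x])   # add the constant feature x_0 = 1
w_lin = linear_regression(X, y)
E_in = np.mean((X @ w_lin - y) ** 2)        # in-sample squared error
print(w_lin, E_in)
```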

This section also ends with a quiz (figure omitted).

Unlike PLA, linear regression does not obtain its final hypothesis through iterative updates; it looks more like an analytic method, where the parameter $w$ is obtained directly from a formula. Linear regression is nevertheless a machine learning algorithm, because it does the following: it optimizes $E_{in}$ and finds its minimum, it achieves $E_{out} \approx E_{in}$, and the iteration is implicitly done inside the computation of the pseudo-inverse.

Now let's talk about the physical meaning of $\overline{E_{in}}$ and $\overline{E_{out}}$, the in-sample and out-of-sample errors averaged over all possible training sets of size $N$.
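With noise level $\sigma^2$ and $d+1$ parameters, the lecture works these averages out to roughly

$$\overline{E_{in}} = \sigma^2\left(1 - \frac{d+1}{N}\right), \qquad \overline{E_{out}} = \sigma^2\left(1 + \frac{d+1}{N}\right),$$

so both converge to the noise level as $N \to \infty$, and the expected generalization gap is about $\frac{2(d+1)}{N}\sigma^2$.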

The following figure shows the learning curves of $\overline{E_{in}}$ and $\overline{E_{out}}$ as functions of $N$ (figure omitted).
