Derivation of the logistic regression loss function

When explaining the principles of the logistic regression algorithm, we gave the definition of its loss function; we restate the notation here.

For a single sample, denote the desired output (the label) by y and the actual output (the prediction) by ŷ. The loss function of logistic regression can then be expressed as:

L(\hat{y}, y) = -\left[ y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right]
The cost function over the whole training set of m samples can accordingly be expressed as:

J(w, b) = \frac{1}{m} \sum_{i=1}^{m} L(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log \hat{y}^{(i)} + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \right]
Unlike the loss function, which is defined for a single sample, the cost function describes the relationship between the model parameters w and b and the optimization objective over the entire training set: as the two formulas show, the cost function is simply the average of the per-sample loss.

Let us first look at why this expression works as a loss function (a short numerical sketch follows the two cases below):

If the desired output is y = 1, the objective is min L(ŷ, y) = min[-log ŷ]; clearly, the larger ŷ is, the smaller the objective value becomes.
If the desired output is y = 0, the objective is min L(ŷ, y) = min[-log(1 - ŷ)]; clearly, the smaller ŷ is, the smaller the objective value becomes.
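A minimal numerical sketch (plain Python; the probability values are arbitrary illustrations, not taken from the text) makes this behavior concrete:

    import math

    def loss(y_hat, y):
        # Per-sample logistic regression loss:
        # L(y_hat, y) = -[y*log(y_hat) + (1 - y)*log(1 - y_hat)]
        return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

    # Desired output y = 1: the loss falls as y_hat grows toward 1.
    for y_hat in (0.1, 0.5, 0.9, 0.99):
        print(f"y=1, y_hat={y_hat:.2f}, loss={loss(y_hat, 1):.4f}")

    # Desired output y = 0: the loss falls as y_hat shrinks toward 0.
    for y_hat in (0.9, 0.5, 0.1, 0.01):
        print(f"y=0, y_hat={y_hat:.2f}, loss={loss(y_hat, 0):.4f}")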

Now let us see where this loss function comes from.
The logistic regression model is:

\hat{y} = \sigma(w^{T} x + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}}
The output ŷ is interpreted as the probability that y = 1 for a given x:

P(y = 1 \mid x) = \hat{y}

and consequently:

P(y = 0 \mid x) = 1 - \hat{y}
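As a minimal sketch (plain Python; the weights, bias, and input are hypothetical), the model and its probabilistic reading look like this:

    import math

    def predict(w, b, x):
        # y_hat = sigma(w·x + b): the model's estimate of P(y=1|x)
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))

    y_hat = predict(w=[0.5, -0.3], b=0.1, x=[2.0, 1.0])
    p_y1 = y_hat        # P(y=1|x)
    p_y0 = 1.0 - y_hat  # P(y=0|x)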
Because this is a binary classification problem, y takes only the values 1 or 0, so the two cases can be combined into a single formula:

P(y \mid x) = \hat{y}^{\,y} \, (1 - \hat{y})^{1 - y}
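A quick check (plain Python, with an arbitrary ŷ) confirms that the combined formula reproduces both cases:

    def p(y, y_hat):
        # Combined Bernoulli probability: y_hat^y * (1 - y_hat)^(1 - y)
        return y_hat ** y * (1 - y_hat) ** (1 - y)

    assert p(1, 0.8) == 0.8        # P(y=1|x) = y_hat
    assert p(0, 0.8) == 1 - 0.8    # P(y=0|x) = 1 - y_hat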
Because the logarithm is a strictly monotonically increasing function, maximizing P(y | x) is equivalent to maximizing log P(y | x). (In machine learning we rarely care which base the log uses, and often omit it entirely, writing just "log"; in strict mathematical notation this would be considered sloppy.) To simplify the subsequent derivation, we take the logarithm:

\log P(y \mid x) = y \log \hat{y} + (1 - y) \log(1 - \hat{y})

Negating this expression gives exactly the loss function defined above: minimizing L(ŷ, y) = -log P(y | x) is the same as maximizing the probability of the observed label.
For the cost function, we optimize w and b over the entire training set. Assuming the m samples are independent and identically distributed, the likelihood of the whole training set is the product of the per-sample probabilities, and taking the logarithm turns this product into a sum:

\log \prod_{i=1}^{m} P(y^{(i)} \mid x^{(i)}) = \sum_{i=1}^{m} \left[ y^{(i)} \log \hat{y}^{(i)} + (1 - y^{(i)}) \log(1 - \hat{y}^{(i)}) \right]

Negating and averaging over the m samples yields exactly the cost function J(w, b) given above.
Maximizing this likelihood is precisely maximum likelihood estimation; in practice, however, we usually minimize the cost function directly with gradient descent.
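A minimal gradient-descent sketch for this cost function (Python with NumPy; the toy data, learning rate, and iteration count are hypothetical choices, not from the text):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Hypothetical toy data: m = 100 samples, n = 2 features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    w = np.zeros(X.shape[1])
    b = 0.0
    lr = 0.1  # learning rate

    for _ in range(1000):
        y_hat = sigmoid(X @ w + b)   # predictions for all m samples
        # For the sigmoid + cross-entropy pair, dJ/dz simplifies to y_hat - y,
        # from which the gradients of J(w, b) follow.
        dz = y_hat - y
        dw = X.T @ dz / len(y)       # dJ/dw
        db = dz.mean()               # dJ/db
        w -= lr * dw
        b -= lr * db

The update relies on the well-known simplification that the derivative of the cross-entropy loss composed with the sigmoid collapses to ŷ - y, so no explicit logarithm appears in the gradient.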
