Cost function
For linear regression models, the cost function we defined was the sum of the squared errors of the model. In theory we could use the same definition for logistic regression, but the problem is that when we plug the logistic hypothesis $h_\theta(x) = \frac{1}{1+e^{-\theta^T x}}$ into that cost function, the result is a non-convex function.
This means the cost function has many local minima, which would prevent the gradient descent algorithm from reliably finding the global minimum.
Therefore, we redefine the cost function of logistic regression as:

$$J(\theta) = \frac{1}{m}\sum_{i=1}^{m} \mathrm{Cost}\left(h_\theta(x^{(i)}), y^{(i)}\right)$$

where

$$\mathrm{Cost}(h_\theta(x), y) = \begin{cases} -\log(h_\theta(x)) & \text{if } y = 1 \\ -\log(1 - h_\theta(x)) & \text{if } y = 0 \end{cases}$$
The relationship between $\mathrm{Cost}(h_\theta(x), y)$ and $h_\theta(x)$ is as follows: when the actual $y=1$ and $h_\theta(x)$ is also 1, the error is 0, but when $y=1$ and $h_\theta(x)$ is not 1, the error grows as $h_\theta(x)$ becomes smaller; likewise, when the actual $y=0$ and $h_\theta(x)$ is also 0, the cost is 0, but when $y=0$ and $h_\theta(x)$ is not 0, the error grows as $h_\theta(x)$ becomes larger.
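As a quick numeric check (using natural logarithms): if $y = 1$, a confident correct prediction $h_\theta(x) = 0.99$ incurs cost $-\log(0.99) \approx 0.01$, while a confident wrong prediction $h_\theta(x) = 0.01$ incurs $-\log(0.01) \approx 4.6$; the penalty grows without bound as $h_\theta(x) \to 0$.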
Since $y$ always equals either 0 or 1, this piecewise Cost function can be simplified into a single expression (when $y=1$ the second term vanishes, and when $y=0$ the first term vanishes, recovering the two cases above):

$$\mathrm{Cost}(h_\theta(x), y) = -y\log(h_\theta(x)) - (1-y)\log(1-h_\theta(x))$$
Substituting this into the cost function gives:

$$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[ y^{(i)}\log\left(h_\theta(x^{(i)})\right) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right) \right]$$
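As a concrete illustration, here is one way this cost and its gradient might be computed in Octave, the language used for the course exercises. This vectorized costFunction and its argument names are a sketch, not the course's official solution code:

```octave
% Sketch: logistic regression cost and gradient (vectorized).
% X is an m-by-(n+1) design matrix, y an m-by-1 vector of 0/1 labels,
% theta an (n+1)-by-1 parameter vector; all names are illustrative.
function [J, grad] = costFunction(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-X * theta));   % sigmoid hypothesis h_theta(x)
  J = -(1/m) * (y' * log(h) + (1 - y)' * log(1 - h));
  grad = (1/m) * X' * (h - y);      % partial derivative for each theta_j
end
```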
After obtaining such a cost function, we can use the gradient descent algorithm to find the parameters $\theta$ that minimize it.
The algorithm is:

Repeat {
$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta)$$
} (simultaneously updating all $\theta_j$)
After taking the derivative, this becomes:

Repeat {
$$\theta_j := \theta_j - \alpha \frac{1}{m}\sum_{i=1}^{m}\left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
} (simultaneously updating all $\theta_j$)
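A minimal Octave sketch of this update loop, assuming a design matrix X, label vector y, learning rate alpha, and iteration count num_iters are already defined (all names illustrative):

```octave
% Sketch of the gradient descent loop itself; not the course's code.
m = length(y);
theta = zeros(size(X, 2), 1);
for iter = 1:num_iters
  h = 1 ./ (1 + exp(-X * theta));             % current predictions
  theta = theta - (alpha / m) * X' * (h - y); % simultaneous update of all theta_j
end
```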
Note: Although the resulting gradient descent update looks identical to the one for linear regression, the hypothesis $h_\theta(x)$ here is the sigmoid function rather than a linear function, so the two algorithms are actually different. In addition, it is still advisable to perform feature scaling before applying gradient descent.
In addition, there are some alternatives to the gradient descent algorithm that are often used to minimize the cost function. These algorithms are more complex, but they usually do not require manually choosing a learning rate and are often faster than gradient descent. They include: conjugate gradient (Conjugate Gradient), BFGS (Broyden-Fletcher-Goldfarb-Shanno), and L-BFGS (limited-memory BFGS).
fminunc is an unconstrained minimization function available in both MATLAB and Octave. To use it, we need to provide the cost function and the derivative with respect to each parameter. Below is an example of code that uses the fminunc function in Octave.
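A minimal sketch following the usage pattern taught in the course; the quadratic toy cost and the names toyCost, optTheta, etc. are illustrative assumptions, and in practice the logistic regression costFunction sketched above would be passed instead (e.g. via an anonymous function such as @(t) costFunction(t, X, y)):

```octave
% Toy cost (theta1-5)^2 + (theta2-5)^2 with minimum at theta = [5; 5];
% a stand-in for the real logistic regression cost.
function [jVal, gradient] = toyCost(theta)
  jVal = (theta(1) - 5)^2 + (theta(2) - 5)^2;   % cost value
  gradient = zeros(2, 1);
  gradient(1) = 2 * (theta(1) - 5);             % d(jVal)/d(theta1)
  gradient(2) = 2 * (theta(2) - 5);             % d(jVal)/d(theta2)
end

options = optimset('GradObj', 'on', 'MaxIter', 100);  % we supply the gradient
initialTheta = zeros(2, 1);
[optTheta, functionVal, exitFlag] = fminunc(@toyCost, initialTheta, options);
```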