1. Logistic Regression
In linear regression, we can use gradient descent to learn a mapping function $h_\theta(x)$ that fits the sample points; this function predicts a continuous value.
Logistic regression, by contrast, is an algorithm for solving classification problems. With it we learn a mapping $f: x \to y$, where $x$ is the feature vector, $x = \{x_0, x_1, x_2, ..., x_n\}$, and $y$ is the predicted result. In logistic regression, the label $y$ is a discrete value.
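To make the contrast concrete, here is a minimal sketch of the standard logistic regression hypothesis $h_\theta(x) = \sigma(\theta^T x)$, which turns a feature vector into a discrete label by thresholding a probability. The parameter values and feature vector below are invented purely for illustration.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: squashes any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def h(theta, x):
    """Logistic regression hypothesis: h_theta(x) = sigmoid(theta^T x)."""
    return sigmoid(np.dot(theta, x))

# Hypothetical parameters and a feature vector with x_0 = 1 (constant term).
theta = np.array([-1.0, 2.0, 0.5])
x = np.array([1.0, 0.8, 0.3])

p = h(theta, x)            # probability that y = 1
y = 1 if p >= 0.5 else 0   # discrete label obtained by thresholding at 0.5
print(p, y)
```

Unlike the continuous output of linear regression, the final prediction here is the discrete label $y \in \{0, 1\}$.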
2. Decision Boundary

When the samples of a training set are plotted in a diagram with their features as axes, it is often possible to find a decision boundary that separates the sample points into classes. For example:
Linear decision boundary:

Non-linear decision boundary:
In the diagrams, the samples carry two kinds of markers, one for positive samples and one for negative samples, with the features $x_0$ and $x_1$ as the axes. Each sample can be plotted on the graph according to its feature values.
In each diagram, a decision boundary can be found that separates the samples with different labels. Based on this decision boundary, we can tell which samples are positive and which are negative.
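To show how a boundary separates the two classes, here is a small sketch that labels points by the sign of a boundary function; both boundary equations (a line and a unit circle) are made up for illustration and are not taken from the figures above.

```python
import numpy as np

def linear_boundary(x):
    """Hypothetical linear boundary: x_0 + x_1 - 1 = 0."""
    return x[0] + x[1] - 1.0

def circular_boundary(x):
    """Hypothetical non-linear boundary: x_0^2 + x_1^2 - 1 = 0 (unit circle)."""
    return x[0] ** 2 + x[1] ** 2 - 1.0

points = [np.array([2.0, 2.0]), np.array([0.1, 0.2])]
for p in points:
    # The sign of the boundary function tells us which side of the
    # boundary a point lies on, i.e. which class it is assigned to.
    print(p,
          "positive" if linear_boundary(p) > 0 else "negative",
          "positive" if circular_boundary(p) > 0 else "negative")
```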
So we can learn an equation $e_\theta(x) = 0$ for the decision boundary; the decision boundary is precisely the set of points satisfying $e_\theta(x) = 0$. (It can be regarded as a hyperplane.)
$e_\theta(x) = x^T \theta$
where $\theta = \{\theta_0, \theta_1, \theta_2, ..., \theta_n\}$. To keep the constant term in $e_\theta(x) = 0$, set the feature vector to $x = \{1, x_1, x_2, ..., x_n\}$, i.e. fix $x_0 = 1$ so that $\theta_0$ acts as the constant term.
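The following sketch instantiates this boundary function directly: it prepends the constant $x_0 = 1$ to the raw features and evaluates $e_\theta(x) = x^T \theta$. The parameter values are again hypothetical.

```python
import numpy as np

theta = np.array([-1.0, 2.0, 0.5])   # theta = {theta_0, theta_1, theta_2}

def e(theta, features):
    """Boundary function e_theta(x) = x^T theta, with x_0 = 1 prepended
    so that theta_0 serves as the constant term."""
    x = np.concatenate(([1.0], features))
    return np.dot(x, theta)

# Points with e_theta(x) = 0 lie exactly on the decision boundary;
# the sign of e_theta(x) indicates which side a point falls on.
print(e(theta, np.array([0.8, 0.3])))
```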