According to Dr. Hang Li's summary, a statistical learning method consists of three elements: model + strategy + algorithm. For logistic regression, these are:
Model = a conditional probability model based on the sigmoid (logistic) function
Strategy = maximum likelihood over the training samples, which corresponds to minimizing the empirical (log) loss
Algorithm = stochastic gradient ascent
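For concreteness, the three elements can be written out as follows (writing \(\pi(x)\) for the sigmoid output):

```latex
% Model: conditional probability given by the sigmoid function
P(y=1 \mid x) = \pi(x) = \frac{1}{1 + e^{-w \cdot x}}, \qquad P(y=0 \mid x) = 1 - \pi(x)

% Strategy: maximize the log-likelihood of the training samples
L(w) = \sum_{i=1}^{m} \left[ y_i \log \pi(x_i) + (1 - y_i) \log\bigl(1 - \pi(x_i)\bigr) \right]

% Algorithm: stochastic gradient ascent, one sample at a time (learning rate eta)
w \leftarrow w + \eta \, \bigl(y_i - \pi(x_i)\bigr) \, x_i
```

The update rule in the last line is exactly the weight update that appears in the code below.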
The MATLAB code for logistic regression is fairly simple: loop over all the samples and apply the stochastic gradient ascent update, as shown below.
<span style= "FONT-SIZE:18PX;" >function [W]=logisticregression (x,y,learningrate,maxepoch)% Logistic regression% x, y row is a sample, Y value {0,1}% random gradient method [M,n ]=size (x); X=[ones (m,1) X];w=zeros (1,n+1); for Epoch=1:maxepoch for samlendex=1:m w=w+learningrate* (Y ( Samlendex) -1/(1+exp (-X (Samlendex,:) *w ')) *x (Samlendex,:); Endend</span>
For comparison with the perceptron covered in an earlier post, the perceptron code is at http://blog.csdn.net/zhangzhengyi03539/article/details/46565739
Put the two functions in the same folder and run the following test code:
clear; clc;
x = [3,3; 4,3; 1,1];
y = [1, 1, -1];                       % perceptron labels in {-1, +1}
pindex = find(y > 0);
tindex = 1:length(y);
nindex = setdiff(tindex, pindex);
plot(x(pindex,1), x(pindex,2), '*'); hold on;
plot(x(nindex,1), x(nindex,2), 'p');
[w] = perceptionlearn(x, y, 1, 20);
xline = linspace(-2, 5, 20);
yline = -w(2)/w(3) .* xline - w(1)/w(3);
plot(xline, yline, 'r');
y = [1, 1, 0];                        % logistic regression labels in {0, 1}
[w] = logisticregression(x, y, 0.1, 2000);
xline = linspace(-2, 5, 20);
yline = -w(2)/w(3) .* xline - w(1)/w(3);
plot(xline, yline, 'g');
legend('Positive sample', 'Negative sample', 'Perceptron', 'Logistic regression');
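As a quick sanity check (a sketch: it reuses the x and w left in the workspace by the test script, and the names xa, p, and labels are mine), the learned weights can be turned into class predictions by thresholding the sigmoid output at 0.5:

```matlab
% Prepend the bias column, matching what logisticregression does internally
xa = [ones(size(x,1), 1) x];
p = 1 ./ (1 + exp(-xa * w'));   % estimated P(y=1 | x) for each sample
labels = p > 0.5;               % predicted class in {0, 1}
```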
Running it produces a plot comparing the two decision boundaries.
In general, logistic regression gives better results than the perceptron, which can be understood by comparing their strategies. The perceptron's strategy is to push misclassified samples closer to the separating hyperplane, and it stops as soon as every sample is classified correctly. Logistic regression's strategy is maximum likelihood over the training samples, under the assumed sigmoid-based probability model.
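The difference in strategy can be put side by side: the perceptron's loss involves only the misclassified samples (and is zero once the data are separated, so any separating hyperplane will do), while logistic regression keeps adjusting w based on every sample:

```latex
% Perceptron: minimize distance of misclassified points M to the hyperplane
% (labels y_i in {-1, +1})
\min_w \; -\sum_{x_i \in M} y_i \,(w \cdot x_i)

% Logistic regression: maximize the likelihood over ALL samples
% (labels y_i in {0, 1})
\max_w \; \sum_{i=1}^{m} \left[ y_i \log \pi(x_i) + (1 - y_i) \log\bigl(1 - \pi(x_i)\bigr) \right]
```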
As for why the sigmoid function is used, that will be derived in a later post on generalized linear models: the function falls out of the derivation, rather than being picked simply because its shape fits the requirements. ^_^
Thanks for reading, and stay tuned.
Logistic regression algorithm (MATLAB)