Linear Regression: linear regression and Python code


Linear regression is the most typical regression problem: the target value is assumed to have a linear relationship with the features. It is closely related to logistic regression, which builds on linear regression by mapping the real-valued output to the interval (0, 1) and applying a threshold, thereby turning the regression into a binary (0-1) classification.

The linear regression model is $y = Xw$, where $y$ is an $n \times 1$ vector, $X$ is an $n \times m$ matrix, and $w$ is an $m \times 1$ coefficient vector. Linear regression uses the squared loss function; the reason is the least-squares idea: if the residuals are assumed to follow a normal distribution, the maximum-likelihood estimate of $w$ is exactly the minimizer of the squared loss (a short derivation is sketched below).

Linear regression loss function: $l_w=\sum_{i=1}^{n}\left(y_i - x_i w\right)^2$, where $x_i$ is a $1 \times m$ vector and $w$ is an $m \times 1$ vector.
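
The maximum-likelihood justification mentioned above can be sketched in a few lines; this is the standard derivation, under the usual assumption that the residuals $\varepsilon_i$ are i.i.d. Gaussian with variance $\sigma^2$:

\[y_i = x_i w + \varepsilon_i, \qquad \varepsilon_i \sim N\left(0, \sigma^2\right)\]

\[\ln L(w) = \sum_{i=1}^{n} \ln \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{\left(y_i - x_i w\right)^2}{2\sigma^2}\right) = -\frac{n}{2}\ln\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}\left(y_i - x_i w\right)^2\]

Maximizing $\ln L(w)$ over $w$ therefore amounts to minimizing $\sum_{i=1}^{n}\left(y_i - x_i w\right)^2$, i.e. the squared loss above.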

There are several ways to fit a linear regression, such as the direct least-squares (closed-form) solution, gradient descent, and Newton's method.

1. Least Squares

The direct least-squares method obtains the coefficient vector $w$ in closed form through matrix manipulation; the process is as follows.

The prediction function is $Y = Xw$, and its loss function can be written as $\left(Y - Xw\right)^{T}\left(Y - Xw\right)$.

Differentiating with respect to $w$ gives $\frac{d}{dw}\left(Y - Xw\right)^{T}\left(Y - Xw\right) = -2X^{T}\left(Y - Xw\right)$; the derivation requires some knowledge of matrix calculus.

Setting this derivative to zero yields the normal equation, whose solution is $w = \left(X^{T}X\right)^{-1}X^{T}Y$.
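
As a quick sanity check on the closed-form solution, here is a minimal NumPy sketch on synthetic data (the data and variable names are illustrative, not from the original post):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # n = 100 samples, m = 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)    # targets with a little noise

# normal equation: w = (X^T X)^{-1} X^T y, solved without forming the explicit inverse
w = np.linalg.solve(X.T @ X, X.T @ y)
print(w)                                       # should be close to true_w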

2. Gradient Descent Method

Linear regression loss function: $l_w=\sum_{i=1}^{n}\left(y_i - x_i w\right)^2$. To find its minimum, take the partial derivative with respect to the parameter $w$:

\[\frac{\partial l_w}{\partial w} = -2\sum_{i=1}^{n}\left(y_i - x_i w\right)x_i^{T}\]

From the above, the gradient of the parameters has the same form as in logistic regression: each sample's residual is multiplied by that sample's feature values, and the gradient of the $j$-th parameter is the sum of this product over the $j$-th feature of all samples. The constant factor 2 can be absorbed into the learning rate.
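
To make the update rule concrete before the full listing below, here is a minimal NumPy sketch of a single batch gradient-descent step (array-based; the function name and learning rate are illustrative, and the constant factor 2 is folded into alpha, as the full code below also does):

import numpy as np

def gradient_step(X, y, w, alpha=0.01):
    # residuals: predictions minus targets, shape (n,)
    err = X @ w - y
    # gradient of the squared loss w.r.t. w: for feature j, sum over samples of err_i * x_ij
    grad = X.T @ err
    return w - alpha * grad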

Python code for linear regression by least squares and by gradient descent is given below:

# -*- coding: utf-8 -*-
"""
Created on Fri Jan 13:29:14 2018
@author: zhang
"""
import numpy as np
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt
from sklearn.cross_validation import train_test_split
from sklearn import preprocessing

"""
Multivariate linear regression requires each feature to be standardized: in the
gradient of w_j, the residual of each sample is multiplied by that sample's j-th
attribute value and then summed, so if the attribute scales differ too much the
coefficients will not converge.
"""

# solve the weight coefficients directly by the least squares method
def least_square(train_x, train_y):
    """Input: training data (samples * attributes) and labels"""
    weights = (train_x.T * train_x).I * train_x.T * train_y
    return weights

# batch gradient descent
def gradient_descent(train_x, train_y, maxcycle, alpha):
    numSamples, numFeatures = np.shape(train_x)
    weights = np.zeros((numFeatures, 1))
    for i in range(maxcycle):
        h = train_x * weights          # predictions for all samples
        err = h - train_y              # residuals
        weights = weights - (alpha * err.T * train_x).T
    return weights

# stochastic gradient descent: update with one sample at a time
def stochastic_gradient_descent(train_x, train_y, maxcycle, alpha):
    numSamples, numFeatures = np.shape(train_x)
    weights = np.zeros((numFeatures, 1))
    for i in range(maxcycle):
        for j in range(numSamples):
            h = train_x[j, :] * weights
            err = h - train_y[j, 0]
            weights = weights - (alpha * err.T * train_x[j, :]).T
    return weights

def load_data():
    boston = load_boston()
    data = boston.data
    label = boston.target
    return data, label

def show_results(predict_y, test_y):
    # the data for plotting needs to be an array, not a matrix
    plt.scatter(np.array(test_y), np.array(predict_y), marker='x', c='red')
    plt.plot(np.arange(0, 50), np.arange(0, 50))
    plt.xlabel("original_label")
    plt.ylabel("predict_label")
    plt.title("LinearRegression")
    plt.show()

if __name__ == "__main__":
    data, label = load_data()
    data = preprocessing.normalize(data.T).T
    train_x, test_x, train_y, test_y = train_test_split(data, label, train_size=0.75, random_state=33)
    train_x = np.mat(train_x)
    test_x = np.mat(test_x)
    train_y = np.mat(train_y).T    # labels come back with shape (n,); turn into a column vector
    test_y = np.mat(test_y).T

    # weights = least_square(train_x, train_y)
    # predict_y = test_x * weights
    # show_results(predict_y, test_y)

    weights = gradient_descent(train_x, train_y, 1000, 0.01)
    predict_y = test_x * weights
    show_results(predict_y, test_y)

    # weights = stochastic_gradient_descent(train_x, train_y, 1000, 0.01)
    # predict_y = test_x * weights
    # show_results(predict_y, test_y)
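
Note that the listing above targets an older scikit-learn: in current versions train_test_split lives in sklearn.model_selection, and the load_boston dataset has been removed, so another regression dataset has to be substituted. As a cross-check, the same kind of fit can be done with scikit-learn's built-in estimator; a minimal sketch (using make_regression as a stand-in dataset, with illustrative parameters):

from sklearn.datasets import make_regression            # stand-in for load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=500, n_features=13, noise=10.0, random_state=33)
train_x, test_x, train_y, test_y = train_test_split(X, y, train_size=0.75, random_state=33)

model = LinearRegression()                               # ordinary least squares under the hood
model.fit(train_x, train_y)
print(model.coef_, model.intercept_)
print(model.score(test_x, test_y))                       # R^2 on the held-out split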
