Multivariable Linear Regression (Multivariate Linear Regression)
Exercise link: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=DeepLearning&doc=exercises/ex3/ex3.html
In this multivariate linear regression problem, the input feature vector x is two-dimensional: one dimension is the living area of the house and the other is the number of rooms. The output y is the price of the house.
This time the task is to find an appropriate learning rate and number of iterations on your own.
An appropriate learning rate is chosen by inspecting the descent curve of the loss function J(θ).
The loss function is calculated as

J(θ) = (1/(2m)) · Σ_{i=1}^{m} (θᵀx⁽ⁱ⁾ − y⁽ⁱ⁾)²

where m is the number of training samples.
An appropriate descent curve for the loss function drops steeply at first and then flattens out as the parameters converge.
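A minimal sketch of how such curves can be compared in practice (assuming x and y have already been loaded, x normalized and augmented with an intercept column as in the full script below; the specific alpha values and the 50-iteration window are illustrative, not from the original):

alphas = [0.01, 0.03, 0.1, 0.3, 1.0];   % candidate learning rates to try
figure; hold on;
for k = 1:length(alphas)
    theta = zeros(size(x,2), 1);         % restart from zero for each alpha
    J = zeros(50, 1);
    for iter = 1:50
        J(iter) = 1/(2*m) * sum((x*theta - y).^2);        % batch cost
        theta = theta - alphas(k)/m * x' * (x*theta - y); % gradient step
    end
    plot(0:49, J, '-');                  % one descent curve per alpha
end
xlabel('Number of iterations'); ylabel('Cost J');
legend('0.01', '0.03', '0.1', '0.3', '1.0');

The largest learning rate whose curve still decreases smoothly (rather than oscillating or blowing up) is a good choice.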
With the online (iterative) method, the parameters converge to roughly the true values after about 200 iterations.
The offline method instead computes the parameters in closed form via the normal equations:

θ = (XᵀX)⁻¹ Xᵀ y

so the parameters of the regression model can be computed exactly.
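In MATLAB/Octave this is the single line used in the full script below:

theta2 = (x'*x)^(-1) * x'*y;   % normal equations: theta = (X'X)^{-1} X'y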
Note, as the author has repeatedly pointed out: when using the online method, the data in each dimension must be normalized first. This greatly helps the parameters converge and quickly brings them close to the true values.
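A minimal sketch of this per-dimension normalization, as done in the script below (column 1 is the intercept and is left untouched; columns 2 and 3 hold the area and the number of rooms):

sigma = std(x);                          % per-column standard deviation
mu = mean(x);                            % per-column mean
x(:,2) = (x(:,2) - mu(2)) ./ sigma(2);   % normalize living area
x(:,3) = (x(:,3) - mu(3)) ./ sigma(3);   % normalize number of rooms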
The offline algorithm, by contrast, solves the linear system exactly, so no data preprocessing is needed; the feature vector x only has to be augmented with an intercept term.
However, the offline algorithm requires inverting a matrix, so it is not suitable when the amount of data is large.
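As a side note (not part of the original exercise), MATLAB's backslash operator solves the same linear system without forming the inverse explicitly and is numerically more stable:

theta2 = (x'*x) \ (x'*y);   % solves (X'X)*theta = X'*y directly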
clc; clear all; close all;

x = load('ex3x.dat');   % load data
y = load('ex3y.dat');

%%%% -------------------- data preprocessing -------------------- %%%%
m = length(y);
x = [ones(m,1), x];     % augment x with the intercept term

%%%% ---- offline algorithm (normal equations) ---- %%%%
theta2 = (x'*x)^(-1) * x'*y;

%%%% ---- normalization for the online algorithm ---- %%%%
sigma = std(x);                          % per-column standard deviation
mu = mean(x);                            % per-column mean
x(:,2) = (x(:,2) - mu(2)) ./ sigma(2);   % normalize living area
x(:,3) = (x(:,3) - mu(3)) ./ sigma(3);   % normalize number of rooms

%%%% ---- online algorithm (batch gradient descent) ---- %%%%
theta = zeros(size(x(1,:)))';   % initialize fitting parameters
alpha = 0.08;                   % your initial learning rate
num_iters = 200;                % iteration count (the curve flattens by about 200)
J = zeros(num_iters, 1);        % initialize loss function history

for num_iterations = 1:num_iters
    J(num_iterations) = 1/(2*m) * sum((x*theta - y).^2);  % batch cost
    theta = theta - alpha/m .* x' * (x*theta - y);        % parameter update
end

% Now plot J.
% Technically, the first J starts at the zeroth iteration,
% but Matlab/Octave doesn't have a zero index.
figure;
plot(0:num_iters-1, J(1:num_iters), '-')
xlabel('Number of iterations')
ylabel('Cost J')

x_test = [1, 1650, 3];      % test sample: intercept, area, number of rooms
y2 = x_test * theta2        % offline algorithm prediction

x_test(2) = (x_test(2) - mu(2)) ./ sigma(2);   % normalize the test sample
x_test(3) = (x_test(3) - mu(3)) ./ sigma(3);   % with the training mu and sigma
y1 = x_test * theta         % online (iterative) algorithm prediction
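Running the script prints two predictions for a 1650 square-foot, 3-room house: y2 from the normal equations and y1 from gradient descent. If the learning rate and iteration count are chosen well, the two values should agree closely. Note that the test sample must be normalized with the same mu and sigma as the training data before it is multiplied by the gradient-descent parameters, while the normal-equation parameters theta2 are applied to the raw, unnormalized sample.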