Must do:
[*] warmUpExercise.m - simple example function in Octave/MATLAB
[*] plotData.m - function to display the dataset
[*] computeCost.m - function to compute the cost of linear regression
[*] gradientDescent.m - function to run gradient descent
1. warmUpExercise.m
A = eye(5);
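For reference, this single line is the body of the function stub the assignment provides; a minimal sketch of the completed file (the starter comments may differ slightly):

function A = warmUpExercise()
%WARMUPEXERCISE Example function in Octave/MATLAB
%   A = WARMUPEXERCISE() returns the 5x5 identity matrix

A = eye(5);   % eye(n) builds the n-by-n identity matrix

end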
2. plotData.m
plot(x, y, 'rx', 'MarkerSize', 10);         % Plot the data
ylabel('Profit in $10,000s');               % Set the y-axis label
xlabel('Population of City in 10,000s');    % Set the x-axis label
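A sketch of how plotData is typically driven from the main script, assuming the exercise's data file is named ex1data1.txt with population in the first column and profit in the second:

data = load('ex1data1.txt');   % assumed file name; comma-separated columns: population, profit
x = data(:, 1);                % population of a city in 10,000s
y = data(:, 2);                % profit in $10,000s
plotData(x, y);                % runs the plot commands shown above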
3. computeCost.m
function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

h = X * theta - y;
J = (1 / (2 * m)) * sum(h .* h);

% =========================================================================

end
Formula:

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

Note the use of MATLAB's element-wise operator .* here: h .* h squares each error individually before summing.
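To check that the vectorized expression matches the summation in the formula, here is a small hand-checkable example (the numbers are made up purely for illustration):

X = [1 1; 1 2; 1 3];                 % design matrix with an intercept column
y = [1; 2; 3];
theta = [0; 0];
m = 3;

h = X * theta - y;                   % prediction errors: [-1; -2; -3]
J_vec = (1 / (2*m)) * sum(h .* h);   % vectorized: (1 + 4 + 9) / 6 = 2.3333

% Equivalent loop form, accumulating squared errors one example at a time
J_loop = 0;
for i = 1:m
    J_loop = J_loop + (X(i, :) * theta - y(i))^2;
end
J_loop = J_loop / (2*m);             % also 2.3333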
4. gradientDescent.m
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.

    h = X * theta - y;
    theta(1) = theta(1) - alpha * (1/m) * sum(h .* X(:, 1));
    theta(2) = theta(2) - alpha * (1/m) * sum(h .* X(:, 2));

    % ============================================================

    % Save the cost J in every iteration
    J_history(iter) = computeCost(X, y, theta);

end

end
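A sketch of how the function is usually invoked from the main script, assuming the ex1data1.txt file mentioned above and the learning rate and iteration count suggested in the exercise (alpha = 0.01, 1500 iterations):

data = load('ex1data1.txt');               % assumed data file name
m = size(data, 1);                         % number of training examples
X = [ones(m, 1), data(:, 1)];              % add a column of ones for the intercept term
y = data(:, 2);
theta = zeros(2, 1);                       % initialize fitting parameters

alpha = 0.01;                              % learning rate
num_iters = 1500;                          % number of gradient steps

[theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters);

% If alpha is small enough, J_history should decrease on every iteration
plot(1:num_iters, J_history);
xlabel('Iteration');
ylabel('Cost J');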
Single-variable gradient descent

The partial derivative of the cost function J(\theta) with respect to \theta_j is

\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}

which in code corresponds to (1/m) * sum(h .* X(:, j)). Each theta(j) is moved in the direction of steepest descent, with alpha as the step size:

theta(j) = theta(j) - alpha * (1/m) * sum(h .* X(:, j));
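The two per-parameter updates in gradientDescent.m can also be collapsed into a single vectorized step that works for any number of features; a sketch (equivalent to the loop over theta(j), not a different algorithm):

h = X * theta - y;                        % column vector of prediction errors
theta = theta - (alpha / m) * (X' * h);   % X' * h computes sum(h .* X(:, j)) for every column j at once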
Coursera Machine Learning, Week 2 programming assignment: Linear Regression