Multivariate linear regression works much the same way as univariate linear regression, so we give the algorithms directly:
The `computeCostMulti` function:
```matlab
function J = computeCostMulti(X, y, theta)
  m = length(y); % number of training examples
  J = 0;
  predictions = X * theta;
  J = 1/(2*m) * (predictions - y)' * (predictions - y);
end
```
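For readers working in Python, the same vectorized cost can be sketched with NumPy (the name `compute_cost_multi` is my own; this mirrors the Octave routine, not an official API):

```python
import numpy as np

def compute_cost_multi(X, y, theta):
    """Vectorized squared-error cost J(theta) = 1/(2m) * sum((X*theta - y)^2)."""
    m = len(y)
    residuals = X @ theta - y        # prediction errors for all examples at once
    return (residuals @ residuals) / (2 * m)
```

With `X = [[1,1],[1,2],[1,3]]` and `y = [1,2,3]`, the parameters `theta = [0,1]` fit the data exactly, so the cost is zero.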
The `gradientDescentMulti` function:
```matlab
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
  m = length(y); % number of training examples
  J_history = zeros(num_iters, 1);
  feature_number = size(X, 2);
  temp = zeros(feature_number, 1);
  for iter = 1:num_iters
    % compute all updates from the old theta (simultaneous update)
    for i = 1:feature_number
      temp(i) = theta(i) - (alpha / m) * sum((X * theta - y) .* X(:,i));
    end
    for j = 1:feature_number
      theta(j) = temp(j);
    end
    J_history(iter) = computeCostMulti(X, y, theta);
  end
end
```
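A minimal NumPy sketch of the same batch gradient descent (names are my own; the per-feature loop is replaced by one matrix product, which computes all partial derivatives at once and therefore updates every parameter simultaneously, just like the `temp` buffer above):

```python
import numpy as np

def gradient_descent_multi(X, y, theta, alpha, num_iters):
    """Batch gradient descent; returns final theta and the cost at each iteration."""
    m = len(y)
    J_history = np.zeros(num_iters)
    for it in range(num_iters):
        gradient = X.T @ (X @ theta - y) / m   # all dJ/dtheta_j in one product
        theta = theta - alpha * gradient        # simultaneous update of all parameters
        J_history[it] = ((X @ theta - y) ** 2).sum() / (2 * m)
    return theta, J_history
```

On the toy data `y = 2x`, the iterates converge toward `theta = [0, 2]` and the recorded cost decreases monotonically.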
There are still a few differences, though. For example, feature scaling is needed before gradient descent begins:
```matlab
function [X_norm, mu, sigma] = featureNormalize(X)
  X_norm = X;
  mu = mean(X);    % per-feature mean
  sigma = std(X);  % per-feature standard deviation
  for i = 1:size(mu, 2)
    % note: plain '-' here; the Octave-only '.-' operator fails in MATLAB
    X_norm(:,i) = (X(:,i) - mu(i)) ./ sigma(i);
  end
end
```
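In NumPy the per-column loop disappears thanks to broadcasting (a sketch; `feature_normalize` is my own name, and `ddof=1` is chosen to match Octave's sample standard deviation):

```python
import numpy as np

def feature_normalize(X):
    """Z-score each column: subtract its mean, divide by its standard deviation."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)  # ddof=1 matches Octave's std (divides by m-1)
    return (X - mu) / sigma, mu, sigma
```

After normalization every column has mean 0 and (sample) standard deviation 1, so features on very different scales no longer distort the gradient descent steps.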
Implementing the Normal Equation:
```matlab
function [theta] = normalEqn(X, y)
  theta = zeros(size(X, 2), 1);
  theta = pinv(X'*X) * X' * y;
end
```
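The closed-form solution translates almost verbatim to NumPy (`normal_eqn` is my own name for this sketch; `np.linalg.pinv` plays the role of Octave's `pinv`):

```python
import numpy as np

def normal_eqn(X, y):
    """Closed-form least squares: theta = pinv(X'X) X' y, no iteration needed."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y
```

Unlike gradient descent, this needs no learning rate and no feature scaling; for data generated by `y = 1 + 2x` it recovers `theta = [1, 2]` directly.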