UFLDL Learning Notes and Programming Assignments: Convolutional Neural Networks





UFLDL has released a new tutorial, which I find better than the old one: it starts from the basics, is systematic and clear, and includes programming exercises.

In a deep learning discussion group I heard some experienced people say that there is no need to dig into other machine learning algorithms first; you can go straight to learning DL.

So I recently started working through it. The tutorial plus the MATLAB programming exercises is a perfect combination.

The address of the new tutorial is: http://ufldl.stanford.edu/tutorial/


This section's page: http://ufldl.stanford.edu/tutorial/supervised/ConvolutionalNeuralNetwork/


I had not updated my UFLDL study notes for a while, because this code failed when I ran it with Octave. I checked the code and could not find anything wrong, so I planned to try it in MATLAB instead, but kept putting that off. Today I finally installed MATLAB and, sure enough, it ran successfully.


In fact, there is nothing special about a convolutional neural network: the connections in a convolutional layer can simply be viewed as local connections, as the sketch below illustrates.
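
To make that concrete, here is a minimal MATLAB sketch (toy sizes and variable names are my own, not from the exercise) showing that each unit of a 'valid' convolution is connected only to a filterDim x filterDim patch of the image, with the same weights shared at every position:

% Convolution as a locally connected layer with shared weights (illustrative only).
imageDim  = 8;            % toy image size
filterDim = 3;            % toy filter size
img = rand(imageDim);     % random "image"
W   = rand(filterDim);    % one shared filter
b   = 0.1;                % its bias

% Full feature map: conv2 flips the kernel, so pre-flip with rot90 to get a correlation.
featureMap = 1 ./ (1 + exp(-(conv2(img, rot90(W,2), 'valid') + b)));

% The unit at (r,c) depends only on the local patch img(r:r+filterDim-1, c:c+filterDim-1).
r = 2; c = 5;
patch = img(r:r+filterDim-1, c:c+filterDim-1);
localUnit = 1 ./ (1 + exp(-(sum(sum(W .* patch)) + b)));

fprintf('difference: %g\n', abs(featureMap(r,c) - localUnit)); % ~0, up to floating point error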


Here's the main code:

cnnCost.m

function [cost, grad, preds] = cnnCost(theta, images, labels, numClasses, ...
                                filterDim, numFilters, poolDim, pred)
% Calculate cost and gradient for a single-layer convolutional
% neural network followed by a softmax layer with cross-entropy
% objective.
%
% Parameters:
%  theta      -  unrolled parameter vector
%  images     -  stores images in imageDim x imageDim x numImages array
%  numClasses -  number of classes to predict
%  filterDim  -  dimension of convolutional filter
%  numFilters -  number of convolutional filters
%  poolDim    -  dimension of pooling area
%  pred       -  boolean, only forward propagate and return predictions
%
% Returns:
%  cost       -  cross-entropy cost
%  grad       -  gradient with respect to theta (if pred==false)
%  preds      -  list of predictions for each example (if pred==true)

if ~exist('pred','var')
    pred = false;
end;

weightDecay = 0.0001;

imageDim = size(images,1);  % height/width of image
numImages = size(images,3); % number of images

%% Reshape parameters and setup gradient matrices

% Wc is filterDim x filterDim x numFilters parameter matrix (convolution weights)
% bc is the corresponding bias
% Wd is numClasses x hiddenSize parameter matrix, where hiddenSize is the
% number of output units of the convolutional layer
% (here the "convolutional layer" includes both the convolution and the pooling step)
% bd is the corresponding bias
[Wc, Wd, bc, bd] = cnnParamsToStack(theta, imageDim, filterDim, numFilters, ...
                        poolDim, numClasses);

% Same sizes as Wc, Wd, bc, bd. Used to hold gradients w.r.t. the above params.
Wc_grad = zeros(size(Wc));
Wd_grad = zeros(size(Wd));
bc_grad = zeros(size(bc));
bd_grad = zeros(size(bd));

%%======================================================================
%% STEP 1a: Forward Propagation
%  In this step you will forward propagate the input through the
%  convolutional and subsampling (mean pooling) layers. You will then use
%  the responses from the convolution and pooling layer as the input to a
%  standard softmax layer.

%% Convolutional Layer
%  For each image and each filter, convolve the image with the filter, add
%  the bias and apply the sigmoid nonlinearity. Then subsample the
%  convolved activations with mean pooling. Store the results of the
%  convolution in activations and the results of the pooling in
%  activationsPooled. You will need to save the convolved activations for
%  backpropagation.
convDim = imageDim - filterDim + 1; % dimension of convolved output
outputDim = convDim / poolDim;      % dimension of subsampled output

% convDim x convDim x numFilters x numImages tensor for storing activations
activations = zeros(convDim, convDim, numFilters, numImages);

% outputDim x outputDim x numFilters x numImages tensor for storing
% subsampled activations
activationsPooled = zeros(outputDim, outputDim, numFilters, numImages);

%%% YOUR CODE HERE %%%
% Call the two functions written earlier
activations = cnnConvolve(filterDim, numFilters, images, Wc, bc);
activationsPooled = cnnPool(poolDim, activations);

% Reshape activations into a 2-d matrix, hiddenSize x numImages,
% for the softmax layer; from here on it is a standard softmax model.
activationsPooled = reshape(activationsPooled, [], numImages);

%% Softmax Layer
%  Forward propagate the pooled activations calculated above into a
%  standard softmax layer. For your convenience we have reshaped
%  activationsPooled into a hiddenSize x numImages matrix. Store the
%  results in probs.

% numClasses x numImages for storing the probability that each image
% belongs to each class.
probs = zeros(numClasses, numImages);

%%% YOUR CODE HERE %%%
z = Wd*activationsPooled;
z = bsxfun(@plus, z, bd);
%z = Wd*activationsPooled + repmat(bd, [1, numImages]);
z = bsxfun(@minus, z, max(z,[],1)); % subtract the column maximum to avoid overflow
z = exp(z);
probs = bsxfun(@rdivide, z, sum(z,1));
preds = probs;

%%======================================================================
%% STEP 1b: Calculate Cost
%  In this step you will use the labels given as input and the probs
%  calculated above to evaluate the cross-entropy objective. Store your
%  results in cost.

cost = 0; % save objective into cost

%%% YOUR CODE HERE %%%
logProbs = log(probs);
% Linear indices into logProbs: rows given by labels, columns by
% 1:size(logProbs,2); the indices are returned in labelIndex.
labelIndex = sub2ind(size(logProbs), labels', 1:size(logProbs,2));
values = logProbs(labelIndex);
cost = -sum(values);
weightDecayCost = (weightDecay/2) * (sum(Wd(:).^2) + sum(Wc(:).^2));
cost = cost/numImages + weightDecayCost;
% Make sure to scale your gradients by the inverse size of the training set
% if you included this scale in the cost calculation, otherwise your code
% will not pass the numerical gradient check.

% Make predictions given probs and return without backpropagating errors.
if pred
    [~, preds] = max(probs, [], 1);
    preds = preds';
    grad = 0;
    return;
end;

%%======================================================================
%% STEP 1c: Backpropagation
%  Backpropagate errors through the softmax and convolutional/subsampling
%  layers. Store the errors for the next step to calculate the gradient.
%  Backpropagating the error w.r.t. the softmax layer is as usual. To
%  backpropagate through the pooling layer, you will need to upsample the
%  error with respect to the pooling layer for each filter and each image.
%  Use the kron function and a matrix of ones to do this upsampling quickly.

%%% YOUR CODE HERE %%%
% softmax residuals
targetMatrix = zeros(size(probs));
targetMatrix(labelIndex) = 1;
softmaxError = probs - targetMatrix;

% pooling layer residuals
poolError = Wd' * softmaxError;
poolError = reshape(poolError, outputDim, outputDim, numFilters, numImages);

unpoolError = zeros(convDim, convDim, numFilters, numImages);
unpoolingFilter = ones(poolDim);
poolArea = poolDim*poolDim;

% expand poolError into unpoolError
for imageNum = 1:numImages
    for filterNum = 1:numFilters
        e = poolError(:, :, filterNum, imageNum);
        unpoolError(:, :, filterNum, imageNum) = kron(e, unpoolingFilter) ./ poolArea;
    end
end

convError = unpoolError .* activations .* (1 - activations);

%%======================================================================
%% STEP 1d: Gradient Calculation
%  After backpropagating the errors above, we can use them to calculate the
%  gradient with respect to all the parameters. The gradient w.r.t. the
%  softmax layer is calculated as usual. To calculate the gradient w.r.t.
%  a filter in the convolutional layer, convolve the backpropagated error
%  for that filter with each image and aggregate over images.

%%% YOUR CODE HERE %%%
% softmax gradients: (layer l+1 residuals) * (layer l activations)
Wd_grad = (1/numImages) .* softmaxError * activationsPooled' + weightDecay * Wd;
bd_grad = (1/numImages) .* sum(softmaxError, 2);

% gradients of the convolutional layer
bc_grad = zeros(size(bc));
Wc_grad = zeros(size(Wc));

% compute bc_grad
for filterNum = 1:numFilters
    e = convError(:, :, filterNum, :);
    bc_grad(filterNum) = (1/numImages) .* sum(e(:));
end

% flip convError so that conv2 performs a correlation
for filterNum = 1:numFilters
    for imageNum = 1:numImages
        e = convError(:, :, filterNum, imageNum);
        convError(:, :, filterNum, imageNum) = rot90(e, 2);
    end
end

for filterNum = 1:numFilters
    Wc_gradFilter = zeros(size(Wc_grad, 1), size(Wc_grad, 2));
    for imageNum = 1:numImages
        Wc_gradFilter = Wc_gradFilter + conv2(images(:, :, imageNum), ...
            convError(:, :, filterNum, imageNum), 'valid');
    end
    Wc_grad(:, :, filterNum) = (1/numImages) .* Wc_gradFilter;
end
Wc_grad = Wc_grad + weightDecay * Wc;

%% Unroll gradient into grad vector for minFunc
grad = [Wc_grad(:); Wd_grad(:); bc_grad(:); bd_grad(:)];

end
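
The least obvious step in the backpropagation above is upsampling the pooling-layer error with kron. Here is a standalone sketch of what that inner loop does for a single filter and image, with toy sizes and my own variable names: mean pooling averaged each poolDim x poolDim block into one unit, so on the way back each pooled error is spread evenly over its block (before being multiplied by the sigmoid derivative).

% Standalone sketch of the kron-based upsampling used in STEP 1c (toy sizes, illustrative only).
poolDim   = 2;
poolError = [1 2; 3 4];   % error at a 2 x 2 grid of pooled units

% Spread each pooled error evenly over its poolDim x poolDim block.
unpooled = kron(poolError, ones(poolDim)) ./ (poolDim*poolDim);

disp(unpooled)
%     0.2500    0.2500    0.5000    0.5000
%     0.2500    0.2500    0.5000    0.5000
%     0.7500    0.7500    1.0000    1.0000
%     0.7500    0.7500    1.0000    1.0000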


minFuncSGD.m

function [opttheta] = minFuncSGD(funObj, theta, data, labels, ...
                        options)
% Runs stochastic gradient descent with momentum to optimize the
% parameters for the given objective.
%
% Parameters:
%  funObj     -  function handle which accepts as input theta,
%                data, labels and returns cost and gradient w.r.t.
%                theta.
%  theta      -  unrolled parameter vector
%  data       -  stores data in m x n x numExamples tensor
%  labels     -  corresponding labels in numExamples x 1 vector
%  options    -  struct to store specific options for optimization
%
% Returns:
%  opttheta   -  optimized parameter vector
%
% Options (* required)
%  epochs*    -  number of epochs through data
%  alpha*     -  initial learning rate
%  minibatch* -  size of minibatch
%  momentum   -  momentum constant, defaults to 0.9

%%======================================================================
%% Setup
assert(all(isfield(options, {'epochs','alpha','minibatch'})), ...
        'Some options not defined');
if ~isfield(options, 'momentum')
    options.momentum = 0.9;
end;
epochs = options.epochs;
alpha = options.alpha;
minibatch = options.minibatch;
m = length(labels); % training set size
% Setup for momentum
mom = 0.5;
momIncrease = 20;
velocity = zeros(size(theta));

%%======================================================================
%% SGD loop
it = 0;
for e = 1:epochs

    % randomly permute indices of data for quick minibatch sampling
    rp = randperm(m);

    for s = 1:minibatch:(m-minibatch+1)
        it = it + 1;

        % increase momentum after momIncrease iterations
        if it == momIncrease
            mom = options.momentum;
        end;

        % get next randomly selected minibatch
        mb_data = data(:, :, rp(s:s+minibatch-1));
        mb_labels = labels(rp(s:s+minibatch-1));

        % evaluate the objective function on the next minibatch
        [cost grad] = funObj(theta, mb_data, mb_labels);

        % Instructions: Add in the weighted velocity vector to the
        % gradient evaluated above scaled by the learning rate.
        % Then update the current weights theta according to the
        % SGD update rule.

        %%% YOUR CODE HERE %%%
        velocity = mom*velocity + alpha*grad;
        theta = theta - velocity;

        fprintf('Epoch %d: Cost on iteration %d is %f\n', e, it, cost);
    end;

    % anneal learning rate by factor of two after each epoch
    alpha = alpha/2.0;
end;

opttheta = theta;
end
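
For reference, here is a minimal sketch of how minFuncSGD can be hooked up to cnnCost through an anonymous function; the hyperparameter values are only illustrative, not the settings from my run.

% Illustrative call only; the hyperparameter values are examples, not prescriptions.
options.epochs    = 3;
options.minibatch = 256;
options.alpha     = 1e-1;
options.momentum  = 0.95;

% theta is the unrolled initial parameter vector; images/labels are the training set.
% cnnCost is wrapped so that minFuncSGD only has to pass (theta, data, labels).
opttheta = minFuncSGD(@(x,y,z) cnnCost(x, y, z, numClasses, filterDim, ...
                      numFilters, poolDim), theta, images, labels, options);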


Run result:





Author: linger

Original post: http://blog.csdn.net/lingerlanlan/article/details/41390443



