Stanford UFLDL Tutorial Exercise: Convolution and Pooling

Contents
1 Convolution and Pooling
 1.1 Dependencies
 1.2 Step 1: Load learned features
 1.3 Step 2: Implement and test convolution and pooling
  1.3.1 Step 2a: Implement convolution
  1.3.2 Step 2b: Check your convolution
  1.3.3 Step 2c: Pooling
  1.3.4 Step 2d: Check your pooling
 1.4 Step 3: Convolve and pool with the dataset
 1.5 Step 4: Use pooled features for classification
 1.6 Step 5: Test classifier
Convolution and Pooling

In this exercise, you will use the features learned on 8x8 patches sampled from images of the STL-10 dataset in the earlier exercise on linear decoders to classify images from a reduced STL-10 dataset, applying convolution and pooling. The reduced STL-10 dataset comprises 64x64 images from 4 classes (airplane, car, cat, dog).

In the file cnn_exercise.zip we have provided some starter code. You should write your code at the places indicated by "YOUR CODE HERE" in the files.

For this exercise, you will need to modify cnnConvolve.m and cnnPool.m.

Dependencies

The following additional files are required for this exercise: a subset of the STL-10 dataset (stlSubset.zip) and the starter code (cnn_exercise.zip).

You will also need:

  • sparseAutoencoderLinear.m, or your saved features, from Exercise: Learning color features with Sparse Autoencoders

  • feedForwardAutoencoder.m (and related functions) from Exercise: Self-taught learning

  • softmaxTrain.m (and related functions) from Exercise: Softmax Regression

If you have not completed the exercises listed above, we strongly suggest you complete them first.

Step 1: Load learned features

In this step, you will use the features from Exercise: Learning color features with Sparse Autoencoders. If you have completed that exercise, you can load the color features that were previously saved. To verify that the features are good, the visualized features should look like the following:

Step 2: Implement and test convolution and pooling

In this step, you will implement convolution and pooling, and test them on a small part of the dataset to ensure that you have implemented these two functions correctly. In the next step, you will actually convolve and pool the features with the STL-10 images.

Step 2a: Implement convolution

Implement convolution, as described in Feature extraction using convolution, in the function cnnConvolve in cnnConvolve.m. Implementing convolution is somewhat involved, so we will guide you through the process below.

First, we want to compute σ(Wx(r,c) + b) for all valid (r,c). Here "valid" means that the entire 8x8 patch is contained within the image; this is as opposed to a full convolution, which allows the patch to extend outside the image, with the area outside the image assumed to be 0. W and b are the learned weights and biases from the input layer to the hidden layer, and x(r,c) is the 8x8 patch with its upper-left corner at (r,c). To accomplish this, one naive method is to loop through all such patches and compute σ(Wx(r,c) + b) for each of them; while this is fine in theory, it can be very slow. Hence, we usually use MATLAB's built-in convolution functions, which are well optimized.

Observe that the convolution above can be broken down into three small steps. First, compute Wx(r,c) for all (r,c). Next, add b to all the computed values. Finally, apply the sigmoid function to the resulting values. This doesn't seem to buy you anything, since the first step still requires a loop. However, you can replace the loop in the first step with one of MATLAB's optimized convolution functions, conv2, speeding up the process significantly.

However, there are two important points to note in using conv2.

First, conv2 performs a 2-D convolution, but you have 5 "dimensions" (image number, feature number, image row, image column, and color channel) that you want to convolve over. Because of this, you will have to convolve each feature and image channel separately for each image, using the row and column of the image as the 2 dimensions you convolve over. This means that you will need three outer loops, over the image number imageNum, the feature number featureNum, and the channel number of the image channel. Inside the three nested for-loops, you will perform a conv2 2-D convolution, using the weight matrix for the featureNum-th feature and channel-th channel, and the image matrix for the imageNum-th image.
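The loop structure can be sketched as follows. This is an illustrative NumPy/SciPy version, not the exercise's MATLAB code: scipy.signal.convolve2d stands in for conv2, the sizes are hypothetical small ones, and the kernel is pre-flipped for the reason explained in the implementation tip further below.

```python
import numpy as np
from scipy.signal import convolve2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 2 images of 8x8 with 3 channels, 4 features on 3x3 patches.
rng = np.random.default_rng(0)
num_images, num_features, image_dim, patch_dim, num_channels = 2, 4, 8, 3, 3
images = rng.standard_normal((image_dim, image_dim, num_channels, num_images))
W = rng.standard_normal((num_features, patch_dim * patch_dim * num_channels))
b = rng.standard_normal(num_features)

conv_dim = image_dim - patch_dim + 1            # "valid" output size
convolved = np.zeros((num_features, num_images, conv_dim, conv_dim))

for image_num in range(num_images):
    for feature_num in range(num_features):
        acc = np.zeros((conv_dim, conv_dim))
        for channel in range(num_channels):
            # Weights for this feature and channel, as a patch_dim x patch_dim kernel.
            feat = W[feature_num].reshape(patch_dim, patch_dim, num_channels)[:, :, channel]
            # Pre-flip so the convolution's built-in flip cancels out,
            # giving the cross-correlation Wx(r,c) that we actually want.
            acc += convolve2d(images[:, :, channel, image_num],
                              np.flipud(np.fliplr(feat)), mode='valid')
        # Add the bias and apply the sigmoid to every valid position.
        convolved[feature_num, image_num] = sigmoid(acc + b[feature_num])
```

Summing the per-channel conv2 results inside the innermost loop is what combines the color channels into one feature activation map.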

Second, because of the mathematical definition of convolution, the feature matrix must be "flipped" before passing it to conv2. The following implementation tip explains the "flipping" of feature matrices when using MATLAB's convolution functions:

Implementation tip: Using conv2 and convn

Because the mathematical definition of convolution involves "flipping" the matrix to convolve with (reversing its rows and its columns), to use MATLAB's convolution functions you must first "flip" the weight matrix, so that when MATLAB "flips" it according to the mathematical definition the entries end up in the correct places. For example, suppose you wanted to convolve two matrices, image (a large image) and W (the feature), using conv2(image, W), and W is a 3x3 matrix as below:

If you use conv2(image, W), MATLAB would first "flip" W, reversing its rows and columns, before convolving W with image, as below:

If the original layout of W is correct, then after flipping it would be incorrect. For the layout to be correct after flipping, you will have to flip W before passing it into conv2, so that after MATLAB flips W inside conv2 the layout is correct. For conv2, this means reversing the rows and columns, which can be done with flipud and fliplr, as shown below:

% Flip W for use in conv2
W = flipud(fliplr(W));
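To see why the flip is needed, the following sketch checks that convolving with the pre-flipped W reproduces the un-flipped cross-correlation Wx(r,c). SciPy's convolve2d and correlate2d are used here as stand-ins for MATLAB's conv2 and plain cross-correlation; the sizes are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(1)
image = rng.standard_normal((6, 6))   # a small "image"
W = rng.standard_normal((3, 3))       # a 3x3 feature

# The flipud(fliplr(W)) trick from the tip above.
flipped = np.flipud(np.fliplr(W))

# Convolution flips its kernel internally, so convolving with the
# pre-flipped W undoes that flip...
via_conv = convolve2d(image, flipped, mode='valid')
# ...and matches plain cross-correlation with the original W.
via_corr = correlate2d(image, W, mode='valid')
```

The two results agree entry for entry, which is exactly why cnnConvolve must flip the feature matrix before calling conv2.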

Next, to each of the convolvedFeatures, you should add b, the corresponding bias for the featureNum-th feature.

However, there is one additional complication. If we had not done any preprocessing of the input patches, you could just follow the procedure described above, apply the sigmoid function to obtain the convolved features, and be done. However, because you preprocessed the patches before learning features on them, you must also apply the same preprocessing steps to the convolved patches to get the correct feature activations.

In particular, you did the following to the patches:

  • subtract the mean patch, meanPatch, to zero the mean of the patches

  • ZCA whiten using the whitening matrix ZCAWhite

These same steps must also be applied to the input image patches.
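Both preprocessing steps are linear, so they can be folded into the convolution's weights and bias. The NumPy sketch below, with hypothetical sizes, checks that preprocessing each patch explicitly and then encoding it agrees with applying modified weights and a modified bias directly to the raw patch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n = 192                                   # 8*8*3, the flattened patch size
W = rng.standard_normal((10, n))          # learned weights (hypothetical count)
b = rng.standard_normal(10)               # learned biases
T = rng.standard_normal((n, n))           # whitening matrix (ZCAWhite)
mean_patch = rng.standard_normal(n)       # meanPatch
x = rng.standard_normal(n)                # a raw, unpreprocessed patch

# Preprocess then encode: sigma(W T (x - meanPatch) + b)
direct = sigmoid(W @ (T @ (x - mean_patch)) + b)

# Fold the preprocessing into the weights and bias:
# use WT as the weights and (b - WT meanPatch) as the bias, on the raw x.
WT = W @ T
b_eff = b - WT @ mean_patch
folded = sigmoid(WT @ x + b_eff)
```

The two computations produce identical activations, which is what lets cnnConvolve work on raw image patches without explicitly preprocessing each one.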

Taking the preprocessing steps into account, the feature activations that you should compute are σ(WT(x − x̄) + b), where T is the whitening matrix and x̄ is the mean patch. Expanding this, we obtain σ((WT)x + (b − WTx̄)), which suggests that you should convolve the images with WT rather than with W as earlier, and that you should add (b − WTx̄), rather than just b, to convolvedFeatures, before finally applying the sigmoid function.

Step 2b: Check your convolution

We have provided some code for you to check that you have done the convolution correctly. The code randomly checks the convolved values for a number of (feature, row, column) tuples by computing the feature activations using feedForwardAutoencoder for the selected features and patches directly using the sparse autoencoder.

Step 2c: Pooling

Implement pooling in the function cnnPool in cnnPool.m. You should implement mean pooling (i.e., averaging over feature responses) for this part.

Step 2d: Check your pooling
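As a reference when checking, mean pooling over non-overlapping regions can be sketched in NumPy (sizes here are hypothetical, not the exercise's actual ones). The reshape trick splits each axis into (region, offset) pairs and then averages away the offsets:

```python
import numpy as np

rng = np.random.default_rng(3)
pool_dim = 4
features = rng.standard_normal((8, 8))    # one convolved feature map

rows = features.shape[0] // pool_dim
cols = features.shape[1] // pool_dim

# Each pool_dim x pool_dim block becomes its own pair of axes (1 and 3),
# and taking the mean over those axes averages within each block.
pooled = features.reshape(rows, pool_dim, cols, pool_dim).mean(axis=(1, 3))
```

An equivalent double loop over blocks with a mean of each slice gives the same result; the reshape form just avoids the explicit loop.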

We have provided some code for you to check that you have done the pooling correctly. The code runs cnnPool against a test matrix to see if it produces the expected result.

Step 3: Convolve and pool with the dataset

In this step, you will convolve each of the features you learned with the full 64x64 images from the STL-10 dataset to obtain the convolved features for both the training and test sets. You will then pool the convolved features to obtain the pooled features for both sets. The pooled features for the training set will be used to train your classifier, which you can then test on the test set.

Because the convolved features matrix is very large, the code provided does the convolution and pooling a batch of features at a time to avoid running out of memory.

Step 4: Use pooled features for classification

In this step, you will use the pooled features to train a softmax classifier that maps the pooled features to the class labels. The code in this section uses softmaxTrain from the softmax exercise to train a softmax classifier on the pooled features, which should take around a few minutes.

Step 5: Test classifier

Now that you have a trained softmax classifier, you can see how well it performs on the test set. The pooled features for the test set will be run through the softmax classifier, and the accuracy of the predictions will be computed. You should expect to get an accuracy of around 80%.

From:http://ufldl.stanford.edu/wiki/index.php/exercise:convolution_and_pooling
