Stanford UFLDL Tutorial Exercise: Self-Taught Learning

Contents
1. Overview
2. Dependencies
3. Step 1: Generate the input and test data sets
4. Step 2: Train the sparse autoencoder
5. Step 3: Extracting features
6. Step 4: Training and testing the logistic regression model
7. Step 5: Classifying on the test set
Overview

In this exercise, we'll use the self-taught learning paradigm with the sparse autoencoder and softmax classifier to build a classifier for handwritten digits.

You'll be building upon your code from the earlier exercises. First, you'll train your sparse autoencoder on an "unlabeled" training dataset of handwritten digits. This produces penstroke-like features. We then extract these learned features from a labeled dataset of handwritten digits. These features will then be used as inputs to the softmax classifier that you wrote in the previous exercise.

Concretely, for each example in the labeled training dataset, we forward propagate the example to obtain the activations of the hidden units. We now represent this example using a(2) (the "replacement" representation), and use this as the new feature representation with which to train the softmax classifier.

Finally, we also extract the same features from the test data to obtain predictions.

In this exercise, our goal is to distinguish between the digits 0 to 4. We'll use the digits 5 to 9 as our "unlabeled" dataset from which to learn the features; we'll then use a labeled dataset with the digits 0 to 4 with which to train the softmax classifier.
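The 0-4 / 5-9 split described above can be sketched as follows in Python/NumPy (the exercise itself is in MATLAB; the array names and random toy data here are illustrative stand-ins for the loaded MNIST images and labels):

```python
import numpy as np

# Toy stand-in for the MNIST data used in the exercise: each column of
# `images` is one flattened digit image, and `labels` holds digits 0..9.
rng = np.random.default_rng(0)
images = rng.random((784, 100))          # 784 pixels x 100 examples
labels = rng.integers(0, 10, size=100)   # digit labels 0..9

# Digits 5-9 form the "unlabeled" set (their labels are discarded);
# digits 0-4 form the labeled set used to train the softmax classifier.
unlabeled_data = images[:, labels >= 5]
labeled_data = images[:, labels <= 4]
labeled_labels = labels[labels <= 4]
```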

In the starter code, we have provided a file stlExercise.m that will help walk you through the steps of this exercise.

Dependencies

The following additional files are required for this exercise:

  • MNIST datasets
  • Support functions for loading MNIST in MATLAB
  • Starter code (stl_exercise.zip)

You'll also need your code from the following exercises:

  • Exercise: Sparse Autoencoder
  • Exercise: Vectorization
  • Exercise: Softmax Regression

If you have not completed the exercises listed above, we strongly suggest you complete them first.

Step 1: Generate the input and test data sets

Download and decompress stl_exercise.zip, which contains starter code for this exercise. Additionally, you'll need to download the datasets from the MNIST Handwritten Digit Database for this project.

Step 2: Train the sparse autoencoder

Next, use the unlabeled data (the digits from 5 to 9) to train a sparse autoencoder, using the same sparseAutoencoderCost.m function as you had written in the previous exercise. (From the earlier exercise, you should have a working and vectorized implementation of the sparse autoencoder.) For us, the training step took less than 25 minutes on a fast desktop. When training is complete, you should get a visualization of pen strokes like the image shown below:
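As a reminder of the objective that sparseAutoencoderCost.m evaluates, here is a minimal Python/NumPy sketch of the sparse autoencoder cost (squared-error reconstruction term, weight decay, and KL sparsity penalty). The function name, parameter layout, and default hyperparameters are illustrative, not the exercise's exact MATLAB interface:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, W2, b2, data, rho=0.01, lam=1e-4, beta=3.0):
    """Sketch of the sparse autoencoder objective. `data` has one example
    per column; W1/b1 map input -> hidden, W2/b2 map hidden -> output."""
    m = data.shape[1]
    a2 = sigmoid(W1 @ data + b1)      # hidden-layer activations a(2)
    a3 = sigmoid(W2 @ a2 + b2)        # reconstruction of the input
    rho_hat = a2.mean(axis=1)         # average activation of each hidden unit
    # KL divergence between the target sparsity rho and the observed rho_hat
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    cost = (0.5 / m) * np.sum((a3 - data) ** 2) \
         + (lam / 2) * (np.sum(W1 ** 2) + np.sum(W2 ** 2)) \
         + beta * kl
    return cost
```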

Informally, the features learned by the sparse autoencoder should correspond to penstrokes.

Step 3: Extracting features

After the sparse autoencoder is trained, you'll use it to extract features from the handwritten digit images.

Complete feedForwardAutoencoder.m to produce a matrix whose columns correspond to activations of the hidden layer for each example, i.e., the vector a(2) corresponding to the activation of layer 2. (Recall that we treat the inputs as layer 1.)
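In Python/NumPy terms, the computation is a single forward pass through the first layer (a sketch; the real MATLAB function unpacks W1 and b1 from the exercise's packed parameter vector):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward_autoencoder(W1, b1, data):
    """Compute the hidden-layer activations a(2) = sigmoid(W1 x + b1)
    for every column x of `data`; returns one feature column per example."""
    return sigmoid(W1 @ data + b1)
```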

After completing this step, calling feedForwardAutoencoder.m should convert the raw image data to hidden unit activations a(2).

Step 4: Training and testing the logistic regression model

Use your code from the softmax exercise (softmaxTrain.m) to train a softmax classifier using the training set features (trainFeatures) and labels (trainLabels).

Step 5: Classifying on the test set
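For intuition, here is a minimal Python/NumPy stand-in for what this training step does (softmaxTrain.m fits the same model with a more sophisticated optimizer in MATLAB; this sketch uses plain gradient descent, and all names and hyperparameters are illustrative):

```python
import numpy as np

def softmax_probs(theta, X):
    """Class probabilities for each column of X; theta is (numClasses, n)."""
    scores = theta @ X
    scores = scores - scores.max(axis=0)   # shift for numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=0)

def softmax_train(X, y, num_classes, lr=0.5, iters=200, lam=1e-4):
    """Fit softmax regression by gradient descent on the regularized
    negative log-likelihood. X: (n, m) feature columns, y: (m,) labels."""
    n, m = X.shape
    theta = np.zeros((num_classes, n))
    Y = np.eye(num_classes)[:, y]          # one-hot ground truth, (C, m)
    for _ in range(iters):
        P = softmax_probs(theta, X)
        grad = -(Y - P) @ X.T / m + lam * theta
        theta -= lr * grad
    return theta
```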

Finally, complete the code to make predictions on the test set (testFeatures) and see how your learned features perform! If you've done all the steps correctly, you should get an accuracy of about 98%.
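The prediction step amounts to picking, for each test example, the class with the highest softmax score; in Python/NumPy terms (a sketch of the role the prediction code plays in the exercise):

```python
import numpy as np

def softmax_predict(theta, X):
    """Highest-scoring class for each column of X. The softmax exponential
    is monotone, so argmax over the raw scores theta @ X suffices."""
    return np.argmax(theta @ X, axis=0)

# Accuracy on hypothetical test arrays would then be:
# accuracy = np.mean(softmax_predict(theta, test_features) == test_labels)
```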

As a comparison, when raw pixels are used (instead of the learned features), we obtained a test accuracy of only around 96% (for the same training and test sets).

From:http://ufldl.stanford.edu/wiki/index.php/exercise:self-taught_learning
