Exercise: Vectorization
Exercise page: Exercise: Vectorization
Note:
The pixel values of the MNIST images have already been normalized. If you re-apply the normalization from sampleIMAGES.m in Exercise: Sparse Autoencoder, the visualized weights learned during training look like this:
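For reference, the rescaling that sampleIMAGES.m performs in the earlier exercise can be sketched in NumPy. This is a sketch, not the exercise's actual code: the function name `normalize_patches` is mine, and the truncate-to-3-standard-deviations step follows that exercise's starter code.

```python
import numpy as np

def normalize_patches(patches):
    """Sketch of the sampleIMAGES.m normalization: remove the mean,
    truncate to +/- 3 standard deviations, rescale to [0.1, 0.9].
    (Function name and signature are illustrative, not from the exercise.)"""
    patches = patches - patches.mean()               # remove the DC component
    pstd = 3.0 * patches.std()
    patches = np.clip(patches, -pstd, pstd) / pstd   # now in [-1, 1]
    return (patches + 1.0) * 0.4 + 0.1               # map to [0.1, 0.9]
```

Applying this a second time to MNIST pixels (which are already in [0, 1]) re-centers and re-squashes them, which is why the learned filters look different from the plain-MNIST case.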
My implementation:
Change the parameter settings in train.m and select the training samples:
%% STEP 0: Here we provide the relevant parameter values that will
%  allow your sparse autoencoder to get good filters; you do not need to
%  change the parameters below.

visibleSize = 28*28;   % number of input units
hiddenSize = 196;      % number of hidden units
sparsityParam = 0.1;   % desired average activation of the hidden units.
                       % (This was denoted by the Greek letter rho, which
                       %  looks like a lower-case "p", in the lecture notes.)
lambda = 3e-3;         % weight decay parameter
beta = 3;              % weight of sparsity penalty term

%%======================================================================
%% STEP 1: Implement sampleIMAGES
%
%  After implementing sampleIMAGES, the display_network command should
%  display a random sample of 200 patches from the dataset

% MNIST images have already been normalized
images = loadMNISTImages('train-images.idx3-ubyte');
patches = images(:, 1:10000);
% display_network(patches(:, randi(size(patches,2), 200, 1)), 8);

% Obtain random parameters theta
theta = initializeParameters(hiddenSize, visibleSize);
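The point of this exercise is to replace per-example loops with matrix operations over all training examples at once. As an illustration of the idea (a NumPy sketch rather than the exercise's MATLAB code; the function names here are mine), a fully vectorized forward pass over all m patches is a pair of matrix products:

```python
import numpy as np

def sigmoid(z):
    # Elementwise logistic function, applied to a whole matrix at once.
    return 1.0 / (1.0 + np.exp(-z))

def forward_all(W1, b1, W2, b2, data):
    """Vectorized forward pass for a sparse autoencoder.
    data has shape (visibleSize, m): one column per example.
    One matrix product per layer replaces a loop over the m examples.
    (Function name and exact shapes are illustrative assumptions.)"""
    z2 = W1 @ data + b1[:, None]   # (hiddenSize, m)
    a2 = sigmoid(z2)
    z3 = W2 @ a2 + b2[:, None]     # (visibleSize, m)
    a3 = sigmoid(z3)
    return a2, a3
```

With visibleSize = 28*28 and 10,000 patches, `data` is a 784 x 10000 matrix, and a single call computes every example's activations; the unvectorized version would call the same arithmetic 10,000 times in a loop.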
Visualization of the trained W1 weights:
"Deeplearning" exercise:vectorization