MatConvNet Study Notes


Introduction to MatConvNet

  MatConvNet is a MATLAB toolbox that implements convolutional neural networks (CNNs) for computer vision. Since their breakthrough, CNNs have had a significant impact on computer vision, especially image understanding, where they have largely replaced traditional image representations. Many other machine-learning and deep-learning libraries implement CNNs; some of the most popular are cuda-convnet, Torch, Theano and Caffe. MatConvNet provides an environment that is especially friendly and efficient for researchers. It includes many CNN computational blocks, such as convolution, normalization and pooling, most of which are written in C++ or CUDA, and it allows users to write new blocks to improve computational efficiency. MatConvNet can train AlexNet and other large, deep CNN models, and pre-trained versions of these powerful models can be downloaded from the MatConvNet home page. Although powerful, MatConvNet is easy to use and install. The implementation is completely self-contained, requiring only MATLAB and a compatible C++ compiler (the GPU code additionally requires the freely available CUDA devkit and a suitable NVIDIA GPU).

[Note]: The version I downloaded is matconvnet-1.0-beta19, which can be obtained from the MatConvNet home page: http://www.vlfeat.org/matconvnet/

Getting Started

Compiling the CPU version of MatConvNet

Let's start with a simple but complete example showing how to download MatConvNet, compile it, download a pre-trained CNN model, and classify an image in MATLAB. The code can be obtained from the pre-trained models page of the MatConvNet home page: http://www.vlfeat.org/matconvnet/pretrained/

% install and compile MatConvNet (needed once)
untar('http://www.vlfeat.org/matconvnet/download/matconvnet-1.0-beta20.tar.gz') ;
cd matconvnet-1.0-beta20
run matlab/vl_compilenn

% download a pre-trained CNN from the web (needed once)
urlwrite(...
  'http://www.vlfeat.org/matconvnet/models/imagenet-vgg-f.mat', ...
  'imagenet-vgg-f.mat') ;

% setup MatConvNet
run matlab/vl_setupnn

% load the pre-trained CNN
net = load('imagenet-vgg-f.mat') ;
net = vl_simplenn_tidy(net) ;

% load and preprocess an image
im = imread('peppers.png') ;
im_ = single(im) ;  % note: 0-255 range
im_ = imresize(im_, net.meta.normalization.imageSize(1:2)) ;
im_ = bsxfun(@minus, im_, net.meta.normalization.averageImage) ;

% run the CNN
res = vl_simplenn(net, im_) ;

% show the classification result
scores = squeeze(gather(res(end).x)) ;
[bestScore, best] = max(scores) ;
figure(1) ; clf ; imagesc(im) ;
title(sprintf('%s (%d), score %.3f', ...
  net.meta.classes.description{best}, best, bestScore)) ;

Notes: 1. untar('http://www.vlfeat.org/matconvnet/download/matconvnet-1.0-beta20.tar.gz') downloads and unpacks the installation package. It is easier to download the zip archive instead, unzip it anywhere you like, and let vl_setupnn() add the toolbox to the MATLAB path when the program runs. It is best to download it with the browser's built-in downloader, since some download tools save the archive as a text file that then has to be converted.

2. run matlab/vl_compilenn performs the compilation. The prerequisite is that MATLAB is connected to a C++ compiler (e.g. Visual C++). If it is not, use the mex -setup command to configure MATLAB's C++ compiler; once MEX reports success you can run the examples in the examples folder. Configuring MatConvNet therefore only takes two commands: mex -setup; run matlab/vl_compilenn

3. run matlab/vl_setupnn kept raising an error for me ("incorrect use of cd"; the last line of the example can hit the same problem, but since I ran vl_compilenn directly I did not see it there). I changed this line to run(fullfile(fileparts(mfilename('fullpath')), '..', 'matlab', 'vl_setupnn.m')); the exact statement depends on the path you use, but with this change the error disappears.

4. net = load('imagenet-vgg-f.mat'): here net is the pre-trained model required by the toolbox, with the network architecture already in place. It is represented as a struct with two parts: layers (this network has 21 layers, so it contains 21 cells) and meta (which contains two structs holding the class categories and the normalization information). See the sketch after these notes.

5. The core routine of the program is vl_simplenn, which takes the CNN network and the input, runs the forward pass, and returns the outputs of every layer.
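
A minimal sketch of poking around the loaded model, assuming imagenet-vgg-f.mat from the example above has already been downloaded and vl_setupnn has been run:

net = load('imagenet-vgg-f.mat') ;
net = vl_simplenn_tidy(net) ;          % fill in any missing default fields

numel(net.layers)                      % number of layers (21 for this model)
net.layers{1}                          % the first layer, a convolution block
net.meta.normalization.imageSize       % expected input size of the network
net.meta.classes.description(1:5)      % a few of the class names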

Compiling the GPU version of MatConvNet

To compile with GPU support, your video card must be an NVIDIA one with compute capability > 2.0, and you must make sure the versions fit together. I used Windows 7 64-bit, VS2013, CUDA 7.5 and MATLAB 2014a, with a GTX 960 card (compute capability 5.2). To check whether your card qualifies you can also install the tool GPU Caps Viewer.

The steps for compiling the GPU version of Matconvnet are as follows:

(1) Download CUDA 7.5.18 and CUDA_Quick_Start_Guide.pdf. CUDA Toolkit 7.5.18: http://developer.download.nvidia.com/compute/cuda/7.5/prod/local_installers/cuda_7.5.18_windows.exe

(2) Run the installer directly; the default installation path is recommended, which makes it easy for MatConvNet to find the CUDA compiler nvcc in its default location. For the specific configuration of CUDA with Visual Studio, refer to http://blog.csdn.net/listening5/article/details/50240147 and http://www.cnblogs.com/shengshengwang/p/5139245.html

(3) After installation, open Samples_vs2013.sln in the CUDA Samples folder and do a full build in both the Debug and Release x64 configurations. If the build complains that it cannot find the header files "d3dx9.h", "d3dx10.h" or "d3dx11.h", download and install DXSDK_Jun10.exe from http://www.microsoft.com/en-us/download/details.aspx?id=6812 and rebuild.

(4) After everything compiles successfully, open Bin/win64/Release under the CUDA Samples folder and run one of the small sample programs to view the GPU's CUDA information; a result of "PASS" means the check succeeded.

(5) Install cuDNN for Windows x64, v4.0 (or v3.0), download URL http://download.csdn.net/download/yfszzx/9307683. Extract it to a folder and copy the cudnn64_4.dll file into the ./matconvnet/matlab/mex folder.

(6) Compile with the vl_compilenn function, adjusting the options to your actual setup. The call is roughly vl_compilenn('enableGpu', true, 'cudaMethod', 'nvcc', 'enableCudnn', true, 'cudnnRoot', 'local/cuda'). If MEX reports success, more than half of the work is done (a sketch of this call appears after these steps).

(7) Finally, run the cnn_cifar.m example. Before running it, change opts.gpuDevice = [] to opts.gpuDevice = [1] so that training runs on the GPU.

As can be seen from the output, the speed is quite fast!
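A minimal sketch of steps (6) and (7), run from the MatConvNet root folder; the CUDA and cuDNN paths below are assumptions and must be adjusted to your own installation:

% compile with GPU and cuDNN support (paths are examples, not defaults)
run matlab/vl_setupnn ;
vl_compilenn('enableGpu', true, ...
             'cudaMethod', 'nvcc', ...
             'cudaRoot', 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5', ...
             'enableCudnn', true, ...
             'cudnnRoot', 'local/cuda') ;

% sanity check that MATLAB can see the GPU before running cnn_cifar.m
gpuDevice(1)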

Next, let's take a look at some of the computational functions in this toolbox, to make the library easier to understand.

Computational Blocks: implementing the CNN compute blocks
I. Convolution

Y = vl_nnconv(X, F, B) computes the convolution of the image stack X with the filter bank F and the bias B. X is H*W*D*N, where (H, W) are the height and width of the images, D is the image depth (the number of feature channels, e.g. 3 for colour images) and N is the number of images in the stack. F is FH*FW*FD*K, where (FH, FW) is the size of each filter, FD is the filter depth, which must equal D (or divide D), and K is the number of filters. For one image, the convolution formula (stride 1, no padding) is:

y(i'', j'', d'') = b(d'') + sum over i' = 1..FH, j' = 1..FW, d' = 1..FD of f(i', j', d', d'') * x(i'' + i' - 1, j'' + j' - 1, d')

where (i'', j'') indexes the height and width of the output and d'' indexes the filters, so there are K output channels.
[DZDX, DZDF, DZDB] = vl_nnconv(X, F, B, DZDY) computes the derivatives of the block projected onto DZDY, i.e. the gradients used during back-propagation.
In addition, some optional parameters can be set. stride = [SH SW] is the step, i.e. how far the filter moves at each position during the convolution, which also determines the size of the output; pad = [TOP BOTTOM LEFT RIGHT] is the amount of zero-padding applied to the borders of the input. The size of the final output is then:

H'' = 1 + floor((H - FH + PAD_TOP + PAD_BOTTOM) / SH),    W'' = 1 + floor((W - FW + PAD_LEFT + PAD_RIGHT) / SW)
[Note]: 1. MatConvNet does not distinguish between fully-connected layers and convolution layers; the former are treated as a special case of the latter (a convolution whose filter size equals the input size).

2. MatConvNet also has the concept of filter groups: vl_nnconv allows the channels of the input X to be split into groups, with a different subset of filters applied to each group. With groups = D/D', where D is the image depth and D' is the filter depth, the first group covers input dimensions 1, 2, ..., D', the second group covers D'+1, ..., 2D', and so on, while the output size stays the same. A small usage sketch follows this note.
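
A minimal sketch of vl_nnconv, assuming vl_setupnn has been run; the sizes are made up purely for illustration:

% a stack of 4 RGB images of size 32x32, a bank of 10 filters of size 5x5x3
x = randn(32, 32, 3, 4, 'single') ;
f = randn(5, 5, 3, 10, 'single') ;
b = randn(1, 10, 'single') ;

% forward pass with stride 2 and one pixel of zero-padding on each side
y = vl_nnconv(x, f, b, 'stride', 2, 'pad', 1) ;
size(y)    % 1 + floor((32 - 5 + 1 + 1)/2) = 15, so y is 15 x 15 x 10 x 4

% backward pass: derivatives with respect to x, f and b
dzdy = randn(size(y), 'single') ;
[dzdx, dzdf, dzdb] = vl_nnconv(x, f, b, dzdy, 'stride', 2, 'pad', 1) ;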

II. Convolution transpose (deconvolution)

Y = vl_nnconvt(X, F, B) computes the convolution transpose ("deconvolution") of the CNN, which is the transpose operation of convolution, with input and output in the same form as vl_nnconv. Whereas convolution supports zero-padding of the input and downsampling of the output, convolution transpose supports upsampling of the input and cropping of the output.
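A minimal sketch of vl_nnconvt, assuming vl_setupnn has been run; 'upsample' and 'crop' play the roles that 'stride' and 'pad' play for vl_nnconv (the sizes are illustrative, with equal input and output channels to keep the filter shape simple):

% upsample an 8x8 map with 10 channels by roughly a factor of two
x = randn(8, 8, 10, 1, 'single') ;
f = randn(4, 4, 10, 10, 'single') ;
b = zeros(1, 10, 'single') ;
y = vl_nnconvt(x, f, b, 'upsample', 2, 'crop', 1) ;
size(y)    % roughly twice the spatial resolution of x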

III. Spatial pooling

Y = vl_nnpool(X, POOL) or Y = vl_nnpool(X, [POOLY, POOLX]) pools each channel of the input X over patches, taking either the maximum or the average of each patch. As with convolution, pooling supports the pad and stride options, except that for max pooling the padding effectively uses minus infinity rather than zero.
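A minimal sketch of vl_nnpool, assuming vl_setupnn has been run; the 'method' option switches between max and average pooling:

% 2x2 pooling with stride 2 halves the spatial resolution
x = randn(16, 16, 8, 1, 'single') ;
ymax = vl_nnpool(x, [2 2], 'stride', 2, 'method', 'max') ;
yavg = vl_nnpool(x, [2 2], 'stride', 2, 'method', 'avg') ;
size(ymax)   % 8 x 8 x 8 x 1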

IV. Activation functions

ReLU function: Y = vl_nnrelu(X, DZDY, VARARGIN); with leak = 0 the forward expression is y(i, j, d) = max(0, x(i, j, d)).

Sigmoid function: OUT = vl_nnsigmoid(X, DZDY), i.e. y = 1 / (1 + e^(-x)).

Only the forward-propagation expressions are given here; the specific back-propagation expressions (the ones involving DZDY) can be found in the corresponding program files.
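
A minimal sketch of both activation functions, assuming vl_setupnn has been run; passing DZDY as the second argument switches the same function to its backward mode:

x = randn(4, 4, 2, 1, 'single') ;

% forward passes
y1 = vl_nnrelu(x) ;                  % elementwise max(0, x)
y2 = vl_nnsigmoid(x) ;               % elementwise 1 ./ (1 + exp(-x))

% backward passes: derivative of the loss with respect to the input
dzdy  = ones(size(x), 'single') ;
dzdx1 = vl_nnrelu(x, dzdy) ;
dzdx2 = vl_nnsigmoid(x, dzdy) ;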

V. Normalization

1. vl_nnnormalize: local response normalization (LRN)

Local response normalization normalizes over a local group of input channels: at each spatial location, each channel is normalized by the corresponding subset of channels in its group (the groups mentioned earlier). The parameters are PARAM = [N KAPPA ALPHA BETA], and the expression is

y(i, j, k) = x(i, j, k) * ( KAPPA + ALPHA * sum over t in G(k) of x(i, j, t)^2 )^(-BETA)

where G(k) is the subset of input channels associated with channel k, defined in the code as G(k) = [max(1, k - floor((N-1)/2)), min(D, k + ceil((N-1)/2))].
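A minimal sketch of vl_nnnormalize, assuming vl_setupnn has been run; the parameter values are illustrative only:

% LRN over neighbourhoods of N = 5 channels
x = randn(8, 8, 16, 1, 'single') ;
param = [5 1 0.0001 0.75] ;        % [N KAPPA ALPHA BETA], illustrative values
y = vl_nnnormalize(x, param) ;
size(y)                            % same size as x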

2. vl_nnbnorm: batch normalization

Y = vl_nnbnorm(X, G, B), where X and Y are 4-dimensional tensors whose 4th dimension T is the size of the processed batch. The normalization expression is

y(i, j, k, t) = g(k) * (x(i, j, k, t) - mu(k)) / sqrt(sigma(k)^2 + EPSILON) + b(k)

where mu(k) and sigma(k)^2 are the mean and variance of channel k computed over the whole batch, and g(k) and b(k) are the learned per-channel gain and bias.
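
A minimal sketch of vl_nnbnorm, assuming vl_setupnn has been run; G and B hold one gain and one bias per channel:

% a batch of 8 maps with 4 channels
x = randn(6, 6, 4, 8, 'single') ;
g = ones(4, 1, 'single') ;         % per-channel gains
b = zeros(4, 1, 'single') ;        % per-channel biases
y = vl_nnbnorm(x, g, b) ;
size(y)                            % same size as x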


 
3. vl_nnspnorm: spatial normalization
 
Y = vl_nnspnorm(X, PARAM, DZDY), with PARAM = [PH PW ALPHA BETA]: each channel is first pooled over a PH x PW spatial neighbourhood (averaging the squared responses), and the pooled value is then used to normalize the input. The expression is

n(i, j, d)^2 = average of x(., ., d)^2 over the PH x PW neighbourhood of (i, j),    y(i, j, d) = x(i, j, d) * (1 + ALPHA * n(i, j, d)^2)^(-BETA)

4. vl_nnsoftmax: CNN softmax

  Y = vl_nnsoftmax(X, DZDY): applies the softmax function across the channels at each spatial location (i.e. within each group of channels, as before). The softmax operator can be seen as the combination of an activation function and a normalization operation: y(i, j, k) = e^(x(i, j, k)) / sum over t of e^(x(i, j, t)).
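
A minimal sketch of vl_nnsoftmax, assuming vl_setupnn has been run; after the call the channel values at each location sum to one:

% 10 class scores for each of 2 images
x = randn(1, 1, 10, 2, 'single') ;
y = vl_nnsoftmax(x) ;
sum(y, 3)                          % all ones, up to floating point error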

   VI. Losses and comparisons

1. [Y1, Y2] = vl_nnpdist(X, X0, P, VARARGIN) computes the p-distance between each vector in X and the target X0, defined as:

y(i, j) = ( sum over d of |x(i, j, d) - x0(i, j, d)|^P )^(1/P)
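A minimal sketch of vl_nnpdist, assuming vl_setupnn has been run; with P = 2 this is the per-location Euclidean distance between X and X0:

x  = randn(4, 4, 3, 1, 'single') ;
x0 = randn(4, 4, 3, 1, 'single') ;
y  = vl_nnpdist(x, x0, 2) ;        % one distance per spatial location
size(y)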

2. Y = vl_nnloss(X, C, DZDY, VARARGIN) computes the classification loss of the predictions X given the ground-truth class labels C.
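A minimal sketch of vl_nnloss, assuming vl_setupnn has been run; C holds one integer ground-truth label per image:

% 10-way class scores for 2 images and their true labels
x = randn(1, 1, 10, 2, 'single') ;
c = [3 7] ;
y = vl_nnloss(x, c) ;              % scalar loss summed over the batch

% backward pass: gradient of the loss with respect to the scores
dzdx = vl_nnloss(x, c, single(1)) ;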

  


