TX2 Caffe Installation Summary

Source: Internet
Author: User
Tags: file copy, git clone

My laptop's performance is hopelessly sluggish and switching between dual-boot systems is too much trouble, so I simply use the TX2 as a second computer; every demo that needs to run on Linux goes on the TX2.
First, install Caffe (I had to redo the installation twice).
First, install the dependencies:
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libhdf5-serial-dev
sudo apt-get install --no-install-recommends libboost-all-dev
(Other people's blogs say to install libopencv-dev as well, but that caused an OpenCV version conflict for me, so I skipped it; using the OpenCV Python bindings has caused no problems so far.)
Then there's the Python-related installation:
sudo apt-get install python-dev
sudo apt-get install python-numpy
sudo apt-get install ipython
sudo apt-get install ipython-notebook
sudo apt-get install python-sklearn
sudo apt-get install python-skimage
sudo apt-get install python-protobuf
And, of course, install pip:
sudo apt-get install python-pip
Then: sudo pip install --upgrade pip
However, installing Python packages with pip here is very slow: a small package takes four or five minutes, a larger one more than ten. The TX2's CPU cores are Cortex-A57, still a big step behind x86, so compiling packages is slow (a guess based on watching top); where possible I recommend installing packages with apt-get instead.
Then the Google glog, gflags, and LMDB dependencies:
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
Then install git and download the code:
sudo apt-get install git
git clone https://github.com/BVLC/caffe.git (GitHub is really slow here, only a few KB per second)
Enter the source tree:
cd caffe
cp Makefile.config.example Makefile.config
Then modify Makefile and Makefile.config:
vim Makefile.config
Uncomment USE_CUDNN := 1, and also uncomment WITH_PYTHON_LAYER := 1 around line 89.
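If you prefer not to edit by hand, the two uncomments can be scripted. A minimal sketch; the sed patterns assume the stock "# USE_CUDNN := 1" comment style, and it is demonstrated here on a scratch file standing in for the real Makefile.config:

```shell
set -e
cfg=$(mktemp)                       # stand-in for Makefile.config
cat > "$cfg" <<'EOF'
# USE_CUDNN := 1
# WITH_PYTHON_LAYER := 1
EOF

# Strip the leading "# " to enable cuDNN and the Python layer
sed -i 's/^# *USE_CUDNN := 1/USE_CUDNN := 1/' "$cfg"
sed -i 's/^# *WITH_PYTHON_LAYER := 1/WITH_PYTHON_LAYER := 1/' "$cfg"
cat "$cfg"
```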

CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
        -gencode arch=compute_20,code=sm_21 \
        -gencode arch=compute_30,code=sm_30 \
        -gencode arch=compute_35,code=sm_35 \
        -gencode arch=compute_50,code=sm_50 \
        -gencode arch=compute_52,code=sm_52 \
        -gencode arch=compute_60,code=sm_60 \
        -gencode arch=compute_61,code=sm_61 \
        -gencode arch=compute_61,code=compute_61

To match the compute capabilities CUDA 8 supports, remove
-gencode arch=compute_20,code=sm_20 \
-gencode arch=compute_20,code=sm_21 \
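After removing those lines, the CUDA_ARCH block would look roughly like the sketch below, based on the stock Makefile.config. (As an aside: the TX2's integrated GPU is a Pascal part with compute capability 6.2, so adding a compute_62 line, as shown here, is a reasonable extra step, though the build also works without it.)

```makefile
CUDA_ARCH := -gencode arch=compute_30,code=sm_30 \
        -gencode arch=compute_35,code=sm_35 \
        -gencode arch=compute_50,code=sm_50 \
        -gencode arch=compute_52,code=sm_52 \
        -gencode arch=compute_60,code=sm_60 \
        -gencode arch=compute_61,code=sm_61 \
        -gencode arch=compute_62,code=sm_62 \
        -gencode arch=compute_61,code=compute_61
```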
Around line 92, change

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include

to

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial/
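This change can also be scripted. A sketch, demonstrated on a throwaway file in place of the real Makefile.config:

```shell
set -e
cfg=$(mktemp)                       # stand-in for Makefile.config
echo 'INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include' > "$cfg"

# Append the serial HDF5 header path to the INCLUDE_DIRS line
sed -i 's|^\(INCLUDE_DIRS := .*\)|\1 /usr/include/hdf5/serial/|' "$cfg"
cat "$cfg"
```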

Then edit the Makefile: around line 175, change

LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5

to

LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_serial_hl hdf5_serial
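The same edit as a one-liner; a sketch run here against a scratch file standing in for the Makefile:

```shell
set -e
mk=$(mktemp)                        # stand-in for the Makefile
echo 'LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5' > "$mk"

# Point the link line at the serial HDF5 libraries shipped by libhdf5-serial-dev
sed -i 's/hdf5_hl hdf5$/hdf5_serial_hl hdf5_serial/' "$mk"
cat "$mk"
```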

And then:
sudo make clean
sudo make -j8

(These modifications follow this expert's blog: http://blog.csdn.net/jiongnima/article/details/70040262)
Caffe is now installed. To try it out, run the MNIST demo:
cd caffe
sudo sh data/mnist/get_mnist.sh
This downloads the MNIST data set. Then:
sudo sh examples/mnist/create_mnist.sh
sudo sh examples/mnist/train_lenet.sh

You can watch Caffe detect the TX2's GPU and run the demo on it.

From the results: the Caffe MNIST example, with its two convolution layers, finished training in a little over 2 minutes at about 99% accuracy, much faster than running it in MATLAB. (The internet says GPU+cuDNN should take only 45 seconds, but it took two minutes here; the TX2 is, after all, a mobile-class part compared with GTX-series or professional cards.) (Note that the paths in Caffe's example scripts are relative to the caffe folder, so cd into caffe before running them with sh.)
Finally, two pits I stepped into along the way:
1. When compiling something like the RCNN Caffe from GitHub, the build fails with an error around line 563 of the Makefile. The cause is that the Caffe bundled with caffe-rcnn was packaged against an older cuDNN. The fix: take the cudnn files under src/caffe/layers/, src/caffe/util/, include/caffe/layers/, and include/caffe/util/ in the freshly downloaded Caffe and copy them over the corresponding files in caffe-rcnn.
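A sketch of that copy step. The directory layout is assumed from a standard Caffe checkout; the run below uses scratch directories standing in for the two repos, with one example cudnn file in each location:

```shell
set -e
CAFFE=$(mktemp -d)   # stand-in for the freshly cloned BVLC caffe
RCNN=$(mktemp -d)    # stand-in for the caffe-rcnn checkout
for d in src/caffe/layers src/caffe/util include/caffe/layers include/caffe/util; do
  mkdir -p "$CAFFE/$d" "$RCNN/$d"
done
touch "$CAFFE/src/caffe/layers/cudnn_conv_layer.cpp"   # example cuDNN source file
touch "$CAFFE/include/caffe/util/cudnn.hpp"            # example cuDNN header

# The fix: overwrite caffe-rcnn's cudnn files with the newer Caffe's versions
for d in src/caffe/layers src/caffe/util include/caffe/layers include/caffe/util; do
  cp -f "$CAFFE/$d"/cudnn* "$RCNN/$d"/ 2>/dev/null || true
done
```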
2. When running a demo.py from an RCNN-style repo, you may hit

ImportError: No module named gpu_nms

This happens because the files under the nms folder were compiled for a desktop platform's GPU and CPU. Delete them, cd into the lib directory, and run the make command there; any Python packages missing at compile time can be installed with pip or apt-get.
Then demo.py runs fine.
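A sketch of that cleanup, demonstrated on a scratch directory standing in for the repo's lib/nms folder (on the TX2 itself you would then run make inside lib/ to rebuild for the ARM GPU/CPU):

```shell
set -e
LIB=$(mktemp -d)                     # stand-in for the rcnn repo's lib/ directory
mkdir -p "$LIB/nms"
touch "$LIB/nms/gpu_nms.so" "$LIB/nms/cpu_nms.so"   # pretend: built on an x86 desktop

# The fix: delete the stale binaries, then (on the TX2) run `make` inside lib/
find "$LIB/nms" -name '*.so' -delete
ls "$LIB/nms"
```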
