STEP1: Installing the common dependencies
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
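If apt-get reports that it cannot find some of these packages, the local package index may simply be out of date; refreshing it first usually helps:
sudo apt-get update    # re-sync the package index before installing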
As you can see, the following libraries are being installed:
1. Protobuf (Protocol Buffers) is Google's serialization format. It is similar in spirit to XML: it saves structured data in a defined format, and is mainly used for data storage and as a wire format for transmission protocols.
2. LevelDB is a very efficient single-machine key-value database, also from Google, which can now handle data at the billion-entry scale. It has high random-write and sequential read/write performance, but its random-read performance is mediocre, so it is well suited to workloads with few reads and many writes.
3. Snappy is a compression/decompression library written in C++. Its goal is not maximum compression or compatibility with other compression formats, but high compression speed with a reasonable compression ratio. Snappy is widely used inside Google, which values its many advantages: it was designed not to crash even on corrupt or malicious input, and Google uses it to compress petabytes of data in production, so its robustness and stability are evident.
4. OpenCV, which needs no introduction.
5. HDF5, which you can simply think of as a file format. The full name is Hierarchical Data Format; it can store different types of image and numeric data in a single file, can be transferred between different types of machines, and comes with a library of functions for handling the format uniformly.
6. Boost, a general term for a collection of C++ libraries that provide extensions to the C++ standard library.
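As an optional sanity check that the key packages landed (the exact version strings will differ by Ubuntu release):
protoc --version       # e.g. "libprotoc 2.6.1"
dpkg -l | grep hdf5    # lists the installed HDF5 packages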
STEP2: Installing CUDA
CUDA is NVIDIA's parallel computing architecture and platform. Its main purpose is to let today's high-performance GPUs do general-purpose computation as well; using all that performance only for display would be wasteful. Much Caffe training now runs directly on the GPU, so if you have an NVIDIA graphics card you should install CUDA first. Before you do, check whether your card is supported. Portal: CUDA GPUs.
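If you are not sure which card your machine actually has, you can list it from the terminal before checking that page (an optional quick look):
lspci | grep -i nvidia    # prints the NVIDIA devices on the PCI bus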
Installation is very simple, and installing CUDA also takes care of the graphics driver in passing. Graphics drivers on Linux have long been a headache; one careless step and you may end up reinstalling the system.
First download the installation package from the CUDA official website. I downloaded cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb. Then open a terminal and enter the following commands to install it:
sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64.deb
sudo apt-get update
sudo apt-get install cuda
After executing the above commands, enter the following command to set the environment variable:
export PATH=/usr/local/cuda-8.0/bin:$PATH
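Note that export only lasts for the current terminal session. A common convention (not something the installer does for you) is to append the variable to ~/.bashrc, together with LD_LIBRARY_PATH so the dynamic linker can find the CUDA libraries:
echo 'export PATH=/usr/local/cuda-8.0/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc    # reload so the current shell picks the variables up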
At this point the installation is complete; restart the computer. Then, in the upper right corner, select "About this computer": if the graphics entry now shows your NVIDIA card, the installation was successful.
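You can also verify from the terminal; nvcc ships with the CUDA toolkit and nvidia-smi comes with the driver, so both should now be available:
nvcc --version    # prints the CUDA compiler version, e.g. release 8.0
nvidia-smi        # shows the driver version and the detected GPU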
STEP3: Installing cuDNN
cuDNN is a GPU acceleration library designed specifically for deep learning frameworks; the DL libraries it currently supports include Caffe, convnet, Torch7, and others. Installation is very simple: it really just amounts to a few copy operations that place the header and library files under the system paths.
Open the terminal and enter the following command:
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/lib64/libcudnn*
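These copy commands assume you have already downloaded the cuDNN archive for your CUDA version from NVIDIA's developer site and extracted it in the current directory, which is what creates the cuda/ folder referenced above. The archive name below is only an example; substitute the file you actually downloaded:
tar -xzvf cudnn-8.0-linux-x64-v5.1.tgz    # unpacks into ./cuda/include and ./cuda/lib64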
STEP4: Installing BLAS
BLAS is a linear algebra library with a large body of ready-made routines for linear algebra computation. Here we simply install ATLAS:
sudo apt-get install libatlas-base-dev
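ATLAS is not the only choice: Caffe's Makefile.config also accepts MKL or OpenBLAS through its BLAS setting. If you would rather use OpenBLAS (optional, a matter of preference):
sudo apt-get install libopenblas-dev    # then set BLAS := open in Makefile.config later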
STEP5: Installing Python
Many open-source libraries now give first-class support to C++ and Python, so this step is still necessary. Installation is simple; enter the following command:
sudo apt-get install python-dev
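If you plan to use the Python interface (pycaffe) later on, you will almost certainly want NumPy as well; installing it now saves a round trip:
sudo apt-get install python-numpy    # needed by pycaffe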
Then install some additional dependencies that the system requires:
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
STEP6: Compiling Caffe with make
On the Caffe homepage on GitHub, first download the caffe-master zip archive. Open a terminal in the download directory and enter:
unzip caffe-master.zip
Unzipping yields a folder of source code. Go into the Caffe root directory and first set up the configuration file:
cp Makefile.config.example Makefile.config
This simply copies Makefile.config.example to a new Makefile.config, which must exist before running make. Then open Makefile.config and make the following modifications:
1. If your computer has no GPU and you only need the CPU, uncomment CPU_ONLY:
# CPU-only switch (uncomment to build without GPU support).
# CPU_ONLY := 1
Remove the leading # so the second line reads CPU_ONLY := 1.
If you want to use the GPU, make no change here. What you do need to change is to remove the comment before WITH_PYTHON_LAYER := 1, which makes it easy to call Caffe from Python:
# Uncomment to support layers written in Python (will link against Python libs)
WITH_PYTHON_LAYER := 1
Many tutorials go straight to the next step, but do not rush: there are two more places to change, a pitfall left by HDF5.
2. Find INCLUDE_DIRS and LIBRARY_DIRS and change them to the following:
INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial
In other words, append /usr/include/hdf5/serial to the original INCLUDE_DIRS, and append /usr/lib/x86_64-linux-gnu and /usr/lib/x86_64-linux-gnu/hdf5/serial to the original LIBRARY_DIRS.
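If you prefer to script the edit rather than do it by hand, the following rewrites both lines of a stock Makefile.config (a sketch; if your file already deviates from the example, edit it manually instead):
sed -i 's|^INCLUDE_DIRS :=.*|INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial|' Makefile.config
sed -i 's|^LIBRARY_DIRS :=.*|LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu /usr/lib/x86_64-linux-gnu/hdf5/serial|' Makefile.config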
If you skip this step, make will eventually fail with an error like:
src/caffe/layers/hdf5_data_layer.cpp:13:18: fatal error: hdf5.h: No such file or directory
compilation terminated.
Makefile:581: recipe for target '.build_release/src/caffe/layers/hdf5_data_layer.o' failed
make: *** [.build_release/src/caffe/layers/hdf5_data_layer.o] Error 1
make: *** Waiting for unfinished jobs....
OK, now that the configuration is done, start compiling:
make all -j4
Because my computer's CPU has 4 cores, I pass -j4. This process takes a while; wait patiently, and when the compilation finishes you will see output like:
AR -o .build_release/lib/libcaffe.a
LD -o .build_release/lib/libcaffe.so.1.0.0
CXX/LD -o .build_release/tools/caffe.bin
CXX/LD -o .build_release/tools/compute_image_mean.bin
CXX/LD -o .build_release/tools/convert_imageset.bin
CXX/LD -o .build_release/tools/device_query.bin
CXX/LD -o .build_release/tools/extract_features.bin
CXX/LD -o .build_release/tools/finetune_net.bin
CXX/LD -o .build_release/tools/net_speed_benchmark.bin
CXX/LD -o .build_release/tools/test_net.bin
CXX/LD -o .build_release/tools/train_net.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_binary.bin
CXX/LD -o .build_release/tools/upgrade_net_proto_text.bin
CXX/LD -o .build_release/tools/upgrade_solver_proto_text.bin
CXX/LD -o .build_release/examples/cifar10/convert_cifar_data.bin
CXX/LD -o .build_release/examples/cpp_classification/classification.bin
CXX/LD -o .build_release/examples/mnist/convert_mnist_data.bin
CXX/LD -o .build_release/examples/siamese/convert_mnist_siamese_data.bin
Then enter the command:
make test -j4
Wait a moment and the following results appear:
LD .build_release/src/caffe/test/test_tanh_layer.o
LD .build_release/src/caffe/test/test_threshold_layer.o
LD .build_release/src/caffe/test/test_tile_layer.o
LD .build_release/src/caffe/test/test_upgrade_proto.o
LD .build_release/src/caffe/test/test_util_blas.o
LD .build_release/src/caffe/test/test_data_layer.o
LD .build_release/src/caffe/test/test_im2col_layer.o
LD .build_release/src/caffe/test/test_platform.o
LD .build_release/cuda/src/caffe/test/test_im2col_kernel.o
CXX/LD -o .build_release/test/test_all.testbin src/caffe/test/test_caffe_main.cpp
There are many output files; this is only a subset of them. Then continue with:
make runtest -j4
This takes a long time to run; when a long list of green [ RUN ] / [ OK ] results appears, the build has passed:
[ RUN      ] LRNLayerTest/3.TestForwardAcrossChannels
[       OK ] LRNLayerTest/3.TestForwardAcrossChannels (0 ms)
[ RUN      ] LRNLayerTest/3.TestGradientWithinChannel
[       OK ] LRNLayerTest/3.TestGradientWithinChannel (2360 ms)
[ RUN      ] LRNLayerTest/3.TestSetupAcrossChannels
[       OK ] LRNLayerTest/3.TestSetupAcrossChannels (0 ms)
[ RUN      ] LRNLayerTest/3.TestSetupWithinChannel
[       OK ] LRNLayerTest/3.TestSetupWithinChannel (0 ms)
[----------] 8 tests from LRNLayerTest/3 (5412 ms total)
[----------] Global test environment tear-down
[==========] 2041 tests from 267 test cases ran. (491872 ms total)
[  PASSED  ] 2041 tests.
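One optional follow-up once the tests pass: if you want to call Caffe from Python, build the bindings and put them on your PYTHONPATH (the path below is a placeholder; point it at your own Caffe root):
make pycaffe -j4
export PYTHONPATH=/path/to/caffe-master/python:$PYTHONPATH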