In an earlier post (http://www.linuxidc.com/Linux/2014-10/107501.htm) I configured CUDA. With a powerful GPU on hand, it would be a shame to let the resources sit idle, so I set out to configure a convolutional neural network library and run some programs on it. As for the principles of convolutional neural networks, I will write about them later; my plan is to cover how to use the library first and then the theory, letting practice drive the pursuit of theory.
Without further ado, let's get started.
1. Preliminary Notes
cuda-convnet originates from a classic paper ①, whose experiments were run on the ILSVRC-2010 data; the code used in those experiments was later published at link ②. However, reality often differs from the paper: the code at link ② cannot reproduce the results reported there. I only discovered this long after I had started using the library, which was quite painful, so I hope later users will be cautious.
The reason for this problem is that the multi-GPU and dropout features mentioned in the paper are not implemented in the released code, and the configuration file for the paper's 8-layer convolutional neural network is not provided either. In short, it cannot be used to reproduce the paper directly; you have to explore on your own.
Even so, having it is better than nothing. After all, the convolutional neural network implemented by this library is well encapsulated, and the contribution of the paper's authors is far beyond anything I could manage. Thirty-two likes to them.
This article only describes the configuration of cuda-convnet and cuda-convnet2; the paper's author has also published other versions of the library, but I have not used them, so they are not covered here.
CUDA programming in Ubuntu 12.04  http://www.linuxidc.com/linux/2014-06/103056.htm
Installing CUDA 5.5 on Ubuntu 12.04  http://www.linuxidc.com/Linux/2013-10/91101.htm
Installing the CUDA development environment on Ubuntu 11.10  http://www.linuxidc.com/Linux/2012-04/58913.htm
Configuring the CUDA environment on Fedora 15  http://www.linuxidc.com/Linux/2011-12/49874.htm
Installing NVIDIA CUDA 4.0 RC2 on Ubuntu 11.04  http://www.linuxidc.com/Linux/2011-10/46304.htm
Configuring CUDA 4.2 & OpenCV 2.4.2 on Linux Mint 13/Ubuntu 12.04  http://www.linuxidc.com/Linux/2013-10/91102.htm
CUDA getting started tutorial  http://www.linuxidc.com/Linux/2014-07/104328.htm
2. cuda-convnet Configuration
2.1. Download the Source Code
Referring to link ②, download the source code first:
svn checkout http://cuda-convnet.googlecode.com/svn/trunk/ cuda-convnet-read-only
The checked-out revision is 562.
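If you want to confirm which revision you actually checked out (later revisions could differ slightly from what is described here), svn itself can report it; this is only a sanity check and is not required for the build:

$ svn info cuda-convnet-read-only | grep Revision
Revision: 562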
2.2. Install the Necessary Libraries
Next, install the required libraries. I am using Ubuntu, so the command is:
sudo apt-get install python-dev python-numpy python-magic python-matplotlib libatlas-base-dev
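The build.sh script configured in the next step needs to know where the ATLAS libraries were installed. On my Ubuntu system they end up under /usr/lib/atlas-base; if you want to double-check on yours, dpkg can list the files that the package installed (just a quick check, the exact path may vary by release):

$ dpkg -L libatlas-base-dev | grep libcblas.so
# expect a path such as /usr/lib/atlas-base/libcblas.so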
Of course, make sure that you have installed CUDA. I installed CUDA 6.5 under the /usr/local/ directory, as shown below:
$ ls /usr/local
bin cuda cuda-6.5 etc games include lib man sbin share src
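It is also worth verifying which CUDA toolkit version the compiler reports; I use CUDA 6.5 here, but other recent versions with the samples installed should behave similarly:

$ /usr/local/cuda/bin/nvcc --version
# the last line should read something like "Cuda compilation tools, release 6.5"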
2.3. Modify build.sh
Go to the cuda-convnet-read-only directory you just checked out and change the configuration paths in the build.sh file, as follows:
# CUDA toolkit installation directory.
export CUDA_INSTALL_PATH=/usr/local/cuda

# CUDA SDK installation directory.
export CUDA_SDK_PATH=/usr/local/cuda-6.5/samples/common/inc

# Python include directory. This should contain the file Python.h, among others.
export PYTHON_INCLUDE_PATH=/usr/include/python2.7

# Numpy include directory. This should contain the file arrayobject.h, among others.
export NUMPY_INCLUDE_PATH=/usr/lib/python2.7/dist-packages/numpy/core/include/numpy

# ATLAS library directory. This should contain the file libcblas.so, among others.
export ATLAS_LIB_PATH=/usr/lib/atlas-base

make $*
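If you are not sure what NUMPY_INCLUDE_PATH should be on your machine, numpy can report where its headers live. Note that build.sh above expects the path to end in .../include/numpy (the directory that directly contains arrayobject.h), so append /numpy to what this prints if necessary:

$ python -c "import numpy; print numpy.get_include()"
# on this system: /usr/lib/python2.7/dist-packages/numpy/core/include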
Following the tutorial on the official website, completing the build.sh configuration should be all you need before compiling. In practice, however, errors occur, and the following modifications are needed.
2.4. Add a header file
If you compile directly, the compiler reports that the cutil_inline.h header cannot be found. The likely reason is that this header existed in older CUDA SDKs, but its functionality was later moved into other headers.
In the include sub-folder, create a cutil_inline.h file with the following content:
#include "helper_cuda.h"
#define cutilCheckMsg(a) getLastCudaError(a)
#define cutGetMaxGflopsDeviceId() gpuGetMaxGflopsDeviceId()
#define MIN(a,b) ((a) < (b) ? (a) : (b))
2.5. Makefile Changes
The Makefile contains the following line:
INCLUDES := -I$(PYTHON_INCLUDE_PATH) -I$(NUMPY_INCLUDE_PATH) -I./include -I./include/common -I./include/cudaconv2 -I./include/nvmatrix
Add the CUDA SDK path to it, as follows:
INCLUDES := -I$(PYTHON_INCLUDE_PATH) -I$(NUMPY_INCLUDE_PATH) -I$(CUDA_SDK_PATH) -I./include -I./include/common -I./include/cudaconv2 -I./include/nvmatrix
Save it.
2.6. Fix the Final Library Link Error
After completing the above changes, you can compile the library. However, a library link error occurs at the very end. You don't have to worry about it; just comment out the offending library reference.
It sits at line 332 of the common-gcc-cuda-4.0.mk file; comment it out directly:
# LIB += -lcutil_$(LIB_ARCH) $(LIBSUFFIX) -lshrutil_$(LIB_ARCH) $(LIBSUFFIX)
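If the line is not exactly at 332 in your checkout, grep can locate it by its content (the -lcutil_ reference shown above):

$ grep -n "lcutil_" common-gcc-cuda-4.0.mk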
Now, you can complete the cuda-convnet compilation.
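Since build.sh simply ends with make $* (section 2.3), compiling amounts to running the script from the source directory. As a rough sanity check afterwards (a sketch only; the exact output location depends on the Makefile), you can look for the compiled Python extension:

$ cd cuda-convnet-read-only
$ sh build.sh
$ find . -name "*.so"
# if the build succeeded, the compiled Python extension (.so) should be listed here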
For more details, see the next page: http://www.linuxidc.com/Linux/2014-10/107500p2.htm