Repost: Building the GPU version of the TensorFlow/Keras environment on Ubuntu

Source: Internet
Author: User
Tags: theano, cuda toolkit, keras

http://blog.csdn.net/jerr__y/article/details/53695567

Introduction: This article describes how to configure the GPU version of the TensorFlow environment on an Ubuntu system. It mainly covers:
- CUDA installation
- cuDNN installation
- TensorFlow installation
- Keras installation

Among these, the CUDA installation is the most important part; once CUDA is installed, TensorFlow and other deep learning frameworks are easy to configure.

My environment: Ubuntu 14.04 + TITAN X (Pascal) + CUDA 8.0 + cuDNN 5.1 (originally 5.0) + Keras (Theano | TensorFlow)

1. Installation of CUDA 8.0

Download URL: https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu

Having a GPU is not enough; we also need a way to use it in our algorithms to speed up training. CUDA is exactly such a computing platform: once CUDA is installed, we can use the GPU for complex parallel computing.

According to the Keras Chinese documentation, Pascal-architecture graphics cards can only use CUDA 8.0, so I installed 8.0, and it indeed worked without problems. For the installation itself, I simply followed the official installation guide and had no trouble, so I will not repeat it here.
In fact, many of the Chinese installation tutorials online follow that same guide, for example: CUDA introduction and environment setup under Ubuntu (note that that one uses CUDA 7.5).
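
As a quick sanity check (my addition, not part of the original guide), once CUDA and the NVIDIA driver are installed and the PATH step further below has been done, the following commands should report CUDA release 8.0 and list your GPU:

nvcc --version    # print the CUDA compiler version (should show release 8.0)
nvidia-smi        # show the NVIDIA driver version and the GPUs it can see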

* After installation (the following notes were added later for a roommate) *

  • To check whether the installation was successful
    A. Go to the CUDA installation directory and compile the samples

    cd /usr/local/cuda-8.0/samples
    sudo make -j4    # -j4 compiles with 4 CPU cores in parallel; if your machine has 8 cores, use -j8 for a faster build

    B. Run the deviceQuery binary after the compilation succeeds

    /usr/local/cuda-8.0/samples/bin/x86_64/linux/release/deviceQuery
    You will see a large amount of output, including the graphics card model, its compute capability, and so on.

  • Add environment variables after the installation:
    A. Add an environment variable in /etc/profile, at the end of the file (the file must be opened with sudo):

    PATH=/usr/local/cuda-8.0/bin:$PATH
    export PATH

    After saving and exiting, execute the following command to make the environment variable take effect: source /etc/profile

    B. Add the library path: in /etc/ld.so.conf.d/ create a file cuda.conf with the following content: /usr/local/cuda-8.0/lib64

    Then execute the following command to make it take effect:

    sudo ldconfig
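
Optionally (my addition, not from the original post), you can confirm that the CUDA libraries are now registered with the dynamic linker:

ldconfig -p | grep -i cuda    # list the cached shared libraries and filter for CUDA entries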

2. Installation of cuDNN 5.1

* Note: I originally installed cuDNN v5, which caused errors with TensorFlow, so I replaced it with cuDNN 5.1. Judging from the error message, if your TensorFlow was installed from a binary package, you can simply replace the files under /usr/local/cuda/ with the corresponding files from the cuDNN include and lib64 folders. But since I had installed TensorFlow directly with pip install tensorflow-gpu, replacing the files still produced errors. The solution was to uninstall tensorflow-gpu and reinstall it! *

Refer to: Ubuntu 16.04 CUDA 8 cuDNN 5.1 installation
Linux: replacing the cuDNN version
The installation of cuDNN 5.1 is relatively straightforward.

2.1. Download

A. First, you need to register an NVIDIA account; you can only download after logging in, and before downloading you also have to fill in a short questionnaire.
B. Download cuDNN (v5.1 in my case) for CUDA 8.0, making sure it matches the CUDA version you installed.

2.2. Install (actually just unzip)

The downloaded file may have a .solitairetheme8 extension; first rename it to .tgz and then extract it:

cp cudnn-8.0-linux-x64-v5.1.solitairetheme8 cudnn-8.0-linux-x64-v5.1.tgz
tar -xvf cudnn-8.0-linux-x64-v5.1.tgz

For example, my downloaded file was at: /home/common/cudnn/cudnn-8.0-linux-x64-v5.0-ga.tgz

A. Enter the directory

cd /home/common/cudnn/

B. Extract the file, which produces a cuda folder

tar  -zxvf  cudnn-8.0-linux-x64-v5.0-ga.tgz 

C. Copy the files to the corresponding CUDA directories. (Note that the target is cuda, not cuda-8.0.)

sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64

D. Fix the symbolic links. In the /usr/local/cuda/lib64/ directory, use the ll command to inspect the links. If there are files from an older libcudnn.so version, delete them all and keep only the newest one, libcudnn.so.5.1.3, then execute the following commands:

# libcudnn.so.5.1 -> libcudnn.so.5.1.3
sudo ln -s libcudnn.so.5.1.3 libcudnn.so.5.1
# libcudnn.so -> libcudnn.so.5.1
sudo ln -s libcudnn.so.5.1 libcudnn.so

If you get an error saying that a file already exists (libcudnn.so.5.1 or libcudnn.so), delete that file first and then rerun the two ln commands above.
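
To double-check the result (my addition, not from the original post), list the cuDNN files and make sure the symlink chain ends at the 5.1.3 library:

ls -l /usr/local/cuda/lib64/libcudnn*    # each symlink should ultimately resolve to libcudnn.so.5.1.3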

E. Modify the file permissions

sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*

F. Update the link library:

sudo ldconfig

3. Installing tensorflow-gpu

Install Anaconda first. For scientific computing with Python, installing Anaconda should be the first step; it brings in the common libraries such as NumPy, Pandas, Seaborn, matplotlib, and so on.

The installation is very simple: just download the installer and run it, so I will not go into detail here.
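
For reference (my addition; the filename below is only a placeholder, use whatever installer you actually downloaded), the installation is a single shell script:

bash Anaconda2-x.x.x-Linux-x86_64.sh    # replace with the exact filename of the installer you downloaded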

Installing Keras and Theano is relatively easy and problem-free, so I will not cover it. As for TensorFlow, many posts online say you have to build it from source; in fact, as long as you pick the correct version, you do not need a source build and it is quite easy. Just be sure to install the version that matches your CUDA and cuDNN versions.
For example, I installed CUDA 8.0 and cuDNN v5, following the instructions on TensorFlow's official website:

The GPU version works best with Cuda Toolkit 8.0 and CuDNN v5. Other versions are supported (Cuda Toolkit >= 7.0 and CuDNN >= v3) only if installing from sources.

In other words, if you are not using CUDA 8.0 and cuDNN v5, you should install from source. Fortunately, what I installed was exactly CUDA 8.0 and cuDNN v5, so pip install works fine. Since my system is Ubuntu, a single command is enough:

$ pip install tensorflow-gpu

If all goes well, your GPU version of the TensorFlow environment is now installed; you can skip the rest of this section and get going.
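
As a quick way to confirm that TensorFlow actually sees the GPU (my addition, using the TensorFlow 1.x API that this post assumes), you can list the local devices from Python; a working setup should show a /gpu:0 entry in addition to the CPU:

# List the devices TensorFlow can use; a GPU entry confirms the CUDA/cuDNN setup works
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())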

This installs the latest version of TensorFlow. If you want to install a specific older version, you can do it in the following way:

(tensorflow)$ pip install --upgrade tfBinaryURL   # Python 2.7
(tensorflow)$ pip3 install --upgrade tfBinaryURL  # Python 3.n

Here tfBinaryURL is the link to the version you want to install; click here to view the list:
For example, to install TensorFlow 1.2.1 for Python 2.7 (for other versions, just change the version number), you can execute the following command:

pip install --upgrade https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.2.1-cp27-none-linux_x86_64.whl

Here --upgrade means the command will overwrite the version you currently have installed. If you do not have sudo permission, you can install into your own directory with the --target parameter and then add that path to your environment variables; any other Python library can be installed this way as well.

pip install --target /home/huangyongye/my_python/ tensorflow-gpu

After installing, change your own environment variables:

vim ~/.bashrc

Then add your installation path inside the file:

PATH=/home/yongye/my_python:$PATH
export PATH

Then execute the following command:

source ~/.bashrc
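
One caveat from my side (not in the original post): Python looks up packages via PYTHONPATH rather than PATH, so for the --target install above you will likely also need the following (the path is taken from the example above), plus a quick import check:

# Make Python search the --target directory for packages
PYTHONPATH=/home/yongye/my_python:$PYTHONPATH
export PYTHONPATH
# Quick check that the package can be imported and prints its version
python -c "import tensorflow as tf; print(tf.__version__)"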

4. Configuring Keras

4.1 Setting the backend

vi ~/.keras/keras.json

If you use TensorFlow as the backend, write the following in keras.json:

{    "image_dim_ordering": "tf",     "epsilon": 1e-07, "floatx": "float32", "backend": "tensorflow"}
    • 1
    • 2
    • 3
    • 4
    • 5
    • 6

If you use Theano as the backend, write the following in keras.json:

{    "image_dim_ordering": "th",     "epsilon": 1e-07, "floatx": "float32", "backend": "theano"}
    • 1
    • 2
    • 3
    • 4
    • 5
    • 6

Here "image_dim_ordering": "th" means the Theano data format is used. For more on 'th' versus 'tf', see: some basic concepts of Keras.
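
To make the difference concrete (an illustrative sketch of mine, not from the original post): with 'th' ordering an image is laid out as (channels, rows, cols), while with 'tf' ordering it is (rows, cols, channels), so the input_shape passed to the first layer changes accordingly (Keras 1.x style, matching the keras.json keys used in this post):

# Illustrative only: the first convolution layer of a model for 32x32 RGB images
from keras.models import Sequential
from keras.layers import Convolution2D

model = Sequential()
# With "image_dim_ordering": "th" the shape would be (channels, rows, cols): input_shape=(3, 32, 32)
# With "image_dim_ordering": "tf" the shape is (rows, cols, channels):
model.add(Convolution2D(32, 3, 3, input_shape=(32, 32, 3)))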

4.2 Setting GPU memory usage to grow on demand

Refer to: TensorFlow settings for on-demand memory growth and memory fraction
If you use TensorFlow, you can set this in your code. For example:

# Make TensorFlow's GPU memory usage grow on demand.
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)

from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils
...
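
One note from my side (not in the original post): depending on the Keras version, Keras may create its own TensorFlow session, so to make sure the on-demand setting is actually picked up you can register the configured session with the Keras backend explicitly:

# Explicitly hand the configured session to Keras (Keras with the TensorFlow backend)
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
K.set_session(tf.Session(config=config))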

If you use Theano, create a .theanorc file in your home directory and write the following in it:

[global]
openmp = False
device = gpu
optimizer = fast_compile
floatX = float32
allow_input_downcast = True

[lib]
cnmem = 0.3

[blas]
ldflags = -lopenblas

[nvcc]
fastmath = True

Here the line cnmem = 0.3 means Theano will reserve 30% of the GPU's memory.
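
For comparison (my addition, not from the original post), TensorFlow 1.x has an analogous setting that caps the fraction of GPU memory a process may use:

# Cap TensorFlow at roughly 30% of the GPU memory, similar to Theano's cnmem = 0.3
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)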

5. Speed comparison

I timed a small Keras getting-started example, the "Hello World" of neural networks [translated]. The results are as follows:

    • Theano + GPU
      CPU times: user 7.49 s, sys: 100 ms, total: 7.59 s
      Wall time: 7.61 s

    • TensorFlow + GPU
      CPU times: user s, sys: 7.06 s, total: 55 s
      Wall time: 25.9 s

    • Theano + CPU
      CPU times: user 3.77 s, sys: 468 ms, total: 4.24 s
      Wall time: 26.5

On such a small task the speed advantage is not very visible; after all, the CPU has 12 cores and there is 64 GB of DDR4 memory, so even the CPU runs this small task quite fast.

There are still quite a few things I do not fully understand; I will update this post as my understanding improves.

