I. Preface
Today I found myself setting up an ultra-low-end graphics card, a GTX 730. I wondered whether even this card could use CUDA and cuDNN, and checking NVIDIA's official website confirmed that, sure enough, the GTX 730 is supported ^_^, so my 730 can use CUDA too. There are already plenty of blog posts online about installing CUDA + cuDNN + PyTorch/TensorFlow/Caffe. I am not writing this one to claim my method is better; I just want to say that the best way to install CUDA and cuDNN is to follow the English installation guide on NVIDIA's official website. So you can stop reading right here and go consult the English documentation instead. What?! You don't read English? Then what is Google Translate for? The real reason I wrote this post is to serve as a memo for myself.

II. Installing CUDA 9.0
Following the CUDA installation guide, verify the prerequisites step by step:
Check that your GPU supports CUDA: lspci | grep -i nvidia
Make sure your system is supported by CUDA: uname -m && cat /etc/*release
Make sure gcc is installed: gcc --version
Make sure your kernel version meets the requirement (Ubuntu 16.04 needs at least 4.4): uname -r
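The checks above can be bundled into one small script. This is just a sketch of my own, not something from the official guide; the 4.4 figure is the kernel minimum quoted above, and version_ge is a helper I made up:

```shell
#!/bin/sh
# Sketch of the prerequisite checks; the 4.4 kernel minimum is the
# Ubuntu 16.04 figure quoted above. version_ge is my own helper,
# not an official tool.

# version_ge A B: succeed if version A >= version B
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

lspci 2>/dev/null | grep -i nvidia || echo "no NVIDIA GPU visible"
uname -m
cat /etc/*release 2>/dev/null || true
if command -v gcc >/dev/null; then
    gcc --version | head -n 1
else
    echo "gcc not installed"
fi

kernel=$(uname -r | cut -d- -f1)
if version_ge "$kernel" 4.4; then
    echo "kernel $kernel meets the 4.4 minimum"
else
    echo "kernel $kernel is below the 4.4 minimum"
fi
```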
Install CUDA:
Download CUDA 9.0 from the official website, selecting the options that match your system. For the installer type I recommend choosing deb (network), which I found the simplest; once you have made your selections, the site shows you the exact installation commands. Note that with this method you do not need to install the driver manually; it is installed for you.
sudo dpkg -i cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
Adding environment variables
The shell I use is zsh, so I add the following lines to .zshrc; remember to run source on it afterwards:
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-9.0
export PATH=/usr/local/cuda-9.0/bin:$PATH
If the shell you use is bash, add the same lines to .bashrc instead.
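After sourcing the rc file, it is easy to forget whether the new PATH actually took effect in the current shell. Here is a small sanity check; check_path_contains is a helper of my own, not part of any official setup:

```shell
#!/bin/sh
# Sketch: verify that the exports above are active in the current
# shell. check_path_contains is my own helper, not an NVIDIA tool.

# check_path_contains DIR LIST: succeed if the colon-separated LIST
# contains DIR as an exact entry
check_path_contains() {
    case ":$2:" in
        *":$1:"*) return 0 ;;
        *)        return 1 ;;
    esac
}

if check_path_contains /usr/local/cuda-9.0/bin "$PATH"; then
    echo "PATH already includes the CUDA bin directory"
else
    echo "PATH is missing /usr/local/cuda-9.0/bin; did you source your rc file?"
fi
```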
Testing CUDA: run nvidia-smi; if it prints your GPU's information, the installation succeeded. Then execute, in order: cd /usr/local/cuda/samples/1_Utilities/deviceQuery, sudo make, ./deviceQuery. If no error is reported, the installation succeeded.
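If you would rather not eyeball the output, the deviceQuery verdict can be checked mechanically: on success it prints a line containing "Result = PASS". A sketch, assuming the default samples path used above; check_devicequery_output is my own helper:

```shell
#!/bin/sh
# Sketch: build and run deviceQuery, then check its verdict line.
# A successful run prints a line containing "Result = PASS".

# check_devicequery_output OUTPUT: succeed if OUTPUT reports PASS
check_devicequery_output() {
    printf '%s\n' "$1" | grep -q "Result = PASS"
}

cd /usr/local/cuda/samples/1_Utilities/deviceQuery 2>/dev/null && {
    sudo make
    out=$(./deviceQuery)
    if check_devicequery_output "$out"; then
        echo "deviceQuery: PASS"
    else
        echo "deviceQuery: FAILED"
    fi
} || echo "CUDA samples directory not found"
```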
III. Installing cuDNN

Go to the cuDNN page on NVIDIA's website and download the version corresponding to your CUDA installation.
Unzip the downloaded archive, then copy the files into the CUDA directory with the following commands:
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
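To confirm which cuDNN version ended up in place, you can read it straight out of the copied header: cudnn.h defines CUDNN_MAJOR, CUDNN_MINOR and CUDNN_PATCHLEVEL. A sketch, with cudnn_version being a helper of my own:

```shell
#!/bin/sh
# Sketch: read the cuDNN version from a cudnn.h-style header, which
# defines CUDNN_MAJOR, CUDNN_MINOR and CUDNN_PATCHLEVEL.
# cudnn_version is my own helper, not an NVIDIA tool.

# cudnn_version FILE: print "major.minor.patch" from the header
cudnn_version() {
    awk '/#define CUDNN_MAJOR/      {maj = $3}
         /#define CUDNN_MINOR/      {min = $3}
         /#define CUDNN_PATCHLEVEL/ {pat = $3}
         END {print maj "." min "." pat}' "$1"
}

if [ -f /usr/local/cuda/include/cudnn.h ]; then
    cudnn_version /usr/local/cuda/include/cudnn.h
else
    echo "cudnn.h not found under /usr/local/cuda/include"
fi
```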
IV. Installing PyTorch-GPU
Very simple: go straight to the PyTorch official website and download it; there are detailed instructions there.

V. Summary

I ran a two-layer CNN on the MNIST data set. At first, memory blew up as soon as it started running... Just as I was about to give up, I tried shrinking the batch size, and adjusted the test set to be smaller as well, and then it worked ^_^. I had heard people say that CUDA + TensorFlow/Caffe has many pitfalls, and there are indeed plenty of blog posts about them online, but when I installed from the official documentation I did not run into many. So, as I said at the beginning, try not to rely on this blog post; I only wrote it down as a memo for myself. Refer to the official documentation whenever you can. Good luck to us all.