Ubuntu 16.04: configuring pytorch-gpu + cuda9.0 + cudnn on a low-end GTX 730 graphics card
I. Preface
Today, with nothing better to do, I decided to try setting up my low-end GTX 730. I wasn't sure whether every graphics card can use cuda + cudnn, so I checked the NVIDIA official website. It turns out that even my humble GTX 730 is on the supported list, so CUDA can indeed be used with the 730.
There are many blog posts online about installing cuda + cudnn + pytorch/tensorflow/caffe. By writing another one, I am not claiming my method is better. I just want to tell you that the best way to install cuda + cudnn is to go to the NVIDIA official website and follow the English installation guide — so you don't really need to read any further; go check the English documentation! What, you can't read English? That's what Google Translate is for. I wrote this blog mainly as a memorandum for myself.
II. Install cuda9.0
Follow the official CUDA installation guide.
Check installation conditions
lspci | grep -i nvidia          # check that your GPU supports CUDA
uname -m && cat /etc/*release   # ensure that your system supports CUDA
gcc --version                   # ensure that gcc is installed
uname -r                        # ensure that your kernel version meets the requirements (Ubuntu 16.04 needs at least 4.4)
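The four checks above can be bundled into one small script. This is an illustrative sketch of my own — the helper name `check_prereqs` and the output format are not from the CUDA guide:

```shell
# Hypothetical helper: run the pre-install checks in one go.
check_prereqs() {
  echo "== GPU =="
  lspci 2>/dev/null | grep -i nvidia || echo "no NVIDIA GPU detected"
  echo "== OS =="
  uname -m
  echo "== gcc =="
  if command -v gcc >/dev/null 2>&1; then
    gcc --version | head -n 1
  else
    echo "gcc not installed"
  fi
  echo "== kernel =="
  uname -r
}
check_prereqs
```

If any section reports something missing, fix that first; the CUDA installer will not save you from a missing gcc or an unsupported kernel.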
Install cuda:
Download cuda9.0 from the official website. I recommend selecting deb (network) as the installation method after choosing the options that match your system; once everything is selected, the installation commands appear below the selector, which makes this very simple. Note that this installation method does not require you to install the driver manually — it installs the driver for you.
sudo dpkg -i cuda-repo-ubuntu1604_9.1.85-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
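Before running the final `apt-get install`, you can ask apt which version of the `cuda` package the repository now offers. The helper below is my own illustrative wrapper, not part of the official steps:

```shell
# Hypothetical check: show apt's candidate version for the cuda package.
cuda_candidate() {
  if command -v apt-cache >/dev/null 2>&1; then
    apt-cache policy cuda 2>/dev/null | grep Candidate \
      || echo "cuda package not known to apt"
  else
    echo "apt-cache not available"
  fi
}
cuda_candidate
```

If the candidate is empty or missing, the repository was not added correctly — re-run the `dpkg -i` and `apt-get update` steps.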
Add Environment Variables
The shell I use is zsh, so I add the following to ~/.zshrc. After saving, remember to run `source ~/.zshrc`.
export LD_LIBRARY_PATH=/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-9.0
export PATH=/usr/local/cuda-9.0/bin:$PATH
If you use bash instead, add the same lines to ~/.bashrc.
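A quick sanity check that the paths referenced by those exports actually exist. The helper name `check_cuda_paths` is my own; the default path matches the exports above:

```shell
# Hypothetical helper: verify the CUDA install prefix the exports point at.
check_cuda_paths() {
  cuda_home="${1:-/usr/local/cuda-9.0}"
  for p in "$cuda_home/bin/nvcc" "$cuda_home/lib64"; do
    if [ -e "$p" ]; then
      echo "ok: $p"
    else
      echo "missing: $p"
    fi
  done
}
check_cuda_paths
```

If anything prints `missing`, re-check where CUDA actually installed before debugging your shell configuration.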
Test cuda
Run nvidia-smi; it should display your GPU information. Then execute the following in sequence:
cd /usr/local/cuda/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery
If no error is reported and the device is detected, the installation succeeded.
III. Install cudnn
Go to the cudnn official website to download the corresponding version.
Decompress the downloaded archive and run the following commands to copy the files into the cuda directory:
sudo cp cuda/include/cudnn.h /usr/local/cuda/include
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
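To confirm the copy worked, you can print the version macros from the cudnn.h you just installed. The helper name `cudnn_version` is mine; the default path is where the copy commands above put the header:

```shell
# Hypothetical helper: print the cuDNN version recorded in cudnn.h.
cudnn_version() {
  grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' \
    "${1:-/usr/local/cuda/include/cudnn.h}" 2>/dev/null \
    || echo "cudnn.h not found"
}
cudnn_version
```

The three numbers printed (MAJOR, MINOR, PATCHLEVEL) together give the installed cuDNN version.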
IV. Install pytorch-gpu
Very simple: go to the official PyTorch website, select your platform and CUDA version, and run the install command it generates for you.
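After installation, a quick smoke test tells you whether PyTorch can actually see the GPU. The wrapper below is my own sketch (it only assumes `torch.cuda.is_available()`, which is PyTorch's standard check):

```shell
# Hypothetical smoke test: ask PyTorch whether CUDA is usable end to end.
torch_check() {
  py=$(command -v python3 || command -v python) \
    || { echo "no python found"; return 0; }
  if "$py" -c "import torch" 2>/dev/null; then
    "$py" -c "import torch; print('cuda ok' if torch.cuda.is_available() else 'cpu only')"
  else
    echo "torch not installed"
  fi
}
torch_check
```

If it prints `cpu only`, the CUDA toolkit, driver, and PyTorch build versions do not match — re-check the version you selected on the PyTorch site.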
V. Summary
I ran a two-layer CNN with MNIST as the data. At first it wouldn't run at all — the GPU memory overflowed. Just as I was about to give up, I tried reducing the batch size a little and shrinking the test set a little, and it worked. I had heard people say that installing cuda + tensorflow/caffe is full of pitfalls, and there are many blog posts about it online, yet when I installed everything by following the official documents I found very few pitfalls. So, as I said at the beginning: try not to rely on my blog. I wrote it only as a memorandum for myself — refer to the official documentation as much as possible.