Using the specified GPU and GPU memory in TensorFlow


Contents:
1 Setting the GPU used when executing the program from the terminal
2 Setting the GPU used in Python code
3 Setting the memory size used by TensorFlow
 3.1 Setting the video memory quantitatively
 3.2 Allocating video memory on demand

Please credit the original source when reposting:

http://www.cnblogs.com/darkknightzh/p/6591923.html

Reference URLs:

http://stackoverflow.com/questions/36668467/change-default-gpu-in-tensorflow

http://stackoverflow.com/questions/37893755/tensorflow-set-cuda-visible-devices-within-jupyter

1 Setting the GPU used when executing the program from the terminal

If the machine has multiple GPUs, TensorFlow uses all of them by default. To use only some of the GPUs, set the CUDA_VISIBLE_DEVICES environment variable when launching the Python program (see Franck Dernoncourt's answer at the first reference URL):

CUDA_VISIBLE_DEVICES=1 python my_script.py

Environment Variable Syntax       Results

CUDA_VISIBLE_DEVICES=1            only device 1 will be seen
CUDA_VISIBLE_DEVICES=0,1          devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"        same as above; quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3        devices 0, 2, 3 will be visible; device 1 is masked
CUDA_VISIBLE_DEVICES=""           no GPU will be visible
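One point implied by the table and worth making explicit: CUDA renumbers the devices that remain visible starting from 0, so the process's /gpu:0 may be a different physical card than the system's GPU 0. A minimal pure-Python sketch of that renumbering (illustrative only, not a CUDA API):

```python
def visible_gpus(env_value):
    """Simulate how CUDA_VISIBLE_DEVICES maps physical GPU ids to the
    logical ids a process sees: listed devices are renumbered from 0."""
    if env_value.strip('"') == "":
        return {}
    physical = [int(x) for x in env_value.strip('"').split(",")]
    return {logical: phys for logical, phys in enumerate(physical)}

print(visible_gpus("0,2,3"))  # logical 0,1,2 -> physical 0,2,3
print(visible_gpus(""))       # no GPU visible -> empty mapping
```

So with CUDA_VISIBLE_DEVICES=0,2,3, the process's logical device 1 is actually physical GPU 2.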

2 Setting the GPU used in Python code

If you want to set the GPU from within the Python code (for example, when debugging in PyCharm), you can use the following (see Yaroslav Bulatov's answer at the second reference URL):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
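As the table in section 1 shows, assigning an empty string hides every GPU; the same works from Python, with the caveat (discussed in the referenced Stack Overflow thread) that the variable must be set before TensorFlow initializes CUDA, which in practice means before importing TensorFlow. A sketch:

```python
import os

# Must run before "import tensorflow": once TensorFlow has initialized
# CUDA, changing the variable no longer affects device visibility.
os.environ["CUDA_VISIBLE_DEVICES"] = ""  # empty string -> no GPU visible

# import tensorflow as tf  # import only after the variable is set

print(os.environ["CUDA_VISIBLE_DEVICES"])
```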

3 Setting the memory size used by TensorFlow

3.1 Setting the video memory quantitatively

By default, TensorFlow grabs as much GPU memory as possible. The fraction of GPU memory it may use can be set as follows:

gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

The GPU memory allocated to TensorFlow above is: actual GPU memory × 0.7.

Adjust the value to allocate as much video memory as needed.
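Since the option takes a fraction rather than a byte count, a small helper can translate a memory target into the value to pass. This is a hypothetical convenience function, not part of the TensorFlow API, and the 12 GB card in the example is likewise an assumption:

```python
def memory_fraction(target_mb, total_mb):
    """Compute a per_process_gpu_memory_fraction value that caps
    TensorFlow at roughly target_mb on a GPU with total_mb of memory.
    (Illustrative helper; not part of TensorFlow.)"""
    frac = target_mb / float(total_mb)
    if not 0.0 < frac <= 1.0:
        raise ValueError("target must fit within total GPU memory")
    return frac

# e.g. cap TensorFlow at ~8 GB on a 12 GB card:
print(round(memory_fraction(8192, 12288), 3))  # 0.667
```

The resulting fraction would then be passed as per_process_gpu_memory_fraction in the GPUOptions shown above.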

========================================================================

170703 update: 3.2 Allocating video memory on demand

The method above can only reserve a fixed fraction. To allocate memory on demand instead, use the allow_growth option (reference URL: http://blog.csdn.net/cq361106306/article/details/52950081):

gpu_options = tf.GPUOptions(allow_growth=True)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
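To make the behavior concrete: with allow_growth the process starts with almost no video memory reserved and extends its reservation as tensors are allocated, but TensorFlow does not hand memory back to the system while the process lives. A toy pure-Python model of that policy (illustrative only; the class and numbers are invented, not TensorFlow internals):

```python
class GrowOnDemand:
    """Toy model of allow_growth: reserve nothing up front, extend the
    reservation when cumulative allocations exceed it, never shrink."""
    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.reserved_mb = 0
        self.in_use_mb = 0

    def allocate(self, mb):
        if self.in_use_mb + mb > self.total_mb:
            raise MemoryError("allocation exceeds GPU capacity")
        self.in_use_mb += mb
        # grow the reservation only when current usage demands it
        self.reserved_mb = max(self.reserved_mb, self.in_use_mb)

    def free(self, mb):
        self.in_use_mb -= mb  # usage drops, the reservation stays put

gpu = GrowOnDemand(total_mb=12288)
gpu.allocate(1024)
gpu.free(512)
print(gpu.reserved_mb)  # 1024 -- freeing tensors does not shrink it
```

This is why long-running processes with allow_growth can still end up holding most of the card: the reservation tracks the high-water mark of usage.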
