These days GPUs are used everywhere to speed up computation, and demand for them keeps climbing. With graduation season approaching, everyone is running experiments, and our lab's server is completely overwhelmed: a crowd of people sharing it, every card maxed out. A rough estimate put 100 training iterations of my model at 3 to 4 days, which is hardly worth it. Since there happens to be an idle GPU deep learning server next door, I decided to put it to use.
I am still fairly new to deep learning, so I went with the simplest option, Keras. There are plenty of guides online for configuring GPU acceleration with TensorFlow and Theano, but whether they carry over to Keras directly was unclear to me, so today I ran a simple test.
First, let's look at the GPU configuration on the server:
nvidia-smi
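(On a shared machine it can also help to keep an eye on usage while jobs run; something like watch -n 1 nvidia-smi refreshes the readout every second.)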
From the cards it lists, you can see that we need to specify which GPU to run on, so as not to compete with other users for resources.
I'm using Python, so at the beginning of the code I add:
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
This makes only card 0 visible to the process. If you want to use several cards, you can write:
os.environ["Cuda_visible_devices"]= "0,1,2"
Of course, if you want finer-grained control over how much GPU memory is used, you can also write:
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # use at most 50% of the GPU's memory
session = tf.Session(config=config)
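Two details I'd add here, assuming Keras 2.x on the TensorFlow 1.x backend: the session created above still has to be handed to Keras before it is of any use, and TensorFlow also offers allow_growth, which grabs memory on demand instead of reserving a fixed fraction up front. A sketch:

import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5  # hard cap at 50% of the card's memory
config.gpu_options.allow_growth = True                    # and/or allocate incrementally as needed
K.set_session(tf.Session(config=config))                  # make Keras use this session

This way everything Keras runs afterwards respects the memory limits, which matters a lot on a server shared with other users.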
I'm still a deep learning novice, so comments and corrections are welcome.