Artificial intelligence and cryptocurrency were two of the most important buzzwords of 2017. As a developer, how can you make full use of your expensive hardware? This article offers a fun solution.
Without GPUs, modern deep learning could not have developed to today's level. Even a simple example algorithm on the MNIST dataset runs 10-100 times faster on a GPU than on a CPU. But when you are not training anything, the GPU just sits idle, burning power.
Now that we have a powerful computing device, it is natural to consider cryptocurrency mining. In fact, it is not that hard: all you need is to register a wallet, choose a currency, set up a mining program, and run it. Just google "how to start mining with a GPU" and you will find plenty of introductory articles that walk you through it.
Optimizing efficiency
In this article, we add one more requirement: mining should be easy and automated, and it must not disturb my work when I need the computer to run deep learning models at full speed. The ideal solution would let the computer continuously check the resource usage of the GPUs, start mining automatically when no process is using them, and stop mining immediately when TensorFlow, PyTorch, or another tool needs to start computing.
The problem sounds simple enough, but I could not find anything like this on the internet, so I wrote a GPU monitor myself. It is not only suitable for mining; it can be used for various other background tasks as well.
Note: before you begin, you must understand one thing. Please do not apply this computational resource optimization method to your office computer; I am not responsible for the consequences of any abuse.
Prerequisites
First of all, my project gpu_mon and its source code have been published on GitHub: https://github.com/Shmuma/gpu_mon.
It is written in Python 3 and uses nothing outside the standard library, but it needs to run on Linux, so if your machine runs Windows, I'm sorry.
gpu_mon's logic is as follows: it periodically checks the load on the GPUs, and if no process is using them, it automatically runs the program you selected in the config file. If another program opens the GPU, the configured program is automatically stopped to free the resources. So once you have set everything up, you just turn on the monitor and use the machine as usual, with only a few seconds of delay between running the miner and running a deep learning model at full speed.
To get the list of processes connected to the GPU devices (which appear as /dev/nvidia*), we use the fuser command-line tool. On Debian-based distributions (such as Ubuntu or Debian), the tool is provided by the psmisc package, which is part of the base system installation, so no special repository is needed. If fuser is missing on your system (you can check by running "which fuser" in the shell), you need to install it first.
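To illustrate the idea, here is a minimal sketch of such a monitor loop in Python. It is not gpu_mon's actual code: the miner path and the interval are made-up examples, and the real tool adds per-GPU grouping, ignore lists, and logging on top of this.

# A minimal sketch of the monitoring idea, not gpu_mon's actual code:
# ask fuser which processes hold the /dev/nvidia* devices open, then
# start or stop a worker process accordingly.
import glob
import re
import subprocess
import time

MINER_CMD = ["/var/bin/miner"]   # hypothetical miner binary
CHECK_INTERVAL = 10              # seconds, like interval_seconds below

def gpu_pids(device):
    # fuser prints the PIDs to stdout; everything else goes to stderr
    res = subprocess.run(["fuser", device], capture_output=True, text=True)
    return {int(p) for p in re.findall(r"\d+", res.stdout)}

miner = None
while True:
    own = {miner.pid} if miner else set()
    busy = any(gpu_pids(dev) - own for dev in glob.glob("/dev/nvidia[0-9]*"))
    if busy and miner is not None:
        miner.terminate()                    # someone needs the GPU: stop mining
        miner.wait()
        miner = None
    elif not busy and miner is None:
        miner = subprocess.Popen(MINER_CMD)  # GPUs are idle: start mining
    time.sleep(CHECK_INTERVAL)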
Configuration
The entire configuration of the project is done in a separate configuration file in INI format, located at ~/.config/gpu_mon.conf. A sample configuration file is shown below and can also be found on GitHub.
The configuration file contains four sections:
1. Global default settings, [defaults]. There is only one option, which specifies how often the monitor checks GPU usage. By default, all GPUs in the system are checked every 10 seconds.
2. GPU settings, in sections with a gpu- prefix. You can specify any GPU, or group GPUs by their IDs (the integer X in the /dev/nvidiaX device file). For each GPU group you can set up a list of programs that should not preempt the mining task. This lets us handle tools such as nvidia-smi, which open GPU devices but should not interrupt mining.
3. Mining process configuration, in one or more sections with a process- prefix. For each entry you can specify the mining program to run, the directory to run it in, which GPUs the program may use (this is implemented by exporting the CUDA_VISIBLE_DEVICES environment variable), and the log file that captures the miner's output.
4. TTY monitoring section, which lets you enable monitoring of pseudo-terminals, giving their activity priority over the mining process. By default, this option is turned off.
Here is my mining configuration file for a machine with two GPUs:
[defaults]
; How frequently to perform GPU and TTY checks
interval_seconds=10
; Configuration of GPUs to monitor for external program access. There could be several such sections
[gpu-all]
; List of comma-separated GPU indices or all to handle all available GPUs
gpus=all
; Comma-separated list of programs which can access GPU and should be ignored
ignore_programs=nvidia-smi
; Program which will be started on the GPU during idle time
[process-0]
dir=/tmp
cmd=/var/bin/miner
; List of GPU indices or all to handle all available GPUs. If not all, CUDA_VISIBLE_DEVICES will be set
gpus=0
log=/var/log/miner-0.log
[process-1]
dir=/tmp
cmd=/var/bin/miner
; List of GPU indices or all to handle all available GPUs. If not all, CUDA_VISIBLE_DEVICES will be set
gpus=1
log=/var/log/miner-1.log
; Configuration of TTY monitoring
[tty]
enabled=false
These settings let us control the compute usage of every GPU and every process in the system. For example, we can make a deep learning process occupy only the first GPU (by exporting CUDA_VISIBLE_DEVICES=0) without disturbing the mining process on the second GPU, and if we need to free all the resources, the mining processes on both GPUs will be stopped.
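As a quick illustration, here is how a training script can be pinned to the first GPU. Setting CUDA_VISIBLE_DEVICES before any CUDA initialization is the standard mechanism; the torch import is just one example framework.

# Make this training process see only the first GPU, leaving GPU 1 free
# to keep mining. The variable must be set before the framework touches CUDA.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch
print(torch.cuda.device_count())  # prints 1: only /dev/nvidia0 is visible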
As said before, everything is transparent: you don't need to spend any effort starting and stopping the miner, and can focus entirely on your TensorFlow and PyTorch work. Now you can start using gpu_mon and collect the profit.
Start gpu_mon automatically
So that gpu_mon itself needs no babysitting, we should make sure it runs automatically in the background after the system starts. There are many tools for this, but I prefer supervisord, a small process manager that watches the programs it runs and automatically restarts them if they stop responding. To get gpu_mon working, it is enough to install supervisord and put a config in /etc/supervisor/conf.d/gpu_mon.conf.
Here are my settings:
[program:gpu_mon]
command=/usr/bin/python3 <path_to_cloned_gpu_mon>/gpu_mon/gpu_mon.py
user=<your_username>
environment=HOME=<your_home_path>,USER=<your_username>
autostart=true
autorestart=true
That's it. Now you can restart supervisord and check whether gpu_mon has started (command: supervisorctl status gpu_mon); the response should look like this:
root@gpu:/etc/supervisor/conf.d# supervisorctl status gpu_mon
gpu_mon RUNNING pid 1526, uptime 18:14:27
Multi-user
If gpu_mon is started by one user and another user begins to run deep learning software, gpu_mon cannot stop the mining process. This is because the fuser command is subject to security restrictions: it does not show one user the processes belonging to another user. If you still need gpu_mon in a multi-user situation (and again, be careful not to do this on a shared computer), you have the following two options:
1. Run gpu_mon with root permissions, which is not recommended in any case.
2. Set the SUID bit on the fuser binary. On Ubuntu, we can do this by running chmod +s /bin/fuser. With the SUID bit set, the binary runs with the credentials of its owner (root) even when started by another user, so every user can see all processes holding the GPU devices. Setting the bit still requires root permissions, though.
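If you want to verify the bit was actually applied, a tiny check like the following will do (just a sketch; the path assumes Ubuntu's /bin/fuser):

# Check whether the SUID bit is set on the fuser binary.
import os
import stat

mode = os.stat("/bin/fuser").st_mode
print("SUID set" if mode & stat.S_ISUID else "SUID not set")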
Which currency to mine?
There is a wide variety of cryptocurrencies at the moment, thanks to Bitcoin's boom this year. My personal favorites are Equihash-based currencies such as Zcash and Komodo, which can all be mined with the same program. I use a modified version of the EWBF miner for the job, which is 10% faster than the original.
Link: https://mega.nz/#!4iIClQ4D!tbff8HgrT5Pii8yMDXf9eZd5yFSrwOALHnjsaW7NlWU
As mentioned earlier, gpu_mon does not mine by itself; it only tracks GPU processes, so you can use any CUDA-optimized miner with it.
Does it make money?
Of course it does, but don't expect a single Nvidia GTX 1080 to make you millions of dollars. Mining difficulty keeps growing, yet income minus the electricity bill is still positive, so why not? Here is a calculator that estimates mining income from the current difficulty, transaction fees, and your computing resources: https://whattomine.com/
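The arithmetic behind such calculators is simple. Here is a back-of-the-envelope version with made-up numbers; plug in current figures from whattomine.com and your own electricity price:

# Rough daily profit estimate for one GPU; all numbers are examples.
daily_revenue_usd = 2.10       # miner revenue per day (from a calculator)
power_watts = 180              # GPU power draw while mining
price_per_kwh_usd = 0.12       # your electricity price

daily_power_cost = power_watts / 1000 * 24 * price_per_kwh_usd
print(f"power: ${daily_power_cost:.2f}/day, "
      f"profit: ${daily_revenue_usd - daily_power_cost:.2f}/day")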
Summary
That's it. If you find gpu_mon interesting, you can donate a coin or two to the author or support the project on GitHub.
Original link: https://medium.com/mlreview/using-your-idle-deep-learning-hardware-for-mining-c1b9887491fa