Connecting to the Server from Windows: Xshell + Xftp over SSH
- Connect to the lab server via SSH
- SSH connections should already be familiar; they come up often with GitHub and in OS courses
- The server currently in use is 192.168.7.169
- Tools: Xshell and Xftp
- Use Xshell to connect to the server and run commands; each node of the server runs Ubuntu 16.04 LTS
- Use Xftp to manage files
- Resources:
Xshell+xftp SSH Tunnel Proxy
Xshell connection to Linux server via SSH key and SSH proxy
macOS: Terminal + Cyberduck
Since the computer at my lab desk is a Mac, I had to get familiar with a new set of tools.
- Use the Terminal to establish an SSH remote connection
- Use Cyberduck to manage files over SFTP (FileZilla is also worth considering)
- Resources:
How to use SSH to connect to a remote Linux server under Mac (including Cyberduck download)
Using the Mac's built-in Terminal SSH feature
Setting Up the Environment: virtualenv
- Set up a virtual environment and install packages into it (Anaconda is also an option)
Create the environment: virtualenv xxx_py
Or pin the interpreter: virtualenv -p python3 xxx_py
Enter the environment: source xxx_py/bin/activate
Exit: deactivate
- Use Tsinghua Mirror
- Temporary use
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple some-package
- Set as Default
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
- Resources:
Tsinghua PyPI Mirror Use Help
VIRTUALENV Introduction and basic use
One of the must-have tools for Python development: virtualenv
virtualenv - Liao Xuefeng's Python tutorial
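After running source xxx_py/bin/activate, you can confirm from inside Python which interpreter is active. A minimal sketch (the environment name xxx_py above is just a placeholder; this check itself is a generic technique, not part of the workflow described here):

```python
import sys

# Inside an active virtual environment, sys.prefix points at the env
# directory, while sys.base_prefix still points at the base install.
# (Older virtualenv releases set sys.real_prefix instead.)
in_venv = (sys.prefix != getattr(sys, "base_prefix", sys.prefix)
           or hasattr(sys, "real_prefix"))
print("interpreter:", sys.executable)
print("inside a virtual environment:", in_venv)
```

If the printed interpreter path points into xxx_py/bin, the environment is active.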
Running TensorFlow Code on the GPU
- GPU Occupancy Issues
By default, TensorFlow grabs all the memory on every GPU it can see
- To view GPU usage:
gpustat
- In the Python code, add (before TensorFlow initializes the GPUs):
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'    # use GPU 0 only
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'  # use GPUs 0 and 1
- To restrict the code to fixed GPUs:
CUDA_VISIBLE_DEVICES=1       Only device 1 will be visible
CUDA_VISIBLE_DEVICES=0,1     Devices 0 and 1 will be visible
CUDA_VISIBLE_DEVICES="0,1"   Same as above; the quotation marks are optional
CUDA_VISIBLE_DEVICES=0,2,3   Devices 0, 2, and 3 will be visible; device 1 is masked
When running the code:
CUDA_VISIBLE_DEVICES=0 python3 main.py
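One subtlety worth noting: the devices listed in CUDA_VISIBLE_DEVICES are renumbered from 0 inside the process. A small pure-Python sketch of that mapping (visible_devices is a hypothetical helper for illustration only; the real masking happens in the CUDA driver):

```python
def visible_devices(cuda_visible_devices):
    """Model how CUDA renumbers the devices in CUDA_VISIBLE_DEVICES.

    Returns {logical_id_seen_by_the_process: physical_device_id}.
    Hypothetical helper, not part of the CUDA API.
    """
    ids = [int(tok) for tok in cuda_visible_devices.strip('"').split(",") if tok]
    return dict(enumerate(ids))

# CUDA_VISIBLE_DEVICES=0,2,3: device 1 is masked, and physical devices
# 2 and 3 appear to the process as logical devices 1 and 2.
print(visible_devices("0,2,3"))   # -> {0: 0, 1: 2, 2: 3}
print(visible_devices('"0,1"'))   # quotes are optional -> {0: 0, 1: 1}
```

So with CUDA_VISIBLE_DEVICES=0,2,3, TensorFlow's '/gpu:1' actually refers to physical device 2.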
- TensorFlow provides two ways to control GPU memory:
- Option 1: allocate memory on demand at runtime, growing only as needed
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
- Option 2: cap the per-process allocation at a fixed fraction of each GPU's memory
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.4)
config = tf.ConfigProto(gpu_options=gpu_options)
session = tf.Session(config=config)
TensorFlow Code
For now I have not manually assigned each part of the code to a specific GPU or CPU.
Instead, the whole network structure is wrapped in with tf.device(self.device):
and config = tf.ConfigProto(gpu_options=gpu_options, allow_soft_placement=True)
is used so that TensorFlow falls back to a valid device on its own.
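Putting the pieces together, a minimal sketch under the TensorFlow 1.x API (the device string '/gpu:0', the GPU index, and the 0.4 memory fraction are illustrative assumptions, not values from the notes above):

```python
import os
import tensorflow as tf  # TensorFlow 1.x API (tf.ConfigProto, tf.Session)

# Mask the GPUs before TensorFlow initializes them.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'

# Cap memory at 40% of each visible GPU and grow allocations on demand.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.4,
                            allow_growth=True)
# allow_soft_placement lets TensorFlow pick a valid device when the
# requested one is unavailable (e.g. no GPU on this machine).
config = tf.ConfigProto(gpu_options=gpu_options,
                        allow_soft_placement=True)

# '/gpu:0' refers to the first *visible* device after masking.
with tf.device('/gpu:0'):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

with tf.Session(config=config) as sess:
    print(sess.run(c))
```

This is a session-configuration sketch and requires a TensorFlow 1.x install with CUDA to run on a GPU; with soft placement it also runs on a CPU-only machine.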
Resources:
TensorFlow setting up GPU and GPU memory usage
TensorFlow using the GPU
A small TensorFlow GPU test
About using the lab server's GPUs and running TensorFlow code