Use Xshell + Xmanager + PyCharm to build a PyTorch remote debugging development environment


1. Related software versions

Xshell:

Xmanager:

PyCharm:

PyCharm license server (crack): https://jetlicense.nss.im/

2. Install the software above (already cracked/activated)

a> Start Xmanager - Passive, which accepts the X11 connections that Linux forwards over:

b> Configure Xshell to forward X11 to the Windows machine over an SSH tunnel.

After the connection is established, run echo $DISPLAY on the server; the output looks like this:

c> Once this is set up, graphical programs on Linux can be forwarded to the Windows machine. For example, running eog 1.jpg on the Linux command line displays 1.jpg in a window on Windows:

You can see that Linux has successfully forwarded X11 to Windows through the SSH tunnel; the X server on the receiving end is the one provided by Xmanager:
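The same check can also be done from Python once the remote interpreter is configured; this is a minimal sketch that simply reads the DISPLAY environment variable (the localhost:11.0 value shown later is just an example, use whatever echo $DISPLAY prints on your server):

import os

# The DISPLAY variable tells X11 clients where to send their windows.
# With Xshell's X11 forwarding enabled it typically looks like "localhost:11.0".
display = os.environ.get('DISPLAY')
if display:
    print('X11 forwarding is set up, DISPLAY =', display)
else:
    print('DISPLAY is not set; X11 windows cannot be forwarded to Windows.')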

3. Configure PyCharm to use the Python environment on Linux remotely over SSH, and to perform remote debugging and code synchronization:

A> Create a new Python project and select its location:

B> In File->Settings, set up a remote Linux server:

Select Add, then fill in the IP address, user name, and password of the remote server.

Set the mapping between Windows directories and Linux directories under Path mappings; this makes code synchronization between Windows and Linux straightforward:

On the Settings->Tools->Python Scientific page, turn "Show plots in tool window" off. This is needed so that PyCharm lets the plot windows from Linux be forwarded over X11 to Xmanager instead of capturing them inline:
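If you are unsure which backend matplotlib will actually use on the remote interpreter, a minimal check is to print it; an interactive backend such as TkAgg or Qt5Agg means the figure opens as a real X11 window that Xshell can forward:

import matplotlib

# An interactive backend (e.g. TkAgg, Qt5Agg) opens a real X11 window,
# which the SSH tunnel then carries to Xmanager on Windows.
print('matplotlib backend:', matplotlib.get_backend())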

C> Set up source code synchronization:

At this point PyCharm, Xshell, and Xmanager are set up to work together. Next, write a test script in PyCharm: PyCharm submits it to the Linux machine, Linux forwards the plot window to Windows via the X11 protocol, and the picture is displayed on Windows. The code of line.py is as follows:

import matplotlib.pyplot as plt
import numpy as np

plt.plot(range(5), linestyle='--', linewidth=3, color='purple')
plt.ylabel('Numbers from 1-5')
plt.xlabel('Love Yohanna')
plt.show()
# plt.clf()
# plt.close()

N = 1000
x = np.random.randn(N)
y = np.random.randn(N)
z = np.random.randn(N)
plt.scatter(z, y, color='yellow', linewidths=0.05)
plt.scatter(x, y, color='green', linewidths=0.2)
plt.axis([-4, 4, -4, 4])
plt.show()

Set the environment variables in the PyCharm run configuration for line.py:

Add the DISPLAY variable to the list of environment variables with a value of localhost:11.0 (the exact value is whatever echo $DISPLAY prints in the Linux shell after the Xshell X11 forwarding rule has been set up).
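Optionally, line.py can be made robust to a missing DISPLAY by falling back to the non-interactive Agg backend and saving the figure to a file instead of crashing. This is only a minimal sketch, and the fallback file name is just an example:

import os
import matplotlib

# If the run configuration did not pass DISPLAY, there is no X server to draw on,
# so switch to the file-only Agg backend before pyplot is imported.
if not os.environ.get('DISPLAY'):
    matplotlib.use('Agg')

import matplotlib.pyplot as plt

plt.plot(range(5), linestyle='--', linewidth=3, color='purple')
if os.environ.get('DISPLAY'):
    plt.show()                         # window is forwarded to Xmanager
else:
    plt.savefig('line_fallback.png')   # no display: save to a file instead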

Click the Run button to see the effect of drawing in Windows:

4. Set PyCharm to use Emacs editing mode: File->Settings->Keymap->Emacs

5. Build the PyTorch + CUDA environment:

a> Install PyTorch: sudo pip install torch torchvision (note that the PyPI package is named torch, not pytorch; torchvision is needed for the MNIST example below)

b> Download and install CUDA:

https://developer.nvidia.com/cuda-toolkit-archive

The default installation location for CUDA is /usr/local/cuda-8.0.

If you encounter an X display problem during installation, such as:

It appears that an X server is running. Please exit X before installation. If you're sure that X is not running, but are getting this error, please delete any X lock files in /tmp.

then stop the display manager first, for example with sudo /etc/init.d/lightdm stop.

Then retry the installation; a log like the following indicates success:

c> Download and install cuDNN:

https://developer.nvidia.com/rdp/cudnn-download

When installing libcudnn, install the runtime package before installing the dev package.

d> After CUDA and cuDNN are installed successfully, torch can use CUDA: torch.cuda.is_available() returns True.
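A quick way to double-check the whole stack from the remote interpreter is a minimal sketch like this:

import torch

print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True if the CUDA driver and toolkit are usable
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU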

6. Now that the PyTorch + CUDA environment is set up, you can run a simple MNIST example. First prepare the code, torch_mnist.py:

# This file will train on the MNIST dataset using PyTorch
from __future__ import print_function
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import pdb

# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                    help='input batch size for training (default: 64)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                    help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
                    help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
                    help='learning rate (default: 0.01)')
parser.add_argument('--momentum', type=float, default=0.5, metavar='M',
                    help='SGD momentum (default: 0.5)')
parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
                    help='random seed (default: 1)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                    help='how many batches to wait before logging training status')
args = parser.parse_args()

use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device('cuda' if use_cuda else 'cpu')

kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=args.test_batch_size, shuffle=True, **kwargs)


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(320, 50)
        self.fc2 = nn.Linear(50, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
        x = x.view(-1, 320)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)


model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)


def train(epoch):
    # pdb.set_trace()
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))


def test():
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, size_average=False).item()  # sum up batch loss
            pred = output.max(1, keepdim=True)[1]  # index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


for epoch in range(1, args.epochs + 1):
    train(epoch)
    test()

Then set the torch_mnist.py runtime environment variable DISPLAY=localhost:11.0 and click the Run button to see the result:

That completes the whole setup.

