Implementing a simple HTTP server in Python


The requirement: the front end needs to call a server-side program that gathers GPU information from the server and returns it (in JSON format) for display. So we need to write a server-side program that collects the data and sends it back.

The desired effect (screenshot omitted in this copy): the page shows no content yet, because the server program has not been started. The completed server program is below:

#!/usr/bin/python
# Simple Bottle server: run deviceQuery and nvidia-smi, parse the output,
# and return the key GPU information as a JSON string.
import os
from bottle import route, run

gpu_info_dict = {}

@route('/')
@route('/xiandao')
def func():
    # Dump the GPU information into the file gpu_info
    os.system("./devicequery > gpu_info")
    os.system("nvidia-smi >> gpu_info")

    fs = open('gpu_info', 'r')
    i = 1
    for line in fs.readlines():
        a = line.strip().split(":")
        if i == 7:
            # The deviceQuery line already quotes the name: Device 0: "Tesla K40m"
            gpu_info_dict['device'] = a[-1].strip()
        elif i == 9:
            gpu_info_dict['cuda_version_number'] = '"' + a[-1].strip() + '"'
        elif i == 10:
            gpu_info_dict['global_memory'] = '"' + a[-1].strip() + '"'
        elif i == 11:
            gpu_info_dict['total_cores'] = '"' + a[-1].strip() + '"'
        elif i == 12:
            gpu_info_dict['gpu_clock_rate'] = '"' + a[-1].strip() + '"'
        elif i == 13:
            gpu_info_dict['mem_clock_rate'] = '"' + a[-1].strip() + '"'
        elif i == 14:
            gpu_info_dict['mem_bus_width'] = '"' + a[-1].strip() + '"'
        elif i == 19:
            gpu_info_dict['constant_mem'] = '"' + a[-1].strip() + '"'
        elif i == 20:
            gpu_info_dict['shared_mem'] = '"' + a[-1].strip() + '"'
        elif i == 21:
            gpu_info_dict['registers_available'] = '"' + a[-1].strip() + '"'
        elif i == 50:
            # nvidia-smi status line: fixed columns hold power and memory usage
            l1 = line[19:32].strip().split("/")
            gpu_info_dict['power_used'] = '"' + l1[0].strip() + '"'
            gpu_info_dict['power_capacity'] = '"' + l1[1].strip() + '"'
            l2 = line[34:55].strip().split("/")
            gpu_info_dict['mem_used'] = '"' + l2[0].strip() + '"'
            gpu_info_dict['mem_capacity'] = '"' + l2[1].strip() + '"'
        i += 1
    fs.close()

    # Generate a JSON-formatted string and return it
    json_str = "{"
    for key in gpu_info_dict:
        json_str += '"' + key + '"' + ":" + gpu_info_dict[key] + ","
    json_str = json_str.rstrip(",") + "}"   # drop the trailing comma
    return json_str

run(host='172.16.1.20', port=8088, debug=True)

1) Bottle is a fast and simple framework for small web applications (download: http://yunpan.cn/cytIgzQXPjeaS, extraction code: 8e71).
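As a quick illustration of the Bottle API used above, here is a minimal routing sketch (assuming Bottle is installed via pip install bottle; the /hello path is made up for this example):

from bottle import route, run

@route('/hello')
def hello():
    # A route handler simply returns the response body as a string
    return "Hello, world"

# Uncomment to serve the route on http://localhost:8080/hello
# run(host='localhost', port=8080)

The @route decorator maps a URL path to a handler function, and run() starts the built-in development server, exactly as in the GPU server above.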

2) The two os.system() calls run the deviceQuery and nvidia-smi commands and redirect their output into the file gpu_info, producing content like the following:

./devicequery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "Tesla K40m"
  CUDA Driver Version / Runtime Version          5.5 / 5.5
  CUDA Capability Major/Minor version number:    3.5
  Total amount of global memory:                 11520 MBytes (12079136768 bytes)
  Multiprocessors, (192) CUDA Cores/MP:          2880 CUDA Cores
  GPU Clock rate:                                876 MHz (0.88 GHz)
  Memory Clock rate:                             3004 Mhz
  Memory Bus Width:                              384-bit
  L2 Cache Size:                                 1572864 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Enabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           3 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 5.5, CUDA Runtime Version = 5.5, NumDevs = 1, Device0 = Tesla K40m
Result = PASS

Tue Jan 21:18:02
+------------------------------------------------------+
| NVIDIA-SMI 5.319.37   Driver Version: 319.37         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K40m          Off  | 0000:03:00.0     Off |                    0 |
| N/A   27C    P0    61W / 235W |     69MB / 11519MB   |      99%     Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Compute processes:                                               GPU Memory |
|  GPU       PID  Process name                                     Usage      |
|=============================================================================|
|  No running compute processes found                                         |
+-----------------------------------------------------------------------------+

3) Now we extract the key information from this file and assemble it into a JSON-formatted string to return.
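The extraction of power and memory usage relies on fixed column positions in the nvidia-smi status line. The sketch below replays those slices (offsets 19:32 and 34:55, mirroring the server code; they assume the 319.37 output layout shown above) on a sample line, and uses the json module to serialize the result instead of concatenating the string by hand:

import json

# Sample nvidia-smi status line in the 319.37 layout
smi_line = "| N/A   27C    P0    61W / 235W |     69MB / 11519MB   |     99%      Default |"

# Fixed-column slices: "usage / capacity" pairs for power and memory
power = [p.strip() for p in smi_line[19:32].strip().split("/")]
mem = [m.strip() for m in smi_line[34:55].strip().split("/")]

gpu_info = {
    "power_used": power[0],       # "61W"
    "power_capacity": power[1],   # "235W"
    "mem_used": mem[0],           # "69MB"
    "mem_capacity": mem[1],       # "11519MB"
}
print(json.dumps(gpu_info))

Letting json.dumps build the string also avoids the hand-written version's quoting and trailing-comma pitfalls; the tradeoff is that the column offsets break if a different nvidia-smi version changes the table layout.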


To run the program:



Learn about the nohup command:

nohup lets a submitted command ignore the hangup (SIGHUP) signal, so the program keeps running in the background after you log out. It is very convenient to use: just prefix the command with nohup, and by default standard output and standard error are redirected to the file nohup.out. In general we also append "&" to run the command in the background, and we can add "> filename 2>&1" to redirect the output to a file of our choice.

[root@localhost ~]# nohup ping www.ibm.com &
[1] 3059
nohup: appending output to 'nohup.out'

[root@localhost ~]# ps -ef | grep 3059
root      3059   984  0 21:06 pts/3    00:00:00 ping www.ibm.com
root      3067   984  0 21:06 pts/3    00:00:00 grep 3059
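The "> filename 2>&1" pattern can be demonstrated self-contained (a harmless echo stands in for starting run.py; in real use you would substitute the server command):

# Both stdout and stderr go to run.log instead of the default nohup.out
nohup sh -c 'echo gpu server started' > run.log 2>&1 &
wait                 # wait for the background job to finish
cat run.log          # the log now contains: gpu server started

Because the output file is given explicitly, nohup no longer creates nohup.out, and the server's log ends up where you can find it.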


Port mappings:

In run.py you can see ip=172.16.1.20 (ServerA) and port=8088. This IP is not directly reachable from outside; it must be accessed through a springboard (jump) machine (ip: 172.21.7.224). So you need to establish a port mapping from the springboard to ServerA: accessing a given port on the springboard is then equivalent to accessing the corresponding application port on ServerA.
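One common way to create such a mapping is an SSH port forward run on the springboard machine. This is only a sketch: the article does not say which mechanism was actually used, and the "user" account name is a placeholder.

# Forward springboard port 8088 to ServerA (172.16.1.20) port 8088.
# -g allows external hosts to connect to the forwarded port;
# -N opens no remote shell, only the tunnel.
ssh -g -N -L 8088:172.16.1.20:8088 user@172.16.1.20

With a mapping like this in place, requests to port 8088 on the springboard (172.21.7.224) reach the Bottle server on ServerA.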


Then, with the server program running, you can access it via that IP in a browser:

The data is retrieved successfully :)


Yi Solo Show

Email: [Email protected]

Original source: http://blog.csdn.net/lavorange/article/details/42684851


