Python multi-process

Source: Internet
Author: User
Tags: nslookup

```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from multiprocessing import Process
import os

# Code executed by the child process
def run_proc(name):
    print('Run child process %s (%s)...' % (name, os.getpid()))

if __name__ == '__main__':
    print('Parent process %s.' % os.getpid())
    p = Process(target=run_proc, args=('test',))
    print('Child process will start.')
    p.start()
    p.join()
    print('Child process end.')
```

# The output is as follows:

```
Parent process 4152.
Child process will start.
Run child process test (10456)...
Child process end.
```

# When creating a child process, you only need to pass in the function to execute and its arguments: create a Process instance and start it with the start() method. This makes creating a process simpler than fork().

# The join() method waits for the child process to end before continuing, and is typically used for inter-process synchronization.
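As a small illustration of what join() buys you (a minimal sketch; the worker function and its 0.2-second sleep are arbitrary choices for the example), the parent can observe the child's state before and after waiting:

```python
from multiprocessing import Process
import time

def worker():
    time.sleep(0.2)  # simulate some work in the child

if __name__ == '__main__':
    p = Process(target=worker)
    p.start()
    print(p.is_alive())  # True: the child is still running
    p.join()             # block until the child finishes
    print(p.is_alive())  # False: the child has exited
    print(p.exitcode)    # 0: the child exited cleanly
```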

```python
from multiprocessing import Pool
import os, time, random

def long_time_task(name):
    print('Run task %s (%s)...' % (name, os.getpid()))
    start = time.time()
    time.sleep(random.random() * 3)
    end = time.time()
    print('Task %s runs %0.2f seconds.' % (name, end - start))

if __name__ == '__main__':
    print('Parent process %s.' % os.getpid())
    p = Pool(4)
    for i in range(5):
        p.apply_async(long_time_task, args=(i,))
    print('Waiting for all subprocesses done...')
    p.close()
    p.join()
    print('All subprocesses done.')
```

# The output is as follows:

```
Parent process 9508.
Waiting for all subprocesses done...
Run task 0 (5084)...
Run task 1 (8540)...
Run task 2 (6500)...
Run task 3 (4756)...
Task 1 runs 0.78 seconds.
Run task 4 (8540)...
Task 4 runs 0.93 seconds.
Task 0 runs 1.92 seconds.
Task 3 runs 2.49 seconds.
Task 2 runs 2.75 seconds.
All subprocesses done.
```

# Calling join() on a Pool object waits for all child processes to complete. close() must be called before join(), and no new process can be added after close() has been called.

# Note the output: tasks 0, 1, 2, and 3 execute immediately, while task 4 waits for a previous task to complete before it runs. This is because the default Pool size on this machine is 4, so at most 4 processes execute at the same time. It is a deliberate design of Pool, not a limitation of the operating system. If you change it to p = Pool(5), 5 processes can run at once.

# Since the default Pool size is the number of CPU cores, if you happen to have an 8-core CPU you will have to submit at least 9 child processes to see the waiting effect above.
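As a quick check of that default, os.cpu_count() reports the core count that Pool uses when no size is given (a minimal sketch; the printed core count will vary by machine, and the square function is an arbitrary example task):

```python
import os
from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    print('CPU cores:', os.cpu_count())  # the default Pool size when none is given
    with Pool(4) as p:                   # explicit size of 4, as in the example above
        print(p.map(square, range(5)))   # [0, 1, 4, 9, 16]
```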

# Subprocess

# Very often, a child process is not our own program but an external process. After creating the child process, we also need to control its input and output.

# The subprocess module lets us start a subprocess very conveniently and then control its input and output.

# The following demonstrates how to run the command nslookup www.python.org from Python code; this has the same effect as running the command directly:

```python
import subprocess

print('$ nslookup www.python.org')
r = subprocess.call(['nslookup', 'www.python.org'])
print('Exit code:', r)
```

# The output is as follows:

```
$ nslookup www.python.org
Server:  dc.gticom.cn
Address:  172.18.5.251

Non-authoritative answer:
Name:    python.map.fastly.net
Addresses:  2a04:4e42:11::223
          151.101.72.223
Aliases:  www.python.org

Exit code: 0
```

# If the subprocess also needs input, you can feed it through the communicate() method:

```python
import subprocess

print('$ nslookup')
p = subprocess.Popen(['nslookup'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate(b'set q=mx\npython.org\nexit\n')
print(output.decode('utf-8'))
print('Exit code:', p.returncode)
```

# The code above is equivalent to running nslookup on the command line and then typing in manually:

```
set q=mx
python.org
exit
```

# (The code above did not produce output on my machine; don't worry about it.)
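On Python 3.5+, the Popen/communicate pattern can often be replaced by subprocess.run, which feeds stdin and captures stdout in one call. The following sketch uses the Python interpreter itself as the child process rather than nslookup, so it runs anywhere; the echoed string is an arbitrary example:

```python
import subprocess, sys

# Run a child Python process that reads one line from stdin and echoes it in uppercase.
r = subprocess.run([sys.executable, '-c', 'print(input().upper())'],
                   input=b'python.org\n',
                   stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(r.stdout.decode('utf-8').strip())  # PYTHON.ORG
print('Exit code:', r.returncode)        # Exit code: 0
```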

# Inter-process communication

# Python's multiprocessing module wraps the underlying mechanisms and provides Queue, Pipe, and many other ways to exchange data.

# Taking Queue as an example, create two child processes in the parent process: one writes data into the queue, and the other reads data from the queue.

# A queue is the most common form of exchanging data between threads; the queue module provides queue operations.

```python
from multiprocessing import Process, Queue
import os, time, random

# Code executed by the writer process
def write(q):
    print('Process to write: %s' % os.getpid())
    for value in ['A', 'B', 'C']:
        print('Put %s to queue...' % value)
        q.put(value)  # put() inserts an item at the tail of the queue
        time.sleep(random.random())  # suspend for a random time in [0, 1) seconds

# Code executed by the reader process
def read(q):
    print('Process to read: %s' % os.getpid())
    while True:
        value = q.get(True)  # get() removes an item from the head of the queue and returns it
        print('Get %s from queue.' % value)

if __name__ == '__main__':
    # The parent process creates the Queue and passes it to each child process:
    q = Queue()
    pw = Process(target=write, args=(q,))
    pr = Process(target=read, args=(q,))
    # Start the writer process pw:
    pw.start()
    # Start the reader process pr:
    pr.start()
    # Wait for pw to end; join() is typically used for inter-process synchronization:
    pw.join()
    # The pr process is an infinite loop and can never end on its own, so terminate it forcibly:
    pr.terminate()
```

# Process is the class used to create a process: Process(group[, target[, name[, args[, kwargs]]]]). target is the object to call (typically a function name), args is the tuple of positional arguments for the callable (typically the function's parameters), kwargs is the keyword-argument dictionary for the callable, name is an alias, and group is not actually used.

# The output is as follows:

```
Process to write: 7952
Put A to queue...
Process to read: 13656
Get A from queue.
Put B to queue...
Get B from queue.
Put C to queue...
Get C from queue.
```

# Under Unix/Linux, the multiprocessing module encapsulates the fork() call, so we don't need to care about the details of fork(). Because Windows has no fork call, multiprocessing must "emulate" the effect of fork: all Python objects of the parent process are serialized with pickle and passed to the child process. So if multiprocessing fails on Windows, first consider whether pickling failed.

# Under Unix/Linux, you can use the fork() call to implement multiple processes.

# To implement cross-platform multi-processing, use the multiprocessing module.

# Inter-process communication is implemented with Queue, Pipe, and so on.
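The summary mentions Pipe alongside Queue but shows no example, so here is a minimal sketch (the message strings are arbitrary): Pipe() returns two connected endpoints, and each process keeps one end.

```python
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()       # receive a message from the parent
    conn.send(msg.upper())  # send a reply back through the same end
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()  # two connected endpoints
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send('ping')
    print(parent_conn.recv())  # PING
    p.join()
```

Unlike a Queue, a Pipe connects exactly two endpoints, so it suits one-to-one communication.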
