Processes, threads, and multi-process instances of Python

What is a process and what is a thread?

A process and a thread have a containment relationship: a process contains threads.

A process is the smallest unit of system resource allocation, and a thread is the smallest unit of system task execution.

For example, when you open Word, the Word program is a process, and within it the spell checker, word count, font changing, and so on are threads. When the Word process starts, the system allocates resources (CPU, memory, and so on) to it; when a thread needs resources to run, it takes them from the Word process's resource pool.
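To make the distinction concrete, here is a minimal sketch (an illustrative addition using only the standard library, not part of the original example) that runs two named tasks as threads inside a single Python process; both threads report the same process id but different thread names:

import os
import threading

def worker(task_name):
    # every thread runs inside the same process, so os.getpid() is identical for all of them
    print('Task %s in process %s, thread %s' %
          (task_name, os.getpid(), threading.current_thread().name))

if __name__ == '__main__':
    threads = [threading.Thread(target=worker, args=(name,))
               for name in ('spell check', 'word count')]
    for t in threads:
        t.start()
    for t in threads:
        t.join()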

To work with multiple processes in Python, we can use the multiprocessing package.

The multiprocessing module provides a Process class to represent a process object. The following example demonstrates starting a child process and waiting for it to end:

from multiprocessing import Process
import os

# code to be executed by the child process
def run_proc(name):
    print('Run child process %s (%s)...' % (name, os.getpid()))

if __name__ == '__main__':
    print('Parent process %s.' % os.getpid())
    p = Process(target=run_proc, args=('test',))
    print('Child process will start.')
    p.start()
    p.join()
    print('Child process end.')

Execution Result:

Parent process 928.
Child process will start.
Run child process test (929)...
Child process end.

To create a child process, you only need to pass in the function to be executed and its arguments, create a Process instance, and start it with the start() method; this makes creating a process simpler than fork(). The join() method waits for the child process to finish before continuing, and is typically used for synchronization between processes.
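For comparison, the lower-level fork() approach mentioned above looks roughly like this (a sketch only; os.fork() exists on Unix/Linux but not on Windows):

import os

print('Process (%s) start...' % os.getpid())
# fork() returns twice: 0 in the child process, and the child's pid in the parent
pid = os.fork()
if pid == 0:
    print('I am the child process (%s); my parent is %s.' % (os.getpid(), os.getppid()))
else:
    print('I (%s) just created a child process (%s).' % (os.getpid(), pid))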

If you want to start a large number of child processes, you can create the child processes in batches using the process pool:

from multiprocessing import Pool
import os, time, random

def long_time_task(name):
    print('Run task %s (%s)...' % (name, os.getpid()))
    start = time.time()
    time.sleep(random.random() * 3)
    end = time.time()
    print('Task %s runs %0.2f seconds.' % (name, (end - start)))

if __name__ == '__main__':
    print('Parent process %s.' % os.getpid())
    p = Pool(4)
    for i in range(5):
        p.apply_async(long_time_task, args=(i,))
    print('Waiting for all subprocesses done...')
    p.close()
    p.join()
    print('All subprocesses done.')

The results of the implementation are as follows:

Parent process 669.
Waiting for all subprocesses done...
Run task 0 (671)...
Run task 1 (672)...
Run task 2 (673)...
Run task 3 (674)...
Task 2 runs 0.14 seconds.
Run task 4 (673)...
Task 1 runs 0.27 seconds.
Task 3 runs 0.86 seconds.
Task 0 runs 1.41 seconds.
Task 4 runs 1.91 seconds.
All subprocesses done.

Code interpretation: Calling join() on the Pool object waits for all child processes to finish. You must call close() before calling join(), and after calling close() you can no longer add new processes.

Note that in the output, tasks 0, 1, 2, and 3 are executed immediately, while task 4 waits until one of the earlier tasks has finished. This is because the default size of the Pool on my computer is 4, so at most 4 processes run at the same time. This is a deliberate design restriction of Pool, not a limitation of the operating system. If you change it to p = Pool(5), you can run 5 processes at the same time.
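As a related sketch (an addition, not from the original example), you can size the pool explicitly, for instance with os.cpu_count(), and also collect return values from apply_async():

from multiprocessing import Pool
import os

def square(x):
    return x * x

if __name__ == '__main__':
    # size the pool explicitly; os.cpu_count() is one common choice
    with Pool(processes=os.cpu_count()) as pool:
        results = [pool.apply_async(square, args=(i,)) for i in range(5)]
        # get() blocks until each task's result is ready
        print([r.get() for r in results])  # [0, 1, 4, 9, 16]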

Child process

In many cases, the child process is not our own program but an external program. After creating such a child process, we also need to control its input and output.

The subprocess module allows us to start a subprocess very conveniently and then control its input and output.

The following example shows how to run the command nslookup www.python.org from Python code; the effect is the same as running it directly on the command line:

import subprocess

print('$ nslookup www.python.org')
r = subprocess.call(['nslookup', 'www.python.org'])
print('Exit code:', r)

Operation Result:

$ nslookup www.python.org
Server:        192.168.19.4
Address:       192.168.19.4#53

Non-authoritative answer:
www.python.org    canonical name = python.map.fastly.net.
Name:    python.map.fastly.net
Address: 199.27.79.223

Exit code: 0
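As a side note (an addition, not part of the original text), on Python 3.7+ the same one-shot call is often written with subprocess.run(), which can also capture the output directly:

import subprocess

# subprocess.run() is the higher-level interface; capture_output requires Python 3.7+
r = subprocess.run(['nslookup', 'www.python.org'],
                   capture_output=True, text=True)
print(r.stdout)
print('Exit code:', r.returncode)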

If the child process also needs input, it can be given input via the communicate() method:

import subprocess

print('$ nslookup')
p = subprocess.Popen(['nslookup'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
output, err = p.communicate(b'set q=mx\npython.org\nexit\n')
print(output.decode('utf-8'))
print('Exit code:', p.returncode)

The above code is equivalent to running the command nslookup on the command line and then manually entering:

set q=mx
python.org
exit

The results of the operation are as follows:

$ nslookup
Server:        192.168.19.4
Address:       192.168.19.4#53

Non-authoritative answer:
python.org    mail exchanger = 50 mail.python.org.

Authoritative answers can be found from:
mail.python.org    internet address = 82.94.164.166
mail.python.org    has AAAA address 2001:888:2000:d::a6

Exit code: 0

Inter-process communication

Processes certainly need to communicate with each other, and the operating system provides many mechanisms for inter-process communication. Python's multiprocessing module wraps the underlying mechanisms and provides Queue, Pipes, and other ways to exchange data.

Taking Queue as an example, we create two child processes in the parent process: one writes data to the Queue, and the other reads data from the Queue:

from multiprocessing import Process, Queue
import os, time, random

# code executed by the process that writes data:
def write(q):
    print('Process to write: %s' % os.getpid())
    for value in ['A', 'B', 'C']:
        print('Put %s to queue...' % value)
        q.put(value)
        time.sleep(random.random())

# code executed by the process that reads data:
def read(q):
    print('Process to read: %s' % os.getpid())
    while True:
        value = q.get(True)
        print('Get %s from queue.' % value)

if __name__ == '__main__':
    # the parent process creates a Queue and passes it to each child process:
    q = Queue()
    pw = Process(target=write, args=(q,))
    pr = Process(target=read, args=(q,))
    # start child process pw, writing:
    pw.start()
    # start child process pr, reading:
    pr.start()
    # wait for pw to finish:
    pw.join()
    # pr runs an infinite loop, so we cannot wait for it to end; terminate it forcibly:
    pr.terminate()

The results of the operation are as follows:

Process to write: 50563
Process to read: 50564
Put A to queue...
Get A from queue.
Put B to queue...
Get B from queue.
Put C to queue...
Get C from queue.
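Pipe, which the module also provides, works in a similar spirit. The following is a minimal sketch (an illustrative addition, not from the original text) of exchanging one message between parent and child over a Pipe:

from multiprocessing import Process, Pipe

def child(conn):
    # receive a message from the parent, send a reply, and close this end
    msg = conn.recv()
    conn.send('child got: %s' % msg)
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send('hello')
    print(parent_conn.recv())  # prints: child got: hello
    p.join()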

Under Unix/Linux, the multiprocessing module encapsulates the fork() system call, so we don't need to worry about the details of fork(). Since Windows has no fork call, multiprocessing has to "emulate" the effect of fork: all Python objects of the parent process that are handed to the child process must be serialized with pickle and then transferred. So if a multiprocessing call fails on Windows, first consider whether pickle has failed.
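To see the pickle requirement in isolation (an illustrative sketch, not from the original text): a module-level function can be pickled and therefore handed to a child process, while a lambda cannot, which is a typical cause of multiprocessing failures under Windows:

import pickle

def top_level(x):
    # module-level functions are pickled by reference, so this works
    return x + 1

if __name__ == '__main__':
    pickle.dumps(top_level)              # fine
    try:
        pickle.dumps(lambda x: x + 1)    # lambdas cannot be pickled
    except Exception as e:
        print('pickle failed:', e)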
