Python process pool: a simple example


Process pool: when using Python for system administration, especially when operating on many file directories at once or remotely controlling many hosts, running the work in parallel saves a lot of time. When the number of targets is small, you can use multiprocessing.Process to spawn the processes yourself; a dozen or so is manageable, but with hundreds or thousands of targets, limiting the number of processes by hand becomes far too cumbersome. That is where the process pool comes in. A Pool provides a fixed number of processes for the caller: when a new request is submitted and the pool is not full, a new process is created to execute it; if the pool has already reached its maximum size, the request waits until a process in the pool finishes. How do you use the process pool? (For comparison with the manual approach, a small multiprocessing.Process sketch follows.)
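For a handful of targets, creating Process objects directly, as the paragraph above suggests, looks roughly like this (a minimal sketch, not part of the original article):

# -*- coding: UTF-8 -*-
from multiprocessing import Process

def sayHi(num):
    print "def print result:", num

# Dynamically create one Process per task: fine for a dozen targets,
# but unwieldy for hundreds or thousands.
processes = []
for i in range(4):
    p = Process(target=sayHi, args=(i,))
    p.start()
    processes.append(p)

# Wait for every child process to finish
for p in processes:
    p.join()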

1. Executing a function with the process pool (no return value)

# -*- coding: UTF-8 -*-
from multiprocessing import Process, Manager, Lock, Pool

# Function to be executed by the processes in the pool
def sayHi(num):
    print "def print result:", num

# processes sets the maximum number of processes in the pool
p = Pool(processes=4)

# Simulate concurrent calls to the process pool
for i in range(10):
    p.apply_async(sayHi, [i])

 

Execution result:

# python demo.py
def print result: 0
def print result: 1
def print result: 2
def print result: 3
def print result: 4
def print result: 5

apply_async(func[, args[, kwds[, callback]]]) is non-blocking, while apply(func[, args[, kwds]]) is blocking (to understand the difference, compare Example 1 with the results in Section 2; a small timing sketch is also shown below).
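To make the blocking versus non-blocking difference concrete, here is a minimal timing sketch (not part of the original article; the one-second sleep and the four-task workload are illustrative assumptions):

# -*- coding: UTF-8 -*-
import time
from multiprocessing import Pool

def work(num):
    time.sleep(1)            # pretend each task takes one second
    return num * num

p = Pool(processes=4)

# apply() is blocking: the main process waits for each task to finish
# before submitting the next, so 4 tasks take roughly 4 seconds.
start = time.time()
for i in range(4):
    print "apply result:", p.apply(work, [i])
print "apply took about %.1f s" % (time.time() - start)

# apply_async() is non-blocking: all 4 tasks are submitted immediately
# and run in parallel, so collecting the results takes roughly 1 second.
start = time.time()
results = [p.apply_async(work, [i]) for i in range(4)]
for res in results:
    print "apply_async result:", res.get()   # get() blocks until the result is ready
print "apply_async took about %.1f s" % (time.time() - start)

p.close()
p.join()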
2. Process pool usage

# -*- coding: UTF-8 -*-
from multiprocessing import Process, Manager, Lock, Pool

# Function to be executed by the processes in the pool
def sayHi(num):
    print "def print result:", num

# processes sets the maximum number of processes in the pool
p = Pool(processes=4)

# Simulate concurrent calls to the process pool
for i in range(10):
    p.apply_async(sayHi, [i])

 

Execution result:
[root@python thread]# python pool.py
def print result: 0
def print result: 1
def print result: 2
def print result: 3
def print result: 4
def print result: 5
[root@python thread]# python pool.py
def print result: 0
def print result: 1
def print result: 2
def print result: 3
def print result: 4
def print result: 5
def print result: 6
[root@python thread]# python pool.py
[root@python thread]# python pool.py
[root@python thread]# python pool.py

 

From the output above you can see that pool.py was run several times in a row, but the later runs print nothing at all; the script does not produce the expected results. Why? First, make a small adjustment to the program:
# -*- coding: UTF-8 -*-
from multiprocessing import Process, Manager, Lock, Pool

def sayHi(num):
    print "def print result:", num

p = Pool(processes=4)
for i in range(10):
    p.apply_async(sayHi, [i])

p.close()
p.join()   # close() must be called before join(), otherwise an error occurs;
           # after close() no new task can be submitted to the pool, and
           # join() waits for all sub-processes to finish.

Execution result:
[root@python thread]# python pool.py
def print result: 0
def print result: 1
def print result: 2
def print result: 3
def print result: 4
def print result: 5
def print result: 6
def print result: 9
def print result: 8
def print result: 7
[root@python thread]# python pool.py
def print result: 0
def print result: 1
def print result: 2
def print result: 4
def print result: 3
def print result: 5
def print result: 6
def print result: 7
def print result: 8
def print result: 9
[root@python thread]# python pool.py
def print result: 0
def print result: 1
def print result: 2
def print result: 3
def print result: 4
def print result: 5
def print result: 7
def print result: 8
def print result: 9

 

This time there is no problem with the execution. Why does adding close() and join() make it work? close() closes the pool so that it accepts no new tasks. terminate() ends the worker processes immediately and does not process unfinished tasks. join() blocks the main process and waits for the sub-processes to exit; join() must be called after close() or terminate(). The key point is join(): if the main process is not blocked, it simply runs on to the end and exits while the sub-processes have not yet returned their results. (A small sketch contrasting terminate() with close()/join() is shown below.)
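As a quick illustration of the difference between the two shutdown styles, here is a minimal sketch (not part of the original article; the sleep durations are illustrative assumptions):

# -*- coding: UTF-8 -*-
import time
from multiprocessing import Pool

def slowHi(num):
    time.sleep(1)
    print "def print result:", num

# close() + join(): every submitted task is allowed to finish.
p = Pool(processes=4)
for i in range(10):
    p.apply_async(slowHi, [i])
p.close()
p.join()            # blocks until all 10 tasks have run

# terminate() + join(): the workers are stopped immediately, so tasks
# still running or still waiting in the queue never print anything.
p = Pool(processes=4)
for i in range(10):
    p.apply_async(slowHi, [i])
time.sleep(1.5)     # let roughly the first batch of 4 tasks complete
p.terminate()       # kill the workers and drop unfinished tasks
p.join()            # join() comes after terminate(), just as it does after close()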
3. Getting return values back from the process pool

# -*- coding: UTF-8 -*-
from multiprocessing import Process, Manager, Lock, Pool

def sayHi(num):
    return num * num

p = Pool(processes=4)

# Declare a list to hold the result object returned by each call
result_list = []
for i in range(10):
    result_list.append(p.apply_async(sayHi, [i]))   # append the returned result object to the list

# Read the returned results
for res in result_list:
    print "the result:", res.get()

Note: get() returns the value of each result.

 

Execution result:
[root@python thread]# python pool.py
the result: 0
the result: 1
the result: 4
the result: 9
the result: 16
the result: 25
the result: 36
the result: 49
the result: 64
the result: 81
[root@python thread]# python pool.py
the result: 0
the result: 1
the result: 4
the result: 9
the result: 16
the result: 25
the result: 36
the result: 49
the result: 64
the result: 81
[root@python thread]# python pool.py
the result: 0
the result: 1
the result: 4
the result: 9
the result: 16
the result: 25
the result: 36
the result: 49
the result: 64

 

When the results are returned with return, appended to a list, and then read back, you will find that join() is no longer strictly required: the script still prints the results normally, because res.get() itself blocks until each result is ready. For more robust code, it is still recommended to block the main process with close() and join(), since the main process needs to wait for the sub-processes to return their results anyway:
# -*- coding: UTF-8 -*-
from multiprocessing import Process, Manager, Lock, Pool

def sayHi(num):
    return num * num

p = Pool(processes=4)

# Declare a list to hold the result object returned by each call
result_list = []
for i in range(10):
    result_list.append(p.apply_async(sayHi, [i]))   # append the returned result object to the list

p.close()
p.join()   # close() must be called before join(), otherwise an error occurs;
           # after close() no new task can be submitted, and join() waits
           # for all sub-processes to finish.

# Read the returned results from the list
for res in result_list:
    print "the result:", res.get()
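The reason the script also works without join() is that the result object's get() method blocks by itself. A minimal sketch (not part of the original article; the sleep duration and timeout value are illustrative assumptions) makes this visible by using get()'s optional timeout argument:

# -*- coding: UTF-8 -*-
import time
import multiprocessing
from multiprocessing import Pool

def slowSquare(num):
    time.sleep(2)              # make the task noticeably slow
    return num * num

p = Pool(processes=4)
res = p.apply_async(slowSquare, [3])

try:
    # With a 0.5 second timeout, get() gives up before the task is done
    print "early result:", res.get(timeout=0.5)
except multiprocessing.TimeoutError:
    print "result not ready yet; a plain get() would simply keep blocking"

# Without a timeout, get() blocks until the worker returns the value
print "final result:", res.get()

p.close()
p.join()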

 

