Python3 learning notes: a distributed process example

I have recently been learning Python from Liao Xuefeng's tutorial. The small example of distributed processes is interesting, so here is a record of it.

Distributed Process

Python's multiprocessing module supports not only multiple processes on one machine but also distributing processes across multiple machines. A service process acts as a scheduler, distributing tasks to other processes and relying on network communication to reach them. Because the managers sub-module is well encapsulated, you can write a distributed multi-process program easily, without having to know the details of the network communication.
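Before the two full scripts below, a condensed single-file sketch may help show the moving parts: the master registers a Queue under a name with a BaseManager subclass and starts serving it, and the worker registers the same name (without a callable), connects with the same address and authkey, and receives a proxy to that very Queue. The names MasterManager, WorkerManager and get_task_queue here are just illustrative choices, not fixed API:

# distributed_queue_sketch.py (illustrative only; the article's two-script version follows)
import queue
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()

def return_task_queue():              # a module-level function, so it can be pickled on Windows
    return task_queue

class MasterManager(BaseManager):     # serves the queue
    pass

class WorkerManager(BaseManager):     # only consumes it
    pass

if __name__ == '__main__':
    # Master side: expose the local Queue over the network.
    MasterManager.register('get_task_queue', callable=return_task_queue)
    master = MasterManager(address=('127.0.0.1', 5000), authkey=b'abc')
    master.start()                            # server runs in a child process
    master.get_task_queue().put(42)           # put a task through the proxy

    # Worker side (normally a separate script, possibly on another machine).
    WorkerManager.register('get_task_queue')  # name only, no callable
    worker = WorkerManager(address=('127.0.0.1', 5000), authkey=b'abc')
    worker.connect()
    print(worker.get_task_queue().get())      # fetches 42 via the network proxy

    master.shutdown()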

Principle of the master (server) side: the managers module exposes a Queue over the network so that processes on other machines can access it.
The service process creates the Queue, registers it on the network, and then writes tasks into it. The code is as follows:

# task_master.py
# coding=utf-8
# Multi-process distributed example: server side
from multiprocessing.managers import BaseManager
from multiprocessing import freeze_support  # without this the server errored on startup
import random, time, queue

# Queue for sending tasks
task_queue = queue.Queue()
# Queue for receiving results
result_queue = queue.Queue()

# QueueManager inherits from BaseManager
class QueueManager(BaseManager):
    pass

# 64-bit Win7 does not seem to support passing an anonymous lambda as callable,
# so wrap the queues in named functions instead
def return_task_queue():
    global task_queue
    return task_queue

def return_result_queue():
    global result_queue
    return result_queue

def test():
    # Register the two Queues on the network; the callable parameter associates the Queue object
    # QueueManager.register('get_task_queue', callable=lambda: task_queue)
    # QueueManager.register('get_result_queue', callable=lambda: result_queue)
    QueueManager.register('get_task_queue', callable=return_task_queue)
    QueueManager.register('get_result_queue', callable=return_result_queue)
    # Bind port 5000, set the authkey 'abc', and give the local default IP 127.0.0.1 explicitly
    manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    # Start the Queue
    manager.start()
    # server = manager.get_server()
    # server.serve_forever()
    print('start server master')
    # Obtain the Queue objects that are exposed over the network
    task = manager.get_task_queue()
    result = manager.get_result_queue()
    # Put a few tasks in
    for i in range(10):
        n = random.randint(0, 10000)
        print('put task %d...' % n)
        task.put(n)
    # Read the results
    print('try get results...')
    for i in range(10):
        r = result.get(timeout=10)
        print('result: %s' % r)
    # Shut down
    manager.shutdown()
    print('master exit')

if __name__ == '__main__':
    freeze_support()
    test()
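The two commented-out lines (server = manager.get_server() and server.serve_forever()) hint at an alternative way of running the master: instead of manager.start(), which serves the queues from a background child process and lets test() continue, you can serve them from the current process. A minimal sketch of that variant, under the assumption that the tasks are then fed by a separate client, since serve_forever() never returns:

# task_master_blocking.py (hypothetical variant of task_master.py)
import queue
from multiprocessing.managers import BaseManager

task_queue = queue.Queue()
result_queue = queue.Queue()

class QueueManager(BaseManager):
    pass

# No child process is spawned in this variant, so the registry is never pickled
# and a lambda callable is fine even on Windows.
QueueManager.register('get_task_queue', callable=lambda: task_queue)
QueueManager.register('get_result_queue', callable=lambda: result_queue)

if __name__ == '__main__':
    manager = QueueManager(address=('127.0.0.1', 5000), authkey=b'abc')
    server = manager.get_server()
    print('serving on 127.0.0.1:5000, press Ctrl+C to stop')
    server.serve_forever()   # blocks here; tasks must be put by another process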

Run task_master.py first: it starts the manager, puts ten random tasks on the task queue, and then waits for results to arrive on the result queue.


Note that in a distributed multi-process environment you cannot add tasks by operating on the original task_queue object directly; that would bypass the QueueManager encapsulation. Tasks must be added through the Queue interface obtained from manager.get_task_queue().
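As an illustrative excerpt (not a complete script), the difference inside task_master.py looks like this:

# After manager.start(), the server process holds its own copy of the queue,
# so writing to the local task_queue object never reaches the workers:
# task_queue.put(n)                    # wrong: bypasses the QueueManager

task = manager.get_task_queue()        # right: a proxy to the queue served by the manager
task.put(n)                            # the put() travels over the network to the server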

The code for the worker (task) process is as follows:

# task_worker.py
# coding=utf-8
# Multi-process distributed example: worker (non-server) side
import time, sys, queue
from multiprocessing.managers import BaseManager

# Create a similar QueueManager
class QueueManager(BaseManager):
    pass

# Because this QueueManager only obtains the Queues from the network,
# only the names are provided when registering
QueueManager.register('get_task_queue')
QueueManager.register('get_result_queue')

# Connect to the server, i.e. the machine running task_master.py
server_addr = '127.0.0.1'
print('connect to server %s...' % server_addr)
# The port and authkey must be exactly the same as on the master
m = QueueManager(address=(server_addr, 5000), authkey=b'abc')
# Connect over the network
m.connect()
# Obtain the Queue objects
task = m.get_task_queue()
result = m.get_result_queue()
# Fetch tasks from the task queue and write the results to the result queue
for i in range(10):
    try:
        n = task.get(timeout=1)
        print('run task %d * %d...' % (n, n))
        r = '%d * %d = %d' % (n, n, n * n)
        time.sleep(1)
        result.put(r)
    except queue.Empty:
        print('task queue is empty')
# Processing finished
print('worker exit')

Then run task_worker.py (on the same machine or a different one). The worker fetches each task n, computes n * n, and writes the result string back to the result queue; once it finishes, the master prints the results and exits.
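The worker does not have to run on the same machine as the master, which is the whole point of the example. To try it across two machines, the worker only needs a different server_addr (the port and authkey must stay identical), and the master has to bind to an address reachable from the worker rather than 127.0.0.1. A hedged sketch, with 192.168.0.10 standing in for the master's real LAN IP:

# In task_worker.py: point at the master's address instead of localhost
server_addr = '192.168.0.10'    # placeholder; use the master machine's actual IP
m = QueueManager(address=(server_addr, 5000), authkey=b'abc')

# In task_master.py: bind to an externally reachable address (or '' for all interfaces)
# manager = QueueManager(address=('192.168.0.10', 5000), authkey=b'abc')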

That is all the content of this article. I hope it is helpful for your learning.
