Using Python's multiprocessing to implement a very simple distributed job scheduling system

Introduction

Python's multiprocessing module not only supports multiple processes on a single machine; its managers sub-module also supports distributing work across processes running on multiple machines. A service process can act as a scheduler, dispatching tasks to processes on other machines over the network.
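As a quick illustration of that mechanism (a minimal sketch only; the registered queue name, port, and authkey below are arbitrary choices and not part of this article's code), a manager process can expose an ordinary Queue over TCP, and any process that knows the address and authkey can obtain a proxy to it:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Minimal sketch: expose a Queue over the network via BaseManager.
# The registered name, port and authkey are illustrative choices only.
from Queue import Queue                       # "queue" on Python 3
from multiprocessing.managers import BaseManager

task_queue = Queue()

class QueueManager(BaseManager):
    pass

# Server side: publish the queue under a registered name and listen on a port
QueueManager.register('get_task_queue', callable=lambda: task_queue)
server = QueueManager(address=('0.0.0.0', 9999), authkey='demo')
# server.start() would run the manager in a separate process; a client on
# another machine could then reach the same queue like this:
#
#   QueueManager.register('get_task_queue')
#   client = QueueManager(address=('<server ip>', 9999), authkey='demo')
#   client.connect()
#   queue = client.get_task_queue()   # proxy object backed by the server's queue
#   queue.put('some task')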

With that in mind, I wondered whether this module could be used to implement a simple job scheduling system.

Implement Job

First, create a Job class; to keep testing simple it contains only a job ID attribute.

job.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

class Job:

    def __init__(self, job_id):
        self.job_id = job_id
Master

The master distributes jobs and prints information about the jobs that have been completed.

master.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from Queue import Queue
from multiprocessing.managers import BaseManager
from job import Job


class Master:

    def __init__(self):
        # dispatched job queue
        self.dispatched_job_queue = Queue()
        # finished job queue
        self.finished_job_queue = Queue()

    def get_dispatched_job_queue(self):
        return self.dispatched_job_queue

    def get_finished_job_queue(self):
        return self.finished_job_queue

    def start(self):
        # Register the dispatched job queue and the finished job queue on the network
        BaseManager.register('get_dispatched_job_queue', callable=self.get_dispatched_job_queue)
        BaseManager.register('get_finished_job_queue', callable=self.get_finished_job_queue)

        # Listen on a port and start the service
        manager = BaseManager(address=('0.0.0.0', 8888), authkey='jobs')
        manager.start()

        # Get the queues using the methods registered above
        dispatched_jobs = manager.get_dispatched_job_queue()
        finished_jobs = manager.get_finished_job_queue()

        # Dispatch 10 jobs at a time, and only dispatch the next 10 after they are all finished
        job_id = 0
        while True:
            for i in range(0, 10):
                job_id = job_id + 1
                job = Job(job_id)
                print('Dispatch job: %s' % job.job_id)
                dispatched_jobs.put(job)

            while not dispatched_jobs.empty():
                job = finished_jobs.get(60)
                print('Finished Job: %s' % job.job_id)

        manager.shutdown()

if __name__ == "__main__":
    master = Master()
    master.start()
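Note that master.py above (and slave.py below) targets Python 2: it imports Queue with a capital Q and passes the authkey as a str. As a rough sketch, and not part of the original article, running the same code under Python 3 would at least require these changes:

# Python 3 variant of the affected lines (sketch only):
from queue import Queue                      # the module was renamed in Python 3
from multiprocessing.managers import BaseManager

# authkey must be bytes under Python 3:
manager = BaseManager(address=('0.0.0.0', 8888), authkey=b'jobs')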
Slave

The slave runs the jobs dispatched by the master and returns the results.

slave.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import time
from Queue import Queue
from multiprocessing.managers import BaseManager
from job import Job


class Slave:

    def __init__(self):
        # dispatched job queue
        self.dispatched_job_queue = Queue()
        # finished job queue
        self.finished_job_queue = Queue()

    def start(self):
        # Register the dispatched job queue and the finished job queue on the network
        BaseManager.register('get_dispatched_job_queue')
        BaseManager.register('get_finished_job_queue')

        # Connect to the master
        server = '127.0.0.1'
        print('Connect to server %s...' % server)
        manager = BaseManager(address=(server, 8888), authkey='jobs')
        manager.connect()

        # Get the queues using the methods registered above
        dispatched_jobs = manager.get_dispatched_job_queue()
        finished_jobs = manager.get_finished_job_queue()

        # Run the job and return the result; this is only a simulated run,
        # so the job that is returned is simply the one that was received
        while True:
            job = dispatched_jobs.get(timeout=1)
            print('Run job: %s' % job.job_id)
            time.sleep(1)
            finished_jobs.put(job)

if __name__ == "__main__":
    slave = Slave()
    slave.start()
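In slave.py the "work" is only a time.sleep(1), and the job taken from the dispatched queue is put back onto the finished queue unchanged. If you wanted the slave to do something real, one possible direction (purely a sketch; the command and returncode attributes are hypothetical and not part of the original code) is to let each job carry a shell command and have the slave execute it:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch only: a job variant that carries a shell command to execute.
import subprocess

class CommandJob:
    def __init__(self, job_id, command):
        self.job_id = job_id
        self.command = command        # e.g. 'gzip /tmp/data.log' (hypothetical)
        self.returncode = None        # filled in by the slave

# The run loop in Slave.start() could then become:
#
#   while True:
#       job = dispatched_jobs.get(timeout=1)
#       print('Run job: %s' % job.job_id)
#       job.returncode = subprocess.call(job.command, shell=True)
#       finished_jobs.put(job)        # the master can inspect job.returncode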
Test

Open three Linux terminals: run the master in the first and a slave in each of the other two. The output looks like the following.

Master

$ python master.py
Dispatch job: 1
Dispatch job: 2
Dispatch job: 3
Dispatch job: 4
Dispatch job: 5
Dispatch job: 6
Dispatch job: 7
Dispatch job: 8
Dispatch job: 9
Dispatch job: 10
Finished Job: 1
Finished Job: 2
Finished Job: 3
Finished Job: 4
Finished Job: 5
Finished Job: 6
Finished Job: 7
Finished Job: 8
Finished Job: 9
Dispatch job: 11
Dispatch job: 12
...

Slave1

$ python slave.py
Connect to server 127.0.0.1...
Run job: 1
Run job: 2
Run job: 3
Run job: 5
Run job: 7
Run job: 9
Run job: 11
...

Slave2

$ python slave.py
Connect to server 127.0.0.1...
Run job: 4
Run job: 6
Run job: 8
Run job: 10
Run job: 12
...

When reposting, please credit the original article with a link: http://blog.csdn.net/kongxx/article/details/50883804
