Using Python's multiprocessing to implement a simple distributed job scheduling system

The multiprocessing module manages processes much as the threading module manages threads; that is the core of multiprocessing. Its API is very similar to threading's, but because it uses separate processes it makes far better use of multi-core CPUs.
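
As a quick illustration (a minimal sketch of my own, not part of the original article), the API really is nearly a drop-in replacement for threading: creating, starting and joining a Process looks just like doing the same with a Thread.

# Minimal sketch (not from the article): multiprocessing mirrors the threading API,
# so swapping threading.Thread for multiprocessing.Process is often all that changes.
from multiprocessing import Process

def work(n):
    print('square of %d is %d' % (n, n * n))

if __name__ == '__main__':
    p = Process(target=work, args=(5,))   # threading.Thread(target=work, args=(5,)) would look the same
    p.start()
    p.join()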

Introduction

Python's multiprocessing module supports more than running multiple processes on one machine: its managers sub-module can also distribute work across processes on multiple machines. A service process can act as the dispatcher, handing tasks out to worker processes on other machines and exchanging data with them over the network. A minimal sketch of this mechanism follows.
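
The core idea, sketched here with made-up names (this snippet is mine, not the article's code), is that a manager process exposes an ordinary Queue over TCP; any process on any machine that knows the address and authkey can obtain a proxy to that same queue.

# Minimal sketch of the managers mechanism (illustrative names, assumed port and key):
# the server process owns a Queue and serves it over the network.
from Queue import Queue                        # "from queue import Queue" on Python 3
from multiprocessing.managers import BaseManager

task_queue = Queue()

def get_task_queue():
    return task_queue

if __name__ == '__main__':
    BaseManager.register('get_task_queue', callable=get_task_queue)
    manager = BaseManager(address=('0.0.0.0', 5000), authkey='demo')
    server = manager.get_server()
    server.serve_forever()                     # clients connect() and call get_task_queue()

A client on another machine registers the same name without a callable, calls manager.connect(), and then uses the returned proxy exactly like a local Queue; the master and slave below follow this pattern.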

With that in mind, I wondered whether this module could be used to implement a simple job scheduling system.

Implementation

Job

First, create a Job class. To keep the test simple it has only a single attribute, the job ID.

job.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

class Job:
    def __init__(self, job_id):
        self.job_id = job_id

Master

The master dispatches jobs and prints the information of jobs that the slaves have finished running.

master.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from Queue import Queue
from multiprocessing.managers import BaseManager
from job import Job


class Master:

    def __init__(self):
        # Queue of jobs to be dispatched
        self.dispatched_job_queue = Queue()
        # Queue of completed jobs
        self.finished_job_queue = Queue()

    def get_dispatched_job_queue(self):
        return self.dispatched_job_queue

    def get_finished_job_queue(self):
        return self.finished_job_queue

    def start(self):
        # Register the dispatched-job queue and the finished-job queue on the network
        BaseManager.register('get_dispatched_job_queue', callable=self.get_dispatched_job_queue)
        BaseManager.register('get_finished_job_queue', callable=self.get_finished_job_queue)

        # Listen on the port and start the service
        manager = BaseManager(address=('0.0.0.0', 8888), authkey='jobs')
        manager.start()

        # Obtain the queues through the methods registered above
        dispatched_jobs = manager.get_dispatched_job_queue()
        finished_jobs = manager.get_finished_job_queue()

        # Dispatch 10 jobs at a time and wait until all 10 have finished
        # before dispatching the next 10
        job_id = 0
        while True:
            for i in range(0, 10):
                job_id = job_id + 1
                job = Job(job_id)
                print('Dispatch job: %s' % job.job_id)
                dispatched_jobs.put(job)

            while not dispatched_jobs.empty():
                job = finished_jobs.get()
                print('Finished job: %s' % job.job_id)

        manager.shutdown()


if __name__ == "__main__":
    master = Master()
    master.start()
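
One note of my own: the listings here target Python 2. If you want to try them on Python 3, the Queue module is renamed to queue and BaseManager requires a bytes authkey; a minimal sketch of just those adjustments (everything else stays the same) is:

# Sketch of the Python 3 adjustments only (assumes the same port and key as above)
from queue import Queue                        # Python 2's "Queue" module is "queue" in Python 3
from multiprocessing.managers import BaseManager

dispatched = Queue()

def get_dispatched_job_queue():
    return dispatched

if __name__ == '__main__':
    BaseManager.register('get_dispatched_job_queue', callable=get_dispatched_job_queue)
    manager = BaseManager(address=('0.0.0.0', 8888), authkey=b'jobs')   # authkey must be bytes
    manager.start()
    manager.shutdown()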

Slave

The slave takes jobs dispatched by the master, runs them, and returns the results.

slave.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
from Queue import Queue
from multiprocessing.managers import BaseManager
from job import Job


class Slave:

    def __init__(self):
        # Queue of jobs to be dispatched
        self.dispatched_job_queue = Queue()
        # Queue of completed jobs
        self.finished_job_queue = Queue()

    def start(self):
        # Register the dispatched-job queue and the finished-job queue on the network
        BaseManager.register('get_dispatched_job_queue')
        BaseManager.register('get_finished_job_queue')

        # Connect to the master
        server = '127.0.0.1'
        print('Connect to server %s...' % server)
        manager = BaseManager(address=(server, 8888), authkey='jobs')
        manager.connect()

        # Obtain the queues through the methods registered above
        dispatched_jobs = manager.get_dispatched_job_queue()
        finished_jobs = manager.get_finished_job_queue()

        # Run the job and return the result; this only simulates a job run,
        # so the received job is simply sent back
        while True:
            job = dispatched_jobs.get(timeout=1)
            print('Run job: %s' % job.job_id)
            time.sleep(1)
            finished_jobs.put(job)


if __name__ == "__main__":
    slave = Slave()
    slave.start()
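
A detail worth calling out (my observation, not something the original discusses): dispatched_jobs.get(timeout=1) raises Queue.Empty when the master has nothing queued for a second, which ends the slave. A slightly more tolerant loop, sketched under the same assumptions, simply catches the exception and keeps polling:

# Sketch of a more forgiving work loop (same queue proxies as in slave.py above).
# If no job arrives within the timeout, poll again instead of crashing.
import time
from Queue import Empty                        # "from queue import Empty" on Python 3

def work_loop(dispatched_jobs, finished_jobs):
    while True:
        try:
            job = dispatched_jobs.get(timeout=1)
        except Empty:
            continue                           # nothing to do yet; try again
        print('Run job: %s' % job.job_id)
        time.sleep(1)                          # simulate the real work
        finished_jobs.put(job)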

Test

Open three Linux terminals: run the master in the first and a slave in each of the second and third. The output looks like this:

Master

$ python master.py
Dispatch job: 1
Dispatch job: 2
Dispatch job: 3
Dispatch job: 4
Dispatch job: 5
Dispatch job: 6
Dispatch job: 7
Dispatch job: 8
Dispatch job: 9
Dispatch job: 10
Finished job: 1
Finished job: 2
Finished job: 3
Finished job: 4
Finished job: 5
Finished job: 6
Finished job: 7
Finished job: 8
Finished job: 9
Dispatch job: 11
Dispatch job: 12
Dispatch job: 13
Dispatch job: 14
Dispatch job: 15
Dispatch job: 16
Dispatch job: 17
Dispatch job: 18
Dispatch job: 19
Dispatch job: 20
Finished job: 10
Finished job: 11
Finished job: 12
Finished job: 13
Finished job: 14
Finished job: 15
Finished job: 16
Finished job: 17
Finished job: 18
Dispatch job: 21
Dispatch job: 22
Dispatch job: 23
Dispatch job: 24
Dispatch job: 25
Dispatch job: 26
Dispatch job: 27
Dispatch job: 28
Dispatch job: 29
Dispatch job: 30

Slave1

Slave2

That wraps up this short walkthrough of using multiprocessing to build a very simple distributed job scheduling system. I hope you find it helpful!
