Implementing a simple queue and a cross-process lock with Python and MySQL


In multi-process application development, we inevitably run into multiple processes accessing the same resource (a critical resource). This requires synchronizing access to the resource with a global lock, so that only one process can access it at a time.

Here is an example. Suppose we use MySQL to implement a task queue. The implementation goes as follows:

1. Create a jobs table in MySQL to store the queued tasks:

CREATE TABLE jobs (
  id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
  message TEXT NOT NULL,
  job_status TINYINT NOT NULL DEFAULT 0
);

The message column stores the task information, and job_status identifies the status of the task. Assume there are only two states, 0: in the queue, 1: dequeued.

2. A producer process puts new data into the jobs table to enqueue it:

INSERT INTO jobs (message) VALUES ('msg1');

3. Assume there are multiple consumer processes that take queued messages from the jobs table. Each consumer performs the following actions:

SELECT * FROM jobs WHERE job_status = 0 ORDER BY id ASC LIMIT 1;
UPDATE jobs SET job_status = 1 WHERE id = ?;  -- id is the record ID just fetched

4. Without a cross-process lock, two consumer processes may fetch the same message at the same time, causing one message to be consumed more than once. This is a situation we don't want to see, so we need to implement a lock that spans processes.
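The race in step 4 can be reproduced deterministically. The following is a minimal sketch, using sqlite3 as a stand-in for MySQL so it runs standalone; the unsafe interleaving of two consumers is simulated sequentially, with both running their SELECT before either runs its UPDATE.

```python
import sqlite3

# sqlite3 stands in for MySQL here so the sketch runs standalone.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "message TEXT NOT NULL, job_status INTEGER NOT NULL DEFAULT 0)")
conn.execute("INSERT INTO jobs (message) VALUES ('msg1')")
conn.commit()

def select_next(cur):
    # The consumer's first statement from step 3.
    cur.execute("SELECT id, message FROM jobs WHERE job_status = 0 "
                "ORDER BY id ASC LIMIT 1")
    return cur.fetchone()

# Two "consumers" both run their SELECT before either runs its UPDATE:
consumer_a = conn.cursor()
consumer_b = conn.cursor()
job_a = select_next(consumer_a)
job_b = select_next(consumer_b)   # neither UPDATE has run yet
assert job_a == job_b             # both consumers fetched the SAME message
```

With a cross-process lock around the SELECT + UPDATE pair, the second consumer would not be allowed to run its SELECT until the first had marked the row consumed.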

========================= Split Line =======================================

When it comes to implementing locks across processes, there are several approaches:

(1) Semaphores
(2) File locks (fcntl)
(3) Sockets (binding a port number)
(4) Signals

Each of these approaches has its pros and cons; in general, the first two are used somewhat more often. I won't elaborate on them here; you can look up the details yourself.
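As an illustration of approach (2), here is a minimal sketch of a cross-process lock built on fcntl.flock. The FileLock class name and the lock-file path are my own choices for the example; any processes that open the same lock file contend for the same lock (Unix only).

```python
import fcntl
import os
import tempfile

class FileLock(object):
    """A minimal cross-process lock built on fcntl.flock (Unix only)."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        # 'a' mode creates the lock file if it does not exist yet.
        self.fd = open(self.path, 'a')
        # LOCK_EX blocks until no other process holds the lock.
        fcntl.flock(self.fd, fcntl.LOCK_EX)

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        self.fd.close()
        self.fd = None

lock_path = os.path.join(tempfile.gettempdir(), 'glock_demo.lock')
lock = FileLock(lock_path)
lock.acquire()
# ... critical section: only one process at a time gets here ...
lock.release()
```

Unlike the MySQL approach below, this requires all processes to share a filesystem, so it does not work across machines.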

While researching, I found that MySQL itself provides a lock implementation. It is suitable for scenarios where performance requirements are not very high; under heavy concurrent distributed access it may become a bottleneck.

Here is a demo implemented in Python, as follows:

FileName: glock.py

#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
# Desc: cross-process lock based on MySQL GET_LOCK()/RELEASE_LOCK()

import logging
import time

import MySQLdb


class Glock:
    def __init__(self, db):
        self.db = db

    def _execute(self, sql):
        cursor = self.db.cursor()
        try:
            ret = None
            cursor.execute(sql)
            if cursor.rowcount != 1:
                logging.error("Multiple rows returned in MySQL lock function.")
                ret = None
            else:
                ret = cursor.fetchone()
            cursor.close()
            return ret
        except Exception, ex:
            logging.error('Execute sql "%s" failed! Exception: %s', sql, str(ex))
            cursor.close()
            return None

    def lock(self, lockstr, timeout):
        sql = "SELECT GET_LOCK('%s', %s)" % (lockstr, timeout)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("Another client has previously locked '%s'.", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' was obtained successfully.", lockstr)
            return True
        else:
            logging.error("Error occurred!")
            return None

    def unlock(self, lockstr):
        sql = "SELECT RELEASE_LOCK('%s')" % (lockstr)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("The lock '%s' was not released (it was not established by this thread).", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' was released.", lockstr)
            return True
        else:
            logging.error("The lock '%s' did not exist.", lockstr)
            return None


def init_logging():
    sh = logging.StreamHandler()
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(module)s:%(filename)s - L%(lineno)d - %(levelname)s: %(message)s')
    sh.setFormatter(formatter)
    logger.addHandler(sh)
    logging.info("Current log level is: %s", logging.getLevelName(logger.getEffectiveLevel()))


def main():
    init_logging()
    db = MySQLdb.connect(host='localhost', user='root', passwd='')
    lock_name = 'queue'
    l = Glock(db)
    ret = l.lock(lock_name, 10)
    if ret != True:
        logging.error("Can't get lock! Exit!")
        quit()
    time.sleep(10)
    logging.info("You can do some synchronization work across processes!")
    ##TODO
    ## you can do something in here ##
    l.unlock(lock_name)


if __name__ == "__main__":
    main()

In the main function:

In l.lock(lock_name, 10), the 10 indicates that the timeout is 10 seconds. If the lock cannot be obtained within 10 seconds, lock() returns and the following operations execute.

In this demo, where the TODO is marked, you can put the logic for the consumer to take messages from the jobs table, that is, the consumer actions from step 3 above the split line:

SELECT * FROM jobs WHERE job_status = 0 ORDER BY id ASC LIMIT 1;
UPDATE jobs SET job_status = 1 WHERE id = ?;  -- id is the record ID just fetched

In this way, we can ensure data consistency when multiple processes access a critical resource.
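The TODO section could be filled in with something like the following sketch. The function name fetch_one_job is my own; sqlite3 is used here only so the sketch runs standalone, while the article's setup would use MySQLdb (with '%s' placeholders instead of '?'). This is exactly the critical section that l.lock()/l.unlock() must wrap.

```python
import sqlite3

def fetch_one_job(conn):
    """Pop the oldest queued job: SELECT it, then mark it consumed.

    This SELECT + UPDATE pair is the critical section the cross-process
    lock must protect.
    """
    cur = conn.cursor()
    cur.execute("SELECT id, message FROM jobs WHERE job_status = 0 "
                "ORDER BY id ASC LIMIT 1")
    row = cur.fetchone()
    if row is None:
        return None                       # queue is empty
    job_id, message = row
    cur.execute("UPDATE jobs SET job_status = 1 WHERE id = ?", (job_id,))
    conn.commit()
    return message

# Standalone demo (sqlite3 stands in for MySQL):
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY AUTOINCREMENT, "
             "message TEXT NOT NULL, job_status INTEGER NOT NULL DEFAULT 0)")
conn.execute("INSERT INTO jobs (message) VALUES ('msg1')")
conn.execute("INSERT INTO jobs (message) VALUES ('msg2')")
conn.commit()
print(fetch_one_job(conn))   # msg1
print(fetch_one_job(conn))   # msg2
```

Jobs come out in insertion order because of ORDER BY id ASC, and a consumed job is never returned again because its job_status is flipped to 1.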

When testing, start two instances of glock.py. The results are as follows:

[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:40,277 - glock:glock.py - L70 - INFO: Current log level is: DEBUG
2014-03-14 17:08:40,299 - glock:glock.py - L43 - DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:08:50,299 - glock:glock.py - L81 - INFO: You can do some synchronization work across processes!
2014-03-14 17:08:50,299 - glock:glock.py - L56 - DEBUG: The lock 'queue' was released.

You can see that the first glock.py releases the lock at 17:08:50, and the second glock.py obtains the lock at exactly 17:08:50, which confirms that this approach is entirely feasible.

[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:46,873 - glock:glock.py - L70 - INFO: Current log level is: DEBUG
2014-03-14 17:08:50,299 - glock:glock.py - L43 - DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:09:00,299 - glock:glock.py - L81 - INFO: You can do some synchronization work across processes!
2014-03-14 17:09:00,300 - glock:glock.py - L56 - DEBUG: The lock 'queue' was released.
[@tj-10-47 test]#
