A simple MySQL-based task queue and cross-process locks in Python

Source: Internet
Author: User

In long-running, multi-process applications we inevitably run into situations where several processes access the same resource (a critical resource). A global lock is needed to synchronize access, so that only one process touches the resource at a time.

As an example:

Suppose we use MySQL to implement a task queue. The process is as follows:

1. Create a jobs table in MySQL to store the queued tasks:

CREATE TABLE jobs (
    id INT AUTO_INCREMENT NOT NULL PRIMARY KEY,
    message TEXT NOT NULL,
    job_status INT NOT NULL DEFAULT 0
);

message stores the task payload; job_status identifies the state of the task. Assume there are only two states: 0 (in the queue) and 1 (dequeued).

2. A producer process puts new tasks into the jobs table, i.e. enqueues them:

INSERT INTO jobs (message) VALUES ('msg1');

3. Assume there are multiple consumer processes that take queued messages from the jobs table. Each does the following:

SELECT * FROM jobs WHERE job_status = 0 ORDER BY id ASC LIMIT 1;
UPDATE jobs SET job_status = 1 WHERE id = ?;  -- id is the record id just fetched

4. Without a cross-process lock, two consumer processes may fetch the same message at the same time, so a message gets consumed more than once. That is not what we want, so we need a cross-process lock.
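The enqueue/dequeue steps above can be sketched in Python. This sketch uses the standard library's sqlite3 as a stand-in for MySQL (an assumption, purely so the example is self-contained and runnable; with MySQLdb only the driver and placeholder style change), and it deliberately contains the race from step 4: nothing stops two processes from running dequeue() between the SELECT and the UPDATE.

```python
# A sketch of the producer/consumer steps above, using the standard
# library's sqlite3 as a stand-in for MySQL purely so the example is
# self-contained (an assumption; with MySQLdb only the driver and the
# placeholder style change).
import sqlite3

def dequeue(db):
    # Step 3: fetch the oldest queued job, then mark it as dequeued.
    # Nothing here prevents two processes from fetching the same row
    # between the SELECT and the UPDATE -- the race from step 4.
    cur = db.cursor()
    cur.execute("SELECT id, message FROM jobs WHERE job_status = 0 "
                "ORDER BY id ASC LIMIT 1")
    row = cur.fetchone()
    if row is None:
        return None  # queue is empty
    job_id, message = row
    cur.execute("UPDATE jobs SET job_status = 1 WHERE id = ?", (job_id,))
    db.commit()
    return message

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY AUTOINCREMENT, "
               "message TEXT NOT NULL, job_status INTEGER NOT NULL DEFAULT 0)")
    db.execute("INSERT INTO jobs (message) VALUES ('msg1')")  # step 2: enqueue
    db.execute("INSERT INTO jobs (message) VALUES ('msg2')")
    print(dequeue(db))  # msg1
    print(dequeue(db))  # msg2
    print(dequeue(db))  # None: the queue is empty
```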

Here is a very good article on this topic for reference:

https://blog.engineyard.com/2011/5-subtle-ways-youre-using-mysql-as-a-queue-and-why-itll-bite-you

=======================================================================

There are several ways to implement a cross-process lock:

1. Semaphores

2. File locks (fcntl)

3. Sockets (binding a port)

4. Signals

Each approach has its pros and cons. Overall, the first two are probably used more often. I won't elaborate here; you can consult the documentation yourself.
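As an illustration of approach 2, here is a minimal sketch of a cross-process lock built on fcntl.flock; the lock file path /tmp/glock.lock is an arbitrary choice for this example.

```python
# A minimal sketch of approach 2: a cross-process lock built on
# fcntl.flock. Every process that opens the same lock file contends
# for the same lock; the path is an arbitrary choice for this example.
import fcntl
import os

class FileLock:
    def __init__(self, path):
        # The file only serves as a lock token; its content is unused.
        self.fd = os.open(path, os.O_CREAT | os.O_RDWR)

    def acquire(self, blocking=True):
        flags = fcntl.LOCK_EX
        if not blocking:
            flags |= fcntl.LOCK_NB
        try:
            fcntl.flock(self.fd, flags)
            return True
        except IOError:
            # Non-blocking attempt failed: another process holds the lock.
            return False

    def release(self):
        fcntl.flock(self.fd, fcntl.LOCK_UN)

if __name__ == "__main__":
    lock = FileLock("/tmp/glock.lock")
    if lock.acquire():
        print("lock acquired")
        # ... access the critical resource here ...
        lock.release()
```

The kernel releases a flock lock automatically when the process exits, which is one reason file locks are a popular choice for this kind of mutual exclusion.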

While researching this, I found that MySQL itself provides a lock (GET_LOCK / RELEASE_LOCK). It suits scenarios where the performance requirements are not very high; under heavy concurrent distributed access it may become a bottleneck. The link is as follows:

http://dev.mysql.com/doc/refman/5.0/fr/miscellaneous-functions.html

I implemented a demo with Python, as follows:

File name: glock.py

#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-
#
# Author: yunjianfei
# E-mail: [email protected]
# Date:   2014/02/25
# Desc:
#

import logging
import time
import MySQLdb


class Glock:
    def __init__(self, db):
        self.db = db

    def _execute(self, sql):
        cursor = self.db.cursor()
        try:
            ret = None
            cursor.execute(sql)
            if cursor.rowcount != 1:
                logging.error("Multiple rows returned in MySQL lock function.")
                ret = None
            else:
                ret = cursor.fetchone()
            cursor.close()
            return ret
        except Exception, ex:
            logging.error("Execute sql \"%s\" failed! Exception: %s", sql, str(ex))
            cursor.close()
            return None

    def lock(self, lockstr, timeout):
        sql = "SELECT GET_LOCK('%s', %s)" % (lockstr, timeout)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("Another client has previously locked '%s'.", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' was obtained successfully.", lockstr)
            return True
        else:
            logging.error("Error occurred!")
            return None

    def unlock(self, lockstr):
        sql = "SELECT RELEASE_LOCK('%s')" % (lockstr)
        ret = self._execute(sql)
        if ret[0] == 0:
            logging.debug("The lock '%s' was not released (it was not established by this thread).", lockstr)
            return False
        elif ret[0] == 1:
            logging.debug("The lock '%s' was released.", lockstr)
            return True
        else:
            logging.error("The lock '%s' does not exist.", lockstr)
            return None


# Init logging
def init_logging():
    sh = logging.StreamHandler()
    logger = logging.getLogger()
    logger.setLevel(logging.DEBUG)
    formatter = logging.Formatter('%(asctime)s - %(module)s:%(filename)s-L%(lineno)d - %(levelname)s: %(message)s')
    sh.setFormatter(formatter)
    logger.addHandler(sh)
    logging.info("Current log level is: %s", logging.getLevelName(logger.getEffectiveLevel()))


def main():
    init_logging()
    db = MySQLdb.connect(host='localhost', user='root', passwd='')
    lock_name = 'queue'
    l = Glock(db)
    ret = l.lock(lock_name, 10)
    if ret != True:
        logging.error("Can't get lock! Exit!")
        quit()
    time.sleep(10)
    logging.info("You can do some synchronization work across processes!")
    ##TODO
    ## You can do something here
    ##
    l.unlock(lock_name)


if __name__ == "__main__":
    main()

In the main function, l.lock(lock_name, 10) uses a timeout of 10 seconds: if the lock cannot be obtained within 10 seconds, the call returns and the subsequent handling runs.

In this demo, at the spot marked TODO you can put the consumer logic that takes messages from the jobs table, i.e. step 3 above the dividing line:

3. Assume there are multiple consumer processes that take queued messages from the jobs table. Each does the following:

SELECT * FROM jobs WHERE job_status = 0 ORDER BY id ASC LIMIT 1;
UPDATE jobs SET job_status = 1 WHERE id = ?;  -- id is the record id just fetched

This way, multiple processes access the critical resource synchronously, guaranteeing data consistency.

When testing, start two instances of glock.py. The results are as follows:

[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:40,277 - glock:glock.py-L70 - INFO: Current log level is: DEBUG
2014-03-14 17:08:40,299 - glock:glock.py-L43 - DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:08:50,299 - glock:glock.py-L81 - INFO: You can do some synchronization work across processes!
2014-03-14 17:08:50,299 - glock:glock.py-L56 - DEBUG: The lock 'queue' was released.

You can see that the first glock.py releases the lock at 17:08:50, and the second glock.py, started while the first still held the lock, only acquires it at 17:08:50. This confirms that the mutual exclusion works:

[@tj-10-47 test]# ./glock.py
2014-03-14 17:08:46,873 - glock:glock.py-L70 - INFO: Current log level is: DEBUG
2014-03-14 17:08:50,299 - glock:glock.py-L43 - DEBUG: The lock 'queue' was obtained successfully.
2014-03-14 17:09:00,299 - glock:glock.py-L81 - INFO: You can do some synchronization work across processes!
2014-03-14 17:09:00,300 - glock:glock.py-L56 - DEBUG: The lock 'queue' was released.
[@tj-10-47 test]#

