Redis Lock Structure and Redis Locks


Single thread and isolation

Redis executes commands on a single thread, so commands and transactions run serially, one after another. In other words, the execution of a single command or of a whole transaction in Redis is thread-safe: concurrent clients cannot interleave with it, which gives natural isolation.

In multi-threaded programming, be careful when accessing shared resources:

import threading

num = 1
lock = threading.Lock()

def change_num():
    global num
    for i in xrange(100000):
        #lock.acquire()
        num += 5
        num -= 5
        #lock.release()

if __name__ == '__main__':
    pool = [threading.Thread(target=change_num) for i in xrange(5)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    print num

Without the lock, num cannot be reliably kept at 1.

In Redis, by contrast, concurrent execution of single commands is isolated:

import threading
import redis

conn = redis.StrictRedis(host="localhost", port=6379, db=1)
conn.set('num', 1)

def change_num(conn):
    for i in xrange(100000):
        conn.incr('num', 5)
        conn.decr('num', 5)

if __name__ == '__main__':
    conn_pool = [redis.StrictRedis(host="localhost", port=6379, db=1)
                 for i in xrange(5)]
    t_pool = []
    for conn in conn_pool:
        t = threading.Thread(target=change_num, args=(conn,))
        t_pool.append(t)
    for t in t_pool:
        t.start()
    for t in t_pool:
        t.join()
    print conn.get('num')

Five simulated clients operate on the num value in Redis at the same time, and the final value of num stays at 1:

1

real    0m46.463s
user    0m28.748s
sys     0m6.276s

The atomicity of single commands and transactions in Redis can be used for many things. The simplest is a global counter.
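As a minimal sketch of such a counter (the key name views:<page_id> is only an illustration, not from the original article):

import redis

conn = redis.StrictRedis(host="localhost", port=6379, db=1)

def count_view(page_id):
    # INCR is atomic, so concurrent clients never lose an increment
    return conn.incr('views:' + str(page_id))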

For example, an SMS verification-code service may allow each user to request a code only once per minute. With a relational database, you would have to record, for each mobile phone number, the time of the last request, then read that timestamp back and compare it with the current time.

When a user clicks several times in a short period, this not only increases the load on the database, it also opens a race: two requests may both read a timestamp that satisfies the condition before the database has been updated, so the text message is sent twice.

To solve this problem in Redis, you only need to use the mobile phone number as the key and create a value with a lifetime of one minute. If the key does not exist yet, the message can be sent; if the key already exists, it cannot:

def can_send(phone):
    key = "message:" + str(phone)
    if conn.set(key, 0, nx=True, ex=60):
        return True
    else:
        return False

For features that limit access or downloads to, say, five times within 30 minutes, use the client IP address as the key, set the value to the maximum number of attempts, set the expiration time to the length of the window, and decrement the value on every access:

def can_download(ip):
    key = "ip:" + str(ip)
    # create the counter only if it does not exist yet: 5 attempts per 30-minute window
    conn.set(key, 5, nx=True, ex=1800)
    if conn.decr(key) >= 0:
        return True
    else:
        return False
Redis basic transactions and optimistic locks

Although a single Redis command is atomic, problems reappear when several commands issued by different clients execute in parallel.

For example, to transfer money from user A to user B, the balance of A's account must be decreased and the balance of B's account increased at the same time:

import threading
import time
import redis

conn = redis.StrictRedis(host="localhost", port=6379, db=1)
conn.mset(a_num=10, b_num=10)

def a_to_b():
    if int(conn.get('a_num')) >= 10:
        conn.decr('a_num', 10)
        time.sleep(.1)
        conn.incr('b_num', 10)
    print conn.mget('a_num', "b_num")

def b_to_a():
    if int(conn.get('b_num')) >= 10:
        conn.decr('b_num', 10)
        time.sleep(.1)
        conn.incr('a_num', 10)
    print conn.mget('a_num', "b_num")

if __name__ == '__main__':
    pool = [threading.Thread(target=a_to_b) for i in xrange(3)]
    for t in pool:
        t.start()
    pool = [threading.Thread(target=b_to_a) for i in xrange(3)]
    for t in pool:
        t.start()

Running result:

['0', '10']
['0', '10']
['0', '0']
['0', '0']
['0', '10']
['10', '10']

The total amount across the two accounts can end up lower than it started. The 100 ms delay between the decrement and the increment was inserted deliberately to make the race easier to hit, but even without it, other clients' commands can easily run between the two commands.

We now need to guarantee that the paired decrement and increment are not interleaved with other commands while they execute. Redis transactions can achieve this.

In Redis, all commands wrapped between MULTI and EXEC are executed one by one, without interruption, until they have all finished; only then does Redis go back to processing other clients' commands. In that sense, Redis transactions are atomic.

In Python, you can use a pipeline to create transactions:

def a_to_b():
    if int(conn.get('a_num')) >= 10:
        pipeline = conn.pipeline()
        pipeline.decr('a_num', 10)
        time.sleep(.1)
        pipeline.incr('b_num', 10)
        pipeline.execute()
    print conn.mget('a_num', "b_num")

def b_to_a():
    if int(conn.get('b_num')) >= 10:
        pipeline = conn.pipeline()
        pipeline.decr('b_num', 10)
        time.sleep(.1)
        pipeline.incr('a_num', 10)
        pipeline.execute()
    print conn.mget('a_num', "b_num")

Result:

['0', '20']
['10', '10']
['-10', '30']
['-10', '30']
['0', '20']
['10', '10']

The two commands are now executed together and the total across the accounts no longer changes, but balances can go negative. This is because nothing in the transaction runs until EXEC is called, so there is a gap between reading the balance and executing the transaction, and the data may have been changed by another client in the meantime.

To keep the data consistent, we also need the transaction command WATCH. WATCH monitors a key; if the watched key is changed (set, updated, deleted, and so on) before EXEC runs, EXEC returns an error instead of executing the transaction:

>>> pipeline.watch('a_num')
True
>>> pipeline.multi()
>>> pipeline.incr('a_num', 10)
StrictPipeline<ConnectionPool<Connection

Adding WATCH to the code:

def a_to_b():
    pipeline = conn.pipeline()
    try:
        pipeline.watch('a_num')
        if int(pipeline.get('a_num')) < 10:
            pipeline.unwatch()
            return
        pipeline.multi()
        pipeline.decr('a_num', 10)
        pipeline.incr('b_num', 10)
        pipeline.execute()
    except redis.exceptions.WatchError:
        pass
    print conn.mget('a_num', "b_num")

def b_to_a():
    pipeline = conn.pipeline()
    try:
        pipeline.watch('b_num')
        if int(pipeline.get('b_num')) < 10:
            pipeline.unwatch()
            return
        pipeline.multi()
        pipeline.decr('b_num', 10)
        pipeline.incr('a_num', 10)
        pipeline.execute()
    except redis.exceptions.WatchError:
        pass
    print conn.mget('a_num', "b_num")

Result:

['0', '20']
['10', '10']
['20', '0']

The transfers now stay consistent, but three of the attempts fail outright. If you want each transfer to succeed whenever possible, you can retry for a limited number of attempts or for a limited amount of time:

def a_to_b():
    pipeline = conn.pipeline()
    end = time.time() + 5
    while time.time() < end:
        try:
            pipeline.watch('a_num')
            if int(pipeline.get('a_num')) < 10:
                pipeline.unwatch()
                return
            pipeline.multi()
            pipeline.decr('a_num', 10)
            pipeline.incr('b_num', 10)
            pipeline.execute()
            return True
        except redis.exceptions.WatchError:
            pass
    return False

In this way, Redis transactions can provide a lock-like mechanism, but it differs from the locks of a relational database. A relational database locks the rows being accessed, and other clients that try to write those rows are blocked until the lock is released.

Redis does not lock the data when WATCH is executed; if it finds that the data has already been modified by another client, it only notifies the client that issued WATCH (by failing the EXEC) and never blocks the modification itself. This is known as optimistic locking.

Build locks with SET()

Optimistic locking with WATCH is generally adequate, but it has a problem: the program keeps retrying failed transactions, and as load increases the number of retries can rise to an unacceptable level.

To implement the lock correctly, avoid the following situations:

  • Multiple processes acquire the lock at the same time.
  • The process holding the lock crashes before releasing it, and other processes have no way of knowing.
  • The lock expires and is released automatically while the holder is still running; unaware of this, the holder later tries to release a lock that by then belongs to another process.

To implement a lock in Redis, you need the SET() or SETNX() command. SETNX sets a value for a key only if the key does not exist. The SET command can now do the same thing when the NX option is added, and it can set an expiration time in the same call, which makes it practically purpose-built for locks.
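In redis-py the call looks roughly like this (the key and value below are only illustrative):

>>> conn.set('lock:resource', 'unique-token', nx=True, ex=10)
True
>>> # the second attempt returns None (no output): the key already exists
>>> conn.set('lock:resource', 'another-token', nx=True, ex=10)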

You only need to set a key named after the resource being locked. To obtain the lock, check whether the key exists: if it does, the resource is held by another process and you must wait for that process to release it; if it does not, create the key and you hold the lock:

import time
import uuid

class RedisLock(object):
    def __init__(self, conn, lockname, retry_count=3, timeout=10):
        self.conn = conn
        self.lockname = 'lock:' + lockname
        self.retry_count = int(retry_count)
        self.timeout = int(timeout)
        self.unique_id = str(uuid.uuid4())

    def acquire(self):
        retry = 0
        while retry < self.retry_count:
            # SET NX EX: create the key only if it does not already exist, with an expiration
            if self.conn.set(self.lockname, self.unique_id, nx=True, ex=self.timeout):
                return self.unique_id
            retry += 1
            time.sleep(.001)
        return False

    def release(self):
        # only delete the lock if it still holds the value we set
        if self.conn.get(self.lockname) == self.unique_id:
            self.conn.delete(self.lockname)
            return True
        else:
            return False

By default, acquiring the lock is attempted at most three times; if all three attempts fail, False is returned. The lock lifetime defaults to 10 s, so if the lock is never released it is removed automatically after 10 s.

The value written when the lock is acquired is also kept by the holder. When releasing, the holder first checks whether the saved value still matches the current value of the lock key; if it does not, the lock must have expired and been acquired by another process. This is why the lock value has to be unique: it prevents one program from releasing a lock that now belongs to another.

Using the lock:

def a_to_b():
    lock = RedisLock(conn, 'a_num')
    if not lock.acquire():
        return False
    pipeline = conn.pipeline()
    try:
        pipeline.get('a_num')
        (a_num,) = pipeline.execute()
        if int(a_num) < 10:
            return False
        pipeline.decr('a_num', 10)
        pipeline.incr('b_num', 10)
        pipeline.execute()
        return True
    finally:
        lock.release()

When releasing the lock, you can also use a Lua script to tell Redis: delete this key if and only if it exists and its value is the one I expect:

unlock_script = """
if redis.call("get",KEYS[1]) == ARGV[1] then
    return redis.call("del",KEYS[1])
else
    return 0
end"""

You can run the Lua script with conn.eval:

def release(self):
    self.conn.eval(unlock_script, 1, self.lockname, self.unique_id)

With this, a lock on a single Redis instance is implemented. We can use this lock in place of WATCH, or combine it with WATCH-based transactions.
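As a small convenience on top of the class above (not part of the original code; the helper name redis_lock is hypothetical), the lock can be wrapped in a context manager so that acquire and release are always paired:

import contextlib

@contextlib.contextmanager
def redis_lock(conn, lockname, **kwargs):
    # sketch: wraps the RedisLock class defined earlier
    lock = RedisLock(conn, lockname, **kwargs)
    if not lock.acquire():
        raise RuntimeError('could not acquire lock: ' + lockname)
    try:
        yield lock
    finally:
        lock.release()

# usage sketch:
# with redis_lock(conn, 'a_num'):
#     ... the transfer logic from a_to_b() ...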

In actual use, the granularity of the lock should also be decided by the business: whether to lock an entire structure or only a small part of it.

The larger the granularity, the worse the performance, and the smaller the granularity, the higher the chance of deadlock.

 
