"Redis Combat" Reading notes-Chapter II: Building Web Apps with Redis

Welcome to my blog. View the original post at: http://wangnan.tech

Login and Cookie Caching

Cookies: when we log in to an Internet service, the service uses cookies to record our identity. A cookie is a small amount of data that the website asks our browser to store and to send back to the service with every request.

There are two common ways to store login information in cookies: signed cookies and token cookies.

A signed cookie usually stores the user name, perhaps the user ID, the time the user last logged in successfully, and any other information the site finds useful. In addition, the cookie carries a signature that the server can use to verify that the information sent back by the browser has not been altered (for example, changing the login user name in the cookie to that of another user).

A token cookie stores a random string of bytes as a token. The server looks up the owner of the token in a database, and over time old tokens are replaced by new ones.
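As a side note (not from the book), here is a minimal sketch of how the signed cookie described above could be verified, assuming a server-side secret and Python's standard hmac module; the payload format is purely illustrative.

import hashlib
import hmac

SECRET = b'server-side-secret'  # assumption: a secret kept only on the server

def sign(payload):
    # payload is a string such as 'user=frank&last_login=1357894800'
    return hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()

def verify(payload, signature):
    # recompute the signature and compare in constant time
    return hmac.compare_digest(sign(payload), signature)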


Implementing a token login cookie with Redis: first we use a hash to store the mapping between login cookie tokens and logged-in users. To check whether a user is logged in, we look up the user corresponding to the given token and, if that user is already logged in, return the user's ID.

Attempt to fetch and return the user associated with a token:

def check_token(conn, token):
    return conn.hget('login:', token)

Updating tokens: each time the user views a page, the program updates the login hash and adds the user's token and the current timestamp to a sorted set of recently logged-in users. If the user is viewing a product page, the program also adds that item to a sorted set of items the user has recently viewed, and trims that sorted set whenever it grows beyond 25 items.

import time

def update_token(conn, token, user, item=None):
    timestamp = time.time()                              # get the current timestamp
    conn.hset('login:', token, user)                     # maintain the mapping from token to user
    conn.zadd('recent:', token, timestamp)               # record when the token was last seen
    if item:
        conn.zadd('viewed:' + token, item, timestamp)    # record that the user viewed this item
        conn.zremrangebyrank('viewed:' + token, 0, -26)  # keep only the 25 most recently viewed items
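A quick usage sketch (not from the book): the token is generated here with uuid, and conn is assumed to be a redis-py client; the user and item names are hypothetical.

import uuid

token = str(uuid.uuid4())                            # generate a random session token
update_token(conn, token, 'user:1', item='item:17')
print(check_token(conn, token))                      # -> 'user:1'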

The memory needed to store session data grows over time, so old data has to be cleaned up; here we keep at most 10 million sessions. The cleanup program is a loop: it checks the size of the sorted set of recent tokens, and if the limit is exceeded it removes up to 100 of the oldest tokens, deletes the corresponding entries from the hash that records user logins, and clears the associated browsing history. If no cleanup is needed, it sleeps for 1 second and then checks again. (Note: using Redis expiration times, you could instead have Redis delete the keys automatically after a period of time; see the sketch after the cleanup function.)

QUIT = False
LIMIT = 10000000

def clean_sessions(conn):
    while not QUIT:
        # find out how many tokens there are at the moment
        size = conn.zcard('recent:')
        # if the number of tokens is under the limit, sleep and check again later
        if size <= LIMIT:
            time.sleep(1)
            continue
        # fetch the token IDs that should be removed
        end_index = min(size - LIMIT, 100)
        tokens = conn.zrange('recent:', 0, end_index - 1)
        # build the key names for the tokens being deleted
        session_keys = []
        for token in tokens:
            session_keys.append('viewed:' + token)
        # remove the old tokens
        conn.delete(*session_keys)
        conn.hdel('login:', *tokens)
        conn.zrem('recent:', *tokens)
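The expiration-based alternative mentioned above could look roughly like the following sketch (not from the book). It stores one key per token rather than one shared hash, because a hash field cannot carry its own expiration time; the TTL value is an assumption.

SESSION_TTL = 30 * 24 * 60 * 60   # assumption: keep idle sessions for 30 days

def set_session(conn, token, user):
    # one string key per token, expired automatically by Redis
    conn.set('login:' + token, user, ex=SESSION_TTL)

def get_session(conn, token):
    # expired sessions simply come back as None
    return conn.get('login:' + token)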
Shopping Cart

Each user's shopping cart is a hash that stores the mapping from item IDs to the quantity ordered. Validating the quantities is the web application's job; all we need to do is update the cart whenever the ordered quantity of an item changes.

def add_to_cart(conn, session, item, count):
    if count <= 0:
        # a non-positive count removes the item from the cart
        conn.hdel('cart:' + session, item)
    else:
        # otherwise store the new quantity for the item
        conn.hset('cart:' + session, item, count)
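A brief usage sketch (the session ID and item name are hypothetical):

add_to_cart(conn, 'session-token', 'item:17', 3)   # put three units of item:17 in the cart
add_to_cart(conn, 'session-token', 'item:17', 0)   # a zero count removes it again
print(conn.hgetall('cart:session-token'))          # inspect the whole cart hash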
Web Cache
def cache_request(conn, request, callback):
    # for requests that cannot be cached, call the callback directly
    if not can_cache(conn, request):
        return callback(request)
    # turn the request into a simple string key for later lookups
    page_key = 'cache:' + hash_request(request)
    # try to find the cached page
    content = conn.get(page_key)
    if not content:
        # the page has not been cached yet, so generate it
        content = callback(request)
        # cache the page for 300 seconds (argument order follows the book's older redis-py client)
        conn.setex(page_key, content, 300)
    return content
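The book leaves hash_request() and the request format to the application. A hypothetical stand-in, assuming the request is just a URL string, might be:

import hashlib

def hash_request(request):
    # hypothetical: derive a short, stable cache key from the request URL
    return hashlib.sha1(request.encode()).hexdigest()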
Data Row Cache

To cope with the heavy load of promotional sales, we cache data rows with a daemon function that runs continuously: it caches the specified rows into Redis and refreshes those cached copies at regular intervals, with each row converted to JSON and stored in a Redis string.

The program uses two sorted sets to record when and how often the cache should be updated. The first is the schedule sorted set: its members are row IDs and its scores are timestamps recording when each row should next be cached into Redis. The second is the delay sorted set: its members are also row IDs, and its scores record how many seconds should pass between consecutive cache updates of each row.

Function for scheduling the caching of a data row (and, via a non-positive delay, for stopping it):

def schedule_row_cache(conn, row_id, delay):
    # first set the delay for the row
    conn.zadd('delay:', row_id, delay)
    # then schedule the row to be cached immediately
    conn.zadd('schedule:', row_id, time.time())
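A brief usage sketch (the row ID is hypothetical); a non-positive delay tells the daemon below to stop caching the row and to drop its cached copy.

schedule_row_cache(conn, 'item:17', 5)   # cache row item:17 and refresh it every 5 seconds
schedule_row_cache(conn, 'item:17', 0)   # stop caching item:17 and remove it from the cache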

Daemon function responsible for caching data rows:

import json

def cache_rows(conn):
    while not QUIT:
        # try to get the next row that should be cached, together with its
        # schedule timestamp; this returns a list of zero or one tuples
        next = conn.zrange('schedule:', 0, 0, withscores=True)
        now = time.time()

        # nothing to cache, or the row is not due yet: sleep 50 ms and retry
        if not next or next[0][1] > now:
            time.sleep(.05)
            continue
        row_id = next[0][0]

        # fetch the delay before the next scheduled caching of this row
        delay = conn.zscore('delay:', row_id)
        if delay <= 0:
            # this row should no longer be cached: remove it from the delay
            # set, the schedule, and the cache itself
            conn.zrem('delay:', row_id)
            conn.zrem('schedule:', row_id)
            conn.delete('inv:' + row_id)
            continue

        # read the data row, update the schedule time, and set the cache value
        row = Inventory.get(row_id)
        conn.zadd('schedule:', row_id, now + delay)
        conn.set('inv:' + row_id, json.dumps(row.to_dict()))
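Inventory is not defined in these notes; a hypothetical stub, just enough for the daemon above to run, could look like this:

class Inventory(object):
    # hypothetical stand-in for a real database-backed row class
    def __init__(self, id):
        self.id = id

    @classmethod
    def get(cls, row_id):
        # a real implementation would load the row from the database here
        return cls(row_id)

    def to_dict(self):
        # the dictionary that cache_rows() serializes to JSON
        return {'id': self.id, 'data': 'example data to cache'}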
Web Analytics

Add one line to the original update_token():

def update_token(conn, token, user, item=None):
    timestamp = time.time()                              # get the current timestamp
    conn.hset('login:', token, user)                     # maintain the mapping from token to user
    conn.zadd('recent:', token, timestamp)               # record when the token was last seen
    if item:
        conn.zadd('viewed:' + token, item, timestamp)    # record that the user viewed this item
        conn.zremrangebyrank('viewed:' + token, 0, -26)  # keep only the 25 most recently viewed items
        conn.zincrby('viewed:', item, -1)                # newly added line: update the item's view count

The newly added line records how many times each item has been viewed. Because it subtracts 1 on every view, the most viewed items have the lowest (most negative) scores and therefore sit at index 0 of the sorted set.
To keep the view counts current, we periodically trim the sorted set and scale down the scores of the remaining elements, so that newly popular items also get a chance to appear on the leaderboard.

def rescale_viewed(conn):
    while not QUIT:
        # remove every item that is not among the 20,000 most viewed
        conn.zremrangebyrank('viewed:', 20000, -1)
        # halve all view counts by intersecting the set with itself at weight 0.5
        conn.zinterstore('viewed:', {'viewed:': .5})
        # sleep 5 minutes before rescaling again
        time.sleep(300)

Finally, modify the can_cache() function used by the web cache:

def can_cache(conn, request):
    # try to get the item ID from the page
    item_id = extract_item_id(request)
    # if this is not an item page, or the page is dynamic, it cannot be cached
    if not item_id or is_dynamic(request):
        return False
    # get the item's rank by number of views
    rank = conn.zrank('viewed:', item_id)
    # only cache pages for the 10,000 most viewed items
    return rank is not None and rank < 10000
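extract_item_id() and is_dynamic() are left to the application; hypothetical stand-ins, assuming the request is a plain URL string, could look like this:

from urllib.parse import parse_qs, urlparse

def extract_item_id(request):
    # hypothetical: pull an item ID such as 'item17' out of the query string
    query = parse_qs(urlparse(request).query)
    return (query.get('item') or [None])[0]

def is_dynamic(request):
    # hypothetical: treat any request carrying a '_' parameter as dynamic content
    return '_' in parse_qs(urlparse(request).query)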