Redis in Action: 5.2.1 Storing counters in Redis


Original: https://redislabs.com/ebook/redis-in-action/part-2-core-concepts-2/chapter-5-using-redis-for-application-support/5-2-counters-and-statistics/5-2-1-storing-counters-in-redis

As with monitoring applications, gathering information about your system only becomes more important over time. Code changes can affect how quickly the site responds and, consequently, how many pages it serves; new advertising campaigns or new users can fundamentally change the number of pages loaded; and any number of other performance indicators will shift as time goes on. But if we never record any metrics, we have no way of knowing that they changed, or whether we're doing better or worse.

To start collecting metrics for viewing and analysis, we'll build a tool for keeping named counters (counters for site hits, sales, or database queries may all be critical). Each counter will store the most recent 120 samples at a variety of time precisions (such as 1 second, 5 seconds, 1 minute, and so on); both the number of samples and the precisions can be customized as necessary. The first step is to actually store the counters themselves.

Updating a counter

In order to update our counters, we need to store the counter information. For each counter and precision, such as the site-hit counter at 5-second precision, we'll use a hash that stores the number of hits per 5-second time slice, with the start time of each slice as the field and the number of hits as the value. Figure 5.1 shows a sample of this data.

When we start using counters, we also need to record which counters have been written to, so that we can clean out old data later. For this we need a sorted sequence that we can iterate over one item at a time, with no repeated members. We could use a LIST combined with a SET, but that would take extra code and extra round trips to Redis. Instead, we'll use a ZSET whose members are the combinations of precision and name, all with a score of 0. Because every score is equal, Redis falls back to sorting by member name, which gives us a fixed order and makes the counters easy to iterate over, as shown in figure 5.2.

Figure 5.1

Figure 5.2

Now that we know what the counter structures look like, how do we update them? For each time precision and name, we add a reference for that precision/name pair to the known: ZSET and increment the appropriate time slice in the matching hash. Here is the code:

# Code source: https://github.com/huangz1990/riacn-code/blob/master/ch05_listing_source.py#L96
import time

# The precisions of the counters, in seconds: 1 second, 5 seconds, 1 minute,
# 5 minutes, 1 hour, 5 hours, 1 day.
PRECISION = [1, 5, 60, 300, 3600, 18000, 86400]

def update_counter(conn, name, count=1, now=None):
    # Get the current time to determine which time slices should be incremented.
    now = now or time.time()
    # Create a transactional pipeline so that later cleanup can work correctly.
    pipe = conn.pipeline()
    # Update a counter for every precision that we record.
    for prec in PRECISION:
        # Get the start time of the current time slice.
        pnow = int(now / prec) * prec
        # Create the name of the hash that stores the count information.
        hash = '%s:%s' % (prec, name)
        # Add the counter's reference information to the sorted set,
        # with a score of 0 so that cleanup can be performed later.
        pipe.zadd('known:', hash, 0)
        # Update the counter for the given name and precision.
        pipe.hincrby('count:' + hash, pnow, count)
    pipe.execute()
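Note that the listing follows the redis-py 2.x calling convention for ZADD (member first, then score); with redis-py 3.0 and later, that call would take a mapping instead, as in pipe.zadd('known:', {hash: 0}). As a hypothetical usage example (the counter names 'hits' and 'sales' are made up for illustration, not taken from the book), calling the function from application code might look like this:

import redis

conn = redis.Redis()

# Record one page view; update_counter() bumps every configured precision at once.
update_counter(conn, 'hits')

# Record a purchase of 3 items against a hypothetical 'sales' counter.
update_counter(conn, 'sales', count=3)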

Updating the counter information isn't too cumbersome: just one ZADD and one HINCRBY call per time precision. Fetching the data for a counter is also simple: we use HGETALL to fetch the entire hash, convert the time slices and the counts from strings back to numbers (Redis returns everything as strings), sort the samples by time, and return the result. Here is the code:

#https://github.com/huangz1990/riacn-code/blob/master/ch05_listing_source.py#l123
def get_counter(conn, name, precision):
    # Build the name of the key where the counter data is stored.
    hash = '%s:%s' % (precision, name)
    # Fetch the counter data from Redis.
    data = conn.hgetall('count:' + hash)
    # Convert the counter data into the expected format.
    to_return = []
    for key, value in data.iteritems():
        to_return.append((int(key), int(value)))
    # Sort the data so that the oldest samples come first.
    to_return.sort()
    return to_return
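The book's listings are written for Python 2 (note data.iteritems()); under Python 3 you would use data.items() instead. As a hypothetical read of the 5-second samples for the made-up 'hits' counter from the earlier example:

# Print each (slice start time, hit count) pair, oldest first.
for start, count in get_counter(conn, 'hits', 5):
    print(start, count)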

We've now done what we set out to do: we fetched the counter data, converted it back to integers, and sorted it by time. Next, let's look at how to keep our counters from holding on to too much data.

Clearing the old counters

At this point we can write to and read from our counters, but if we never clean up after our updates, we'll eventually run out of memory. Because we planned ahead, we recorded every known counter in the known: ZSET; to clean up, we just need to iterate over that listing and prune the old data.

Why not use EXPIRE? One limitation of the EXPIRE command is that it applies only to whole keys; we can't expire part of a key. And because of the structure we chose, all of the data for counter X at precision Y lives in a single key for all time, so we need to clean out old samples periodically. If you feel ambitious, you may want to try restructuring the data so that Redis's standard key expiration can do the work instead.
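To illustrate the limitation, the closest EXPIRE-based approach would apply to the entire hash, not to individual samples inside it (the key name below is just illustrative):

# This would eventually delete the whole 5-second hash for 'hits',
# including samples we still want -- EXPIRE can't drop only the old fields.
conn.expire('count:5:hits', 120 * 5)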

There are a few important things to keep in mind when cleaning up the counters:

    1. New counters can be added at any time.
    2. Multiple cleanup processes may be running at the same time.
    3. Cleaning up daily counters every minute is a waste of effort.
    4. If a counter has no more data, we shouldn't keep trying to clean it up.

With those considerations in mind, we'll build a daemon function similar to the one we wrote back in chapter 2. As before, it loops repeatedly until the system is told to quit. To keep load down during cleanup, we attempt a cleanup pass once per minute, and we clean each counter at roughly the rate at which it's updated: counters with precisions longer than one minute are cleaned at their own interval (a counter with 5-minute precision is cleaned at most once every 5 minutes), while the more frequently updated counters (the 1-second and 5-second ones in our example) are cleaned every minute. A sketch of this scheduling arithmetic follows; the full listing appears after it.
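To make that schedule concrete, here's a small sketch of the arithmetic the cleanup loop uses (the same check appears in the full listing below as passes % bprec):

# bprec is how many one-minute passes to wait between cleanups of a counter.
for prec in [1, 5, 60, 300, 3600, 18000, 86400]:
    bprec = (prec // 60) or 1        # never less than one pass per cleanup
    print('%6d-second counter -> cleaned every %d minute(s)' % (prec, bprec))
# The 1- and 5-second counters are cleaned on every pass; the 300-second
# (5-minute) counter only on every 5th pass, and so on.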

To iterate over the counters, we fetch the known counters from the ZSET one at a time with ZRANGE. To clean up a counter, we fetch all of its sample start times, determine which entries fall before the calculated cutoff time, and remove them. If the counter has no data left after the removal, we also remove its reference from the known: ZSET. That explanation is simple enough, but the code has to handle a few corner cases; check out the listing to see the cleanup function in detail.

# Code source: https://github.com/huangz1990/riacn-code/blob/master/ch05_listing_source.py#l138
# QUIT and SAMPLE_COUNT are module-level globals defined elsewhere in the book's listing file.
import bisect
import time

import redis

def clean_counters(conn):
    pipe = conn.pipeline(True)
    # To handle counters with different update frequencies fairly,
    # the program records how many cleanup passes it has performed.
    passes = 0
    # Keep cleaning counters until told to quit.
    while not QUIT:
        # Record when the cleanup pass started, to calculate how long it takes.
        start = time.time()
        # Incrementally iterate over all known counters.
        index = 0
        while index < conn.zcard('known:'):
            # Get the next counter to check.
            hash = conn.zrange('known:', index, index)
            index += 1
            if not hash:
                break
            hash = hash[0]
            # Get the precision of the counter.
            prec = int(hash.partition(':')[0])
            # Because the cleanup loop runs once every 60 seconds, decide from the
            # counter's update frequency whether it needs cleaning on this pass.
            bprec = int(prec // 60) or 1
            # If this counter doesn't need cleaning on this pass, check the next one.
            # (For example, if the cleanup loop has run three times and the counter
            # is updated every 5 minutes, it doesn't need cleaning yet.)
            if passes % bprec:
                continue

            hkey = 'count:' + hash
            # Based on the precision and the number of samples to keep,
            # work out the time before which samples should be discarded.
            cutoff = time.time() - SAMPLE_COUNT * prec
            # Get the sample start times and convert them from strings to integers.
            samples = map(int, conn.hkeys(hkey))
            # Calculate how many samples need to be removed.
            samples.sort()
            remove = bisect.bisect_right(samples, cutoff)

            # Remove the old samples as needed.
            if remove:
                conn.hdel(hkey, *samples[:remove])
                # The hash may now be empty.
                if remove == len(samples):
                    try:
                        # Watch the counter hash before trying to modify it.
                        pipe.watch(hkey)
                        # Verify that the counter hash is empty; if so, remove it
                        # from the sorted set of known counters.
                        if not pipe.hlen(hkey):
                            pipe.multi()
                            pipe.zrem('known:', hash)
                            pipe.execute()
                            # When a counter is removed, the next iteration can
                            # reuse the same index as the current one.
                            index -= 1
                        else:
                            # The counter hash is not empty, so keep it in the
                            # sorted set of known counters.
                            pipe.unwatch()
                    # Some other program added new data to the counter hash; it is
                    # no longer empty, so keep it in the set of known counters.
                    except redis.exceptions.WatchError:
                        pass

        # To keep the cleanup frequency consistent with the counter update
        # frequency, update the pass counter and compute how long this pass took.
        passes += 1
        duration = min(int(time.time() - start) + 1, 60)
        # If the pass took less than 60 seconds, sleep for the remainder of the
        # minute; if the full 60 seconds was used, sleep for one second instead.
        time.sleep(max(60 - duration, 1))

As described earlier, we iterate over the ZSET of known counters to find what needs cleaning. On each pass we clean only the counters that are due, which is why the frequency check is performed up front. We then fetch the counter's samples, work out which ones are too old, and remove them; after removing everything, we verify that no new data has arrived before removing the counter's reference from the ZSET. Finally, once all counters have been visited, we calculate how long the pass took and sleep for whatever remains of the minute before starting the next pass.
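As a hypothetical way to run this cleanup (mirroring the daemon approach from chapter 2), the loop can be started in a background thread; QUIT is the module-level flag the loop checks:

import threading

# Run the cleanup loop in the background so it doesn't block the application.
cleaner = threading.Thread(target=clean_counters, args=(conn,))
cleaner.daemon = True    # let the thread die when the process exits
cleaner.start()

# ... later, setting the module-level QUIT flag to True stops the loop.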
