Original address: http://antirez.com/post/take-advantage-of-redis-adding-it-to-your-stack.html
Redis differs from other database solutions in many ways: it uses memory as its main storage and the disk only for persistence; its data model is quite unique; and it is single threaded. Another big difference is that you can take advantage of Redis features in your production environment without having to switch to Redis entirely.
Switching to Redis outright is certainly desirable too, and many developers use Redis as their preferred database from the outset. But if your environment is already built and your application is already running on it, replacing the database layer is obviously not easy. Also, Redis is not appropriate for applications that need very large datasets, because a Redis dataset cannot exceed the memory available to the system. So if you have a big-data application with a mostly-read access pattern, Redis is not the right choice.
What I like about Redis, though, is that you can add it to your existing system to solve a lot of problems, such as the tasks your existing database handles too slowly. That way you can use Redis to optimize things, or to build new features for the application. In this article I want to explore how to add Redis to an existing environment and use its primitive commands to solve some common problems encountered in a traditional setup. In these examples Redis is not the primary database.

1. Showing the latest items
The following kind of statement is commonly used to display the most recent items; as data grows, there is no doubt that the query will become slower and slower.
SELECT * FROM foo WHERE ... ORDER BY time DESC LIMIT 10
In web applications, queries like "list the latest replies" are very common, and they often lead to scalability problems. This is frustrating, because the items were created in exactly this order, yet we still have to sort them to output them in this order.
Redis can solve this kind of problem. For example, suppose one of our web applications wants to list the latest 20 comments posted by users, with a "show all" link next to the latest comments that leads to more of them.
We assume every comment in the database has a unique, auto-incrementing ID field.
We can make the home page and the paginated comment pages work with the following Redis pattern:
- Every time a new comment is published, we push its ID onto a Redis list:
LPUSH latest.comments <ID>
- We then trim the list to a fixed length, so Redis only keeps the latest 5,000 comment IDs:
LTRIM latest.comments 0 5000
- Every time we need a range of the latest comments, we call a function that does the following (in pseudocode):
FUNCTION get_latest_comments(start, num_items):
    id_list = redis.lrange("latest.comments", start, start + num_items - 1)
    IF id_list.length < num_items
        id_list = SQL_DB("SELECT ... ORDER BY time DESC LIMIT ...")
    END
    RETURN id_list
END
What we do here is very simple. The cache of latest IDs in Redis is permanently updated, but we limited it to at most 5,000 IDs, so our function asks Redis first and only needs to access the database when the start/count parameters request something out of that range.
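A concrete sketch of this pattern with the redis-py client (here `r` stands for a `redis.Redis()` connection, and `sql_fetch_comment_ids` is a hypothetical fallback into your SQL database):

```python
# Sketch of the "latest comments" cache; "r" is any redis-py-compatible
# client, e.g. r = redis.Redis(). Key name follows the article.
MAX_CACHED = 5000

def push_comment(r, comment_id):
    # Newest IDs go to the head of the list...
    r.lpush("latest.comments", comment_id)
    # ...and the tail is trimmed so only the latest IDs stay cached.
    r.ltrim("latest.comments", 0, MAX_CACHED)

def get_latest_comments(r, start, num_items):
    ids = r.lrange("latest.comments", start, start + num_items - 1)
    if len(ids) < num_items:
        # The requested range falls outside the cache: fall back to SQL.
        ids = sql_fetch_comment_ids(start, num_items)
    return ids

def sql_fetch_comment_ids(start, num_items):
    # Hypothetical helper: SELECT id FROM comments ORDER BY time DESC LIMIT ...
    raise NotImplementedError
```

Note that LPUSH followed by LTRIM runs on every new comment, which is what keeps the cached list at a fixed size.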
Our system never "refreshes" this cache as a traditional one would; the information in the Redis instance is always consistent. The SQL database (or whatever on-disk database you use) is only triggered when the user asks for data "far away" in the past; the home page and the first comment pages never bother the on-disk database at all.

2. Deletion and filtering
We can delete a comment from the list with LREM. If deletions are very rare, another option is to simply skip the entry when rendering and report that the comment no longer exists.
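With redis-py that deletion might look like this (again `r` stands for a `redis.Redis()` client; the key name is the one used for the latest-comments pattern):

```python
def delete_comment(r, comment_id):
    # count=1: remove at most one matching element, scanning head to tail.
    r.lrem("latest.comments", 1, comment_id)
```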
Sometimes you want to attach different filters to different lists. If the number of filters is limited, you can simply use a separate Redis list for each filter: after all, each list holds only 5,000 items, and Redis can handle millions of items with very little memory.

3. Leaderboards and related problems
Another common requirement, keeping data sorted by score with the scores updated in real time, almost every second, is something on-disk databases handle poorly.
The typical example is the leaderboard of an online game, say a Facebook game, where you usually want to:
- List the top 100 players by high score
- Show a given user's current global rank
These operations are a piece of cake for Redis, even if you have millions of users producing millions of new scores every minute.
The pattern is this: every time a new score comes in, we run:
ZADD leaderboard <score> <username>
You might use a user ID instead of the username, depending on your design.
Getting the top 100 scorers is very simple: ZREVRANGE leaderboard 0 99.
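With redis-py, the leaderboard operations might be wrapped like this (a sketch; `r` stands for a `redis.Redis()` client, and ZREVRANK is used for the rank query so that position 0 means the top player):

```python
def add_score(r, username, score):
    # ZADD keeps the set ordered by score (redis-py 3.x takes a mapping).
    r.zadd("leaderboard", {username: score})

def top_100(r):
    # Highest scores first, scores included.
    return r.zrevrange("leaderboard", 0, 99, withscores=True)

def rank_of(r, username):
    # ZREVRANK counts from the top: 0 is the best player, so add 1
    # to get a human-friendly rank.
    pos = r.zrevrank("leaderboard", username)
    return None if pos is None else pos + 1
```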
Telling a user their global rank is similarly easy: ZRANK leaderboard <username>.

4. Order by user votes and time
A common variant of this listing, used by sites like Reddit or Hacker News, ranks items by score according to a formula similar to:
score = points / time^alpha
So users' votes dig a news item up, while time buries it according to some exponent. This is our pattern; the exact algorithm is of course up to you.
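As a minimal sketch in Python, such a scoring function might look like this (the default alpha and the hour-based age unit are arbitrary choices of this example, not prescribed by the pattern):

```python
import time

def score(points, posted_at, alpha=1.5, now=None):
    """score = points / age^alpha: votes dig an item up, age buries it."""
    now = time.time() if now is None else now
    # Age in hours; clamp so brand-new items do not divide by ~zero.
    age_hours = max((now - posted_at) / 3600.0, 0.01)
    return points / (age_hours ** alpha)
```

A background task would recompute this for the candidate items and feed the results to ZADD.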
The pattern starts from the observation that only recent items can be on the front page: for example, the latest 1,000 news items are the candidates, so we can simply ignore the rest. This is easy to implement.
- Every time a new news item is posted, we push its ID onto a list, using LPUSH + LTRIM to make sure only the latest 1,000 IDs are kept.
- A background task fetches this list and continually computes the final score of each of the 1,000 news items. The results populate a sorted set, in the new order, via the ZADD command, and old news items are removed. The key idea here is that the sorting work is done by a background task.

5. Handling expired items
Another common way to sort items is by time. We use the Unix time as the score.
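A sketch of this expiry index in Python, using ZRANGEBYSCORE as one possible way to find expired entries (`r` stands for a redis-py client; the key name and the `delete_from_database` helper are made-up placeholders):

```python
def add_item(r, item_id, current_time, time_to_live):
    # Index the item under its expiry time (a Unix timestamp as score).
    r.zadd("expiring.items", {item_id: current_time + time_to_live})

def purge_expired(r, current_time):
    # Everything whose score (expiry time) is already in the past.
    expired = r.zrangebyscore("expiring.items", 0, current_time)
    for item_id in expired:
        delete_from_database(item_id)
    r.zremrangebyscore("expiring.items", 0, current_time)
    return expired

def delete_from_database(item_id):
    # Hypothetical: remove the item from your primary (SQL) database.
    pass
```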
The pattern is as follows:
- Every time a new item is added to our non-Redis database, we also add it to a sorted set, using as its score the time at which it should expire: current_time + time_to_live.
- Another background task queries the sorted set, for instance with ZRANGE ... WITHSCORES, to take the oldest 10 items. If their Unix times are already in the past, the items are deleted from the database.

6. Counting
Redis is a good counter, thanks to INCRBY and the other similar commands.
I am sure you have tried many times to add new counters to your database, to gather statistics or to display new information, only to give them up in the end because they are too write-intensive.
Well, with Redis you don't need to worry anymore. With atomic increments you can safely add all kinds of counters, reset them with GETSET, or let them expire.
For example:

INCR user:<id>
EXPIRE user:<id> 60

This counts the number of page views for which the user paused no more than 60 seconds between pages; when the count reaches, say, 20, you can display a banner hint or whatever else you want to show.
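A sketch of this pageview counter in Python (`r` stands for a redis-py client; the key format and the banner threshold of 20 follow the example above):

```python
def track_pageview(r, user_id, threshold=20, window=60):
    key = "user:%s" % user_id
    # Atomically bump the per-user counter...
    views = r.incr(key)
    # ...and restart the idle window on every hit, so the key (and the
    # count) disappears after `window` seconds without a pageview.
    r.expire(key, window)
    # True once the user has viewed `threshold` pages without pausing
    # longer than `window` seconds between them: time to show the banner.
    return views >= threshold
```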