In the last chapter, five kinds of data structures were introduced briefly, along with some use cases for each. Now it's time to look at some more advanced, but still common, themes and design patterns.

1. Big O Notation

A time complexity of O(1) is considered the fastest: whether we are dealing with 5 elements or 5 million, we get the same performance. The sismember command, which tells us whether a value belongs to a set, has a time complexity of O(1). Part of what makes sismember so useful is this efficient performance characteristic, and many Redis commands are O(1).

Logarithmic time complexity, O(log(N)), is considered the second fastest; it stays fast because the interval to be searched is repeatedly shrunk. With this "divide and conquer" approach, even a very large number of elements can be worked through in just a few iterations. The zadd command is O(log(N)), where N is the number of elements already in the sorted set.

Next comes linear time complexity, O(N). Searching a non-indexed column of a table is an O(N) operation. The ltrim command is also O(N), but here N is not the number of elements in the list; it is the number of elements being removed. So removing a single element from a list of millions with ltrim is faster than removing ten elements from a list of a thousand with ltrim.

The zremrangebyscore command, which removes elements from a sorted set whose scores fall between a given minimum and maximum, has a time complexity of O(log(N)+M). That may look a bit messy, but reading the documentation shows that N is the total number of elements in the sorted set, while M is the number of elements removed. In other words, for performance, the number of elements being removed is likely to matter more than the total number of elements in the sorted set. (The general form of the command is zremrangebyscore key min max.)
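To make these complexities a little more concrete, here is a minimal sketch that exercises the commands just mentioned, with their documented complexities as comments. It uses the Python redis-py client, which is my choice for illustration only; any client exposes the same commands, and the key names are made up.

import redis

r = redis.Redis(decode_responses=True)

r.sadd('powers', 'leto', 'paul', 'jessica')
r.sismember('powers', 'paul')                  # O(1): same cost for 5 or 5 million members

r.zadd('scores', {'ghanima': 1, 'chani': 2})   # O(log(N)) per member added

r.rpush('names', 'a', 'b', 'c', 'd')
r.ltrim('names', 0, 1)                         # O(N), where N is the number of elements removed

r.zremrangebyscore('scores', 0, 1)             # O(log(N) + M), where M is the number removed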
The sort command has a time complexity of O(N+M*log(M)); we'll go into the details in the next chapter. Judging from its performance characteristics, it is arguably the most complex command in Redis.
There are other time-complexity classes, including O(N^2) and O(C^N), where performance degrades rapidly as N grows. No Redis command has these kinds of time complexity.
It is also worth pointing out that when we find ourselves relying on an operation that is O(N) in Redis, there is often a better way to do it.
(Translator's note: most readers are probably already familiar with Big O notation. The original text only gives a brief introduction to it, and with my limited algorithm background and writing ability this section was a struggle to translate; the result is far from satisfying, so please bear with me.)
2. Pseudo Multi-Keyword Queries

Often you will want to query the same value by different keywords; for example, you may want to fetch user information by user ID, but also by user name. One very inefficient solution is to store the user object in two separate string values:

set users:leto xxxx
set users:9001 xxxx

This works, but it doubles the memory used, and keeping the two copies in sync is a maintenance nightmare. Redis actually offers a solution: the hash. Using the hash data structure, we can avoid the duplication:

set users:9001 "{id: 9001, email: leto@dune.gov, ...}"
hset users:lookup:email leto@dune.gov 9001
In short, the hash acts as a secondary index that maps an email to the ID of the real record:
id = redis.hget('users:lookup:email', 'leto@dune.gov')  # first look up the ID via the hash
user = redis.get("users:{id}")                           # then fetch the user data by ID
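Putting the pattern together, a minimal sketch might look like the following. It assumes the Python redis-py client; the key names, the email address, and the JSON layout are placeholders for illustration.

import json
import redis

r = redis.Redis(decode_responses=True)

# Store the user once, keyed by id, and keep a hash as a secondary index.
user = {'id': 9001, 'email': 'leto@dune.gov'}
r.set('users:9001', json.dumps(user))
r.hset('users:lookup:email', user['email'], user['id'])

# Query by email: the hash gives us the id, and the id gives us the data.
user_id = r.hget('users:lookup:email', 'leto@dune.gov')
data = json.loads(r.get('users:' + user_id))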
3. References and Indexes
We have already seen a few use cases for values referencing other values: when describing the list data structure, and again above, where a hash was used to make queries a bit more flexible. Generalizing, indexes and references between values have to be managed by hand in Redis. Honestly, that can be a bit frustrating, especially when you consider that managing, updating, and deleting those references all has to be done manually. Redis offers no good built-in solution to this problem.
As we have seen, the set data structure is often used to implement this kind of index:
sadd friends:leto ghanima paul chani jessica
Each member of this set is a reference to a Redis string value, and each referenced value holds the details of a user object. So what should happen if chani changes her name or deletes her account? It may also help to look at the relationship from the other direction, since we know who counts chani as a friend:

sadd friends_of:chani leto paul

If you maintain indexes like the ones above, then on top of the maintenance cost there is also a processing and storage cost for the extra index values, which may make you want to shy away from them. In the next section we will talk about ways to reduce the performance cost of all this extra data interaction.

4. Round Trips and Pipelining

Many commands accept one or more parameters, and there is usually a related command that accepts multiple sets of parameters. We saw earlier, for example, that the mget command takes multiple keywords and returns their values, and that sadd can add several members at once:

sadd friends:chani pater luch

Redis also supports pipelining. Normally, when a client sends a request to Redis, it must wait for the reply before sending the next request. With pipelining, you can send multiple requests without waiting for the responses, which reduces network overhead and can bring a significant performance improvement. It is worth mentioning that Redis uses memory to queue up the pipelined commands, so it is a good idea to send them in reasonably sized batches; a small sketch follows.
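Here is a rough sketch of what pipelining looks like from a client, again assuming the Python redis-py client; other clients expose an equivalent feature under a similar name.

import redis

r = redis.Redis(decode_responses=True)

# Without a pipeline, each of these commands would be its own network round trip.
pipe = r.pipeline(transaction=False)   # plain pipelining, no MULTI/EXEC wrapper
for i in range(1000):
    pipe.sadd('friends:chani', 'user:%d' % i)
results = pipe.execute()               # sent as one batch; one reply per queued command

Since Redis buffers the queued commands in memory, a few reasonably sized batches are better than one enormous one.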
5. Transactions

Every Redis command is atomic, including the ones that do several things at once. In addition, Redis supports transactions when using multiple commands.

You may not know it, but Redis is actually single-threaded, which is how every Redis command is guaranteed to be atomic. While one command is executing, no other command will run. (We'll talk briefly about scaling in a later chapter.) This is particularly useful when you consider that some single commands do several things. For example:
· The incr command is effectively a get followed by a set.
· The getset command sets a new value and returns the original value.
· The setnx command first checks whether the keyword exists, and only sets the value if it does not (a small sketch of these follows this list).
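A quick sketch of those three commands, assuming the Python redis-py client and a made-up keyword:

import redis

r = redis.Redis(decode_responses=True)

r.set('powerlevel', 9000)
r.incr('powerlevel')                  # atomic get + set: returns 9001
old = r.getset('powerlevel', 0)       # sets 0 and returns the previous value
created = r.setnx('powerlevel', 1)    # False: the key already exists, so nothing is set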
While these are useful, in real-world development it is often necessary to run a group of commands as one atomic unit. To do so, you first issue the multi command, then all of the commands you want to run as part of the transaction, and finally either exec to actually execute them or discard to throw them away. What do Redis's transactional capabilities guarantee?
· The commands in the transaction will be executed sequentially.
· The commands in the transaction will be executed as a single atomic operation (no other client's command will be executed halfway through).
· The commands in a transaction are either all executed or none are.

Finally, Redis allows you to specify a keyword (or several keywords) to watch, and to apply the transaction conditionally: if a watched keyword changes, the transaction fails. This is used when you need to read a value and the commands you want to run depend on that value, all within one transaction. For the code shown above, we cannot implement our own incr command this way, because once exec is called the queued commands are all executed in one go. We can't do this:
redis.multi()
current = redis.get('powerlevel')
redis.set('powerlevel', current + 1)
redis.exec()
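To see why this cannot work, here is roughly what a real client does, assuming the Python redis-py client: inside a transaction the commands are only queued, so the get has no value to hand back before exec runs.

import redis

r = redis.Redis(decode_responses=True)

pipe = r.pipeline()              # transactional: wrapped in MULTI/EXEC when executed
queued = pipe.get('powerlevel')  # nothing runs yet, so there is no value to add 1 to
results = pipe.execute()         # all queued commands run together; replies only arrive here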
Although Redis is single-threaded, we can run multiple Redis client processes at the same time, so the usual concurrency problems still occur. In the code above, the value of powerlevel may be changed by another Redis client after the get runs but before the set runs, producing a wrong result. We therefore tell Redis to watch the powerlevel keyword; if it changes, the transaction fails.
redis.watch('powerlevel')
current = redis.get('powerlevel')
redis.multi()
redis.set('powerlevel', current + 1)
redis.exec()
After we call watch, if another client changes the value of powerlevel, our transaction will fail to run. If no client changes the value, the transaction works as normal. We can run the code in a loop until it succeeds, which is very practical.
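A sketch of that retry loop, assuming the Python redis-py client; the key name and the helper function are purely illustrative.

import redis

r = redis.Redis(decode_responses=True)

def safe_incr(key):
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                # fail the transaction if the key changes
                current = int(pipe.get(key) or 0)
                pipe.multi()                   # start queueing the transactional part
                pipe.set(key, current + 1)
                pipe.execute()                 # raises WatchError if the key was modified
                return current + 1
            except redis.WatchError:
                continue                       # another client changed the key; try again

safe_incr('powerlevel')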
6. Keyword Anti-Patterns

In the next chapter we will discuss commands that are not tied to any particular data structure, some of which are administrative or debugging tools. There is one command that is useful when debugging or tracking down a bug: keys. This command takes a pattern and finds all of the matching keywords, but because it works by linearly scanning every keyword, it is too slow and too expensive to use in production code. It is fine for debugging, though; for example, to inspect the bug-tracking keys for account 1233 you could run:

keys bug:1233*

(the * is a wildcard)

Summary

Taken together with the previous chapter, hopefully this gives you some sense of how to use Redis to power real projects. There are other patterns you can use to build all sorts of things, but the real key is to understand the fundamental data structures and to see how they can be used to achieve things beyond what you initially imagined.