Redis from simple to deep

1. What is Redis?

Redis is a NoSQL key-value store (essentially one huge map). It is single-threaded, yet it can handle a very large number of concurrent requests per second because all data lives in memory.

Operating Redis from Java is similar to how JDBC defines an interface standard for MySQL: there are various implementations, Druid being a commonly used one.

For Redis, we usually use Jedis (which also provides a connection pool, JedisPool).

In Redis, the key is a byte[] (a string).

Redis data structures (for the value):

string, list, set, sorted set (zset), hash

Each data structure has its own set of commands.
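
A minimal Jedis sketch (the key names below are made up for illustration) showing one command for each data structure:

    import redis.clients.jedis.Jedis;

    public class DataStructureDemo {
        public static void main(String[] args) {
            // Assumes a local Redis instance on the default port 6379
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.set("page:title", "hello");                // string
                jedis.lpush("recent:logins", "userA", "userB");  // list
                jedis.sadd("tags:redis", "cache", "nosql");      // set
                jedis.zadd("rank:score", 99.5, "userA");         // sorted set (zset)
                jedis.hset("user:1", "name", "Tom");             // hash
                System.out.println(jedis.get("page:title"));
            }
        }
    }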

2. How to use Redis?

Install Redis first and start it, add the dependency to the POM file, configure the fully qualified class name of the Redis cache implementation in the Mapper.xml of each class you want cached, and add a redis.properties file (edit it if you need to change the configuration).
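
As a rough sketch of the client-side part of this setup (the property names in redis.properties below are assumptions, not the keys of any particular project), reading the configuration and building a JedisPool could look like this:

    import java.io.InputStream;
    import java.util.Properties;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    public class RedisPoolSetup {
        public static void main(String[] args) throws Exception {
            // Hypothetical redis.properties on the classpath with host/port/maxTotal entries
            Properties props = new Properties();
            try (InputStream in = RedisPoolSetup.class.getClassLoader()
                    .getResourceAsStream("redis.properties")) {
                props.load(in);
            }
            JedisPoolConfig config = new JedisPoolConfig();
            config.setMaxTotal(Integer.parseInt(props.getProperty("redis.maxTotal", "8")));
            JedisPool pool = new JedisPool(config,
                    props.getProperty("redis.host", "localhost"),
                    Integer.parseInt(props.getProperty("redis.port", "6379")));
            try (Jedis jedis = pool.getResource()) {
                System.out.println(jedis.ping());  // prints PONG if the connection works
            }
            pool.close();
        }
    }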

3. Application scenarios:

String:

1. Storing JSON objects; 2. counters; 3. video like counts (for example, Youku video likes).

List:

1. A Redis list can be used to simulate a queue or a stack.

2. Likes in WeChat Moments (a post in Moments and the users who liked it).

Rule: format for storing Moments content:

1. Content: a key such as user:X:post:X stores the post content;

2. Likes: a key such as post:X:good stores the list of users who liked the post (retrieve each liker's avatar and display it; see the sketch below).
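
A minimal sketch of this Moments example with Jedis; the key names follow the user:X:post:X and post:X:good format above, and the user IDs are invented:

    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class MomentsLikesDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Store the post content under user:1:post:100
                jedis.set("user:1:post:100", "Had a great day!");
                // Each like pushes the liker's user id onto the post's like list
                jedis.lpush("post:100:good", "user:2", "user:3");
                // Read back the likers so their avatars can be looked up and displayed
                List<String> likers = jedis.lrange("post:100:good", 0, -1);
                System.out.println("Post: " + jedis.get("user:1:post:100"));
                System.out.println("Liked by: " + likers);
            }
        }
    }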

Hash (like a HashMap):

1. Storing an object (each field of the object maps to a field of the hash; see the sketch below).

2. Grouping related data.
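
A small sketch of storing an object as a hash; the user:1 key and the field names are illustrative only:

    import java.util.Map;
    import redis.clients.jedis.Jedis;

    public class HashObjectDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Each field of the object becomes a field of the hash
                jedis.hset("user:1", "name", "Tom");
                jedis.hset("user:1", "age", "25");
                jedis.hset("user:1", "city", "Hangzhou");
                Map<String, String> user = jedis.hgetAll("user:1");
                System.out.println(user);  // e.g. {name=Tom, age=25, city=Hangzhou}
            }
        }
    }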

4. Why is single-threaded Redis so fast?

1. Data is stored in memory.

2. It uses multiplexed (non-blocking) I/O.

3. It is single-threaded, which avoids the cost of thread context switching and locking.

5. Can Redis publish and subscribe to messages?

Yes. (This also leads into Sentinel mode, discussed later: every two seconds each sentinel node publishes its judgment of the master node, together with its own information, to a channel, and every sentinel subscribes to that channel to learn about the other sentinel nodes and the master/slave nodes, so that they can monitor one another and the master/slave nodes.) Compared with dedicated message queue systems such as Kafka and RocketMQ, Redis publish/subscribe is rather rough; for example, it offers no message accumulation or replay. But it is simple enough.
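
A minimal publish/subscribe sketch with Jedis (the channel name "news" is made up); subscribe() blocks, so the subscriber runs on its own thread here:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class PubSubDemo {
        public static void main(String[] args) throws Exception {
            // Subscriber thread: blocks inside subscribe() and prints incoming messages
            Thread subscriber = new Thread(() -> {
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    jedis.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            System.out.println("Received on " + channel + ": " + message);
                        }
                    }, "news");
                }
            });
            subscriber.start();
            Thread.sleep(500);  // crude wait so the subscription is registered first

            // Publisher: any client can publish to the channel
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.publish("news", "hello subscribers");
            }
        }
    }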

 

 

Redis problems and solutions:

6. Can Redis persist data, and how?

Yes. Data in memory is written to disk asynchronously in two ways: RDB (the default) and AOF.

RDB persistence principle: triggered by the bgsave command. The parent process forks a child process; the child process writes an RDB file, a snapshot of the parent process's memory at that moment, and when it finishes it atomically replaces the old file (in other words, a snapshot of all data is generated periodically and stored on disk).

Advantage: the RDB file is a compact, compressed binary file, and Redis restores data from an RDB file much faster than from an AOF file.

Disadvantage: generating an RDB file is expensive each time, so it is not real-time persistence.

AOF persistence principle: once enabled, every time Redis executes a command that modifies data, it appends that command to the AOF file.

Advantage: real-time (per-command) persistence.

Disadvantages: the AOF file keeps growing, so it must be rewritten periodically to shrink it, and loading it is slower.
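
As a rough illustration (not a tuning recommendation), both persistence mechanisms can be driven from a client; Jedis exposes the corresponding commands:

    import redis.clients.jedis.Jedis;

    public class PersistenceDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // RDB: ask Redis to fork a child and write a snapshot in the background
                System.out.println(jedis.bgsave());
                // AOF: enable append-only persistence at runtime (CONFIG SET appendonly yes)
                jedis.configSet("appendonly", "yes");
                // AOF: trigger a background rewrite to shrink the growing AOF file
                System.out.println(jedis.bgrewriteaof());
            }
        }
    }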

7. In master-slave replication mode, what happens if the master fails? Redis provides Sentinel mode (high availability).

What is Sentinel mode? Sentinel nodes monitor the master node, the slave nodes, and the other sentinel nodes, and automatically perform a failover when the master node is found to be faulty.
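
On the client side, a minimal sketch of connecting through sentinels with JedisSentinelPool (the master name "mymaster" and the sentinel addresses are assumptions):

    import java.util.HashSet;
    import java.util.Set;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.JedisSentinelPool;

    public class SentinelClientDemo {
        public static void main(String[] args) {
            // Addresses of the sentinel nodes, not of the master itself
            Set<String> sentinels = new HashSet<>();
            sentinels.add("127.0.0.1:26379");
            sentinels.add("127.0.0.1:26380");
            sentinels.add("127.0.0.1:26381");
            // "mymaster" must match the master name configured for the sentinels
            JedisSentinelPool pool =
                    new JedisSentinelPool("mymaster", sentinels, new JedisPoolConfig());
            try (Jedis jedis = pool.getResource()) {
                // The pool asks the sentinels who the current master is, so after a
                // failover new connections automatically go to the new master
                jedis.set("foo", "bar");
                System.out.println(jedis.get("foo"));
            }
            pool.close();
        }
    }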

8. How does Sentinel mode work? (Available in Redis 2.8 or later.)

1. Three scheduled monitoring tasks:

1.1 Every 10 seconds, each sentinel node sends the INFO command to the master and slave nodes to obtain the latest topology.

1.2 Every 2 seconds, each sentinel node publishes its judgment of the master node, together with its own information, to a channel; at the same time, each sentinel node subscribes to this channel to learn about the other sentinel nodes and their judgment of the master node (this is the basis for the objective offline decision).

1.3 Every 1 second, each sentinel node sends a PING command to the master node, the slave nodes, and the other sentinel nodes as a heartbeat check, to verify that these nodes are reachable.

2. Subjective offline and objective offline:

2.1 Subjective offline: based on the third scheduled task, a node that does not reply validly within the timeout is marked as subjectively offline.

2.2 Objective offline: if the node marked offline is the master node, the sentinel asks the other sentinel nodes for their judgment of the master; when more than half agree, the master is marked objectively offline.

3. One sentinel node is elected as the leader to carry out the failover. Election method: the Raft algorithm. Each sentinel node has one vote; when a sentinel marks the master subjectively offline, it asks the other sentinel nodes to vote it leader, and the one that receives more than half of the votes becomes the leader. In practice, the sentinel that first detects the objective offline usually becomes the leader.

4. Failover (the process of electing a new master node).

9. What is the principle of Redis Cluster (virtual slots, high availability)? (Similar in spirit to Sentinel mode; available only in Redis 3.0 or later.)

1. Nodes in a Redis cluster communicate with one another through ping/pong messages. These messages carry not only slot information but also other state, such as master/slave roles and node failures, so fault discovery is implemented through this message exchange. The main stages are subjective offline (pfail) and objective offline (fail).

2. Subjective offline and objective offline:

2.1 Subjective offline: each node in the cluster periodically sends PING messages to other nodes and expects PONG messages as replies. If communication fails, the sending node marks the receiving node as pfail.

2.2 Objective offline: when more than half of the master nodes mark a master as pfail, it is marked objectively offline (fail).

3. A leader is elected to carry out the failover (the slaves of the failed master request votes from the other master nodes).

4. Failover (a slave node is promoted to be the new master node).
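
On the client side, a minimal sketch of talking to a Redis Cluster with JedisCluster (the node addresses are assumptions); the client learns the slot-to-node mapping and routes each key to the node that owns its virtual slot:

    import java.util.HashSet;
    import java.util.Set;
    import redis.clients.jedis.HostAndPort;
    import redis.clients.jedis.JedisCluster;

    public class ClusterClientDemo {
        public static void main(String[] args) throws Exception {
            // Seed nodes; the client discovers the rest of the cluster from them
            Set<HostAndPort> nodes = new HashSet<>();
            nodes.add(new HostAndPort("127.0.0.1", 7000));
            nodes.add(new HostAndPort("127.0.0.1", 7001));
            JedisCluster cluster = new JedisCluster(nodes);
            // The key is hashed to one of the 16384 virtual slots and sent to that node
            cluster.set("foo", "bar");
            System.out.println(cluster.get("foo"));
            cluster.close();
        }
    }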

10. Cache update policy (that is, how do we keep the cache consistent with MySQL)?

10.1 Key expiration (timeout removal) policy

Lazy expiration (like lazy loading): only when a key is accessed does Redis check whether it has expired and, if so, clear it. This policy saves CPU as much as possible but is very unfriendly to memory: in the extreme case, a large number of expired keys that are never accessed again are never cleared and keep occupying memory.

Periodic expiration: at fixed intervals, a certain number of keys in the expires dictionaries of a certain number of databases are scanned and the expired ones are cleared. This is a compromise between the two extremes; by tuning the interval of the scheduled scan and the time limit of each scan, the best balance between CPU and memory can be reached for different situations.

(The expires dictionary holds the expiration times of all keys that have an expiration set: the key is a pointer to a key in the key space, and the value is that key's expiration time in milliseconds. The key space is the set of all keys stored in the Redis database.)
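
A small Jedis sketch showing expiration from the client's point of view (the key names and TTL values are made up):

    import redis.clients.jedis.Jedis;

    public class ExpireDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Set a key that expires after 300 seconds (5 minutes)
                jedis.setex("session:abc", 300, "user:1");
                // Or add an expiration to an existing key
                jedis.set("cache:home", "<html>...</html>");
                jedis.expire("cache:home", 60);
                // TTL reports the remaining seconds; -2 means the key no longer exists
                System.out.println(jedis.ttl("session:abc"));
            }
        }
    }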

Q: Scenario: I created many keys with an expiration time of 5 minutes, and memory usage was 50%. Five minutes later, memory usage is still very high. Why?

Redis uses both lazy expiration and periodic expiration. Even after the expiration time has passed, some expired keys have not actually been deleted yet; they are waiting to be deleted lazily.

Why do we need both periodic and lazy expiration? It is simple: suppose 100,000 keys are about to expire. Redis checks in waves (by default roughly every 100 ms). If it tried to clear all 100,000 at once, almost all of its time would be spent clearing memory, which would certainly hurt performance, so it deletes only part of them each round and leaves the rest to lazy expiration.

10.2 Redis memory eviction policy

The Redis memory eviction policy determines how Redis handles new writes that need extra space when the cache memory is insufficient.

noeviction: when memory is insufficient to hold newly written data, the write operation returns an error.

allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the whole key space.

allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the whole key space.

volatile-lru: when memory is insufficient to hold newly written data, remove the least recently used key from among the keys that have an expiration time set.

volatile-random: when memory is insufficient to hold newly written data, remove a random key from among the keys that have an expiration time set.

volatile-ttl: when memory is insufficient to hold newly written data, preferentially remove the keys with the earliest expiration time from among the keys that have an expiration time set.
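
A small sketch of picking an eviction policy at runtime via CONFIG SET (the 100mb limit and the policy choice are only examples):

    import redis.clients.jedis.Jedis;

    public class EvictionPolicyDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Cap memory and evict the least recently used key from the whole key space
                jedis.configSet("maxmemory", "100mb");
                jedis.configSet("maxmemory-policy", "allkeys-lru");
                System.out.println(jedis.configGet("maxmemory-policy"));
            }
        }
    }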

11. Cache granularity control?

12. How to prevent cache penetration?

(Cache penetration means querying data that does not exist at all: the cache layer misses and the storage layer misses too. If a large number of requests query non-existent data, the storage layer comes under great pressure; if it is a malicious attack, the consequences can be serious.)

12.1 Caching empty objects:
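
A minimal sketch of caching empty objects, assuming a hypothetical storage-layer lookup loadFromDb and an illustrative key prefix; the short TTL keeps non-existent keys from occupying memory for long:

    import redis.clients.jedis.Jedis;

    public class CacheNullDemo {
        // Hypothetical storage-layer lookup; returns null when the record does not exist
        static String loadFromDb(String id) {
            return null;
        }

        static String getUser(Jedis jedis, String id) {
            String key = "user:" + id;
            String cached = jedis.get(key);
            if (cached != null) {
                // An empty string marks a known-missing record, so the DB is not hit again
                return cached.isEmpty() ? null : cached;
            }
            String value = loadFromDb(id);
            if (value == null) {
                // Cache the empty result briefly so repeated misses cannot hammer the storage layer
                jedis.setex(key, 60, "");
                return null;
            }
            jedis.setex(key, 300, value);
            return value;
        }

        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                System.out.println(getUser(jedis, "42"));
            }
        }
    }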

12.2 Bloom filter:

 

Problems with the Bloom filter: its code is relatively complex. At this stage we do not need it yet; we should consider at which stage to introduce it later. A system does not have to be perfect in every respect from the start.

13. "Bottomless pit" optimization?

Cause: as Redis deployments become more distributed, performance can degrade, because keys are spread across more nodes. Whether with memcached or Redis, a distributed batch operation usually has to fetch data from several nodes; compared with a batch operation on a single host, which needs only one network round trip, a distributed batch operation involves multiple round trips. In other words, adding more nodes does not automatically mean better performance.
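
One common mitigation is to reduce the number of network round trips per node, for example by pipelining the commands sent to each node; a minimal single-node sketch (the key names are made up):

    import java.util.ArrayList;
    import java.util.List;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Pipeline;
    import redis.clients.jedis.Response;

    public class PipelineDemo {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                // Queue many commands and send them in a single network round trip
                Pipeline pipeline = jedis.pipelined();
                List<Response<String>> replies = new ArrayList<>();
                for (int i = 0; i < 100; i++) {
                    replies.add(pipeline.get("key:" + i));
                }
                pipeline.sync();  // flush the queued commands and read all replies
                System.out.println(replies.get(0).get());  // null if key:0 does not exist
            }
        }
    }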

14. Cache avalanche optimization

If the cache layer cannot provide service for some reason, all requests go straight to the storage layer; its call volume surges, and the storage layer may go down in a cascade.

15. Hotspot key optimization

A hot key (for example, a popular piece of entertainment news) receives a very large amount of concurrent access.

 


