Nginx + Redis + Ehcache: a summary of a large-scale, high-concurrency, high-availability three-tier architecture

Source: Internet
Author: User
Tags: redis, cluster

In a production environment, caching is one of the most important parts of a high-concurrency architecture. It can be implemented as a three-tier cache architecture: Nginx + Redis + Ehcache.

The middleware nginx is often used for traffic distribution. At the same time, nginx has its own caching mechanism; its capacity is limited, but we can use it to cache hot data, so that user requests are answered directly from the cache, reducing the traffic that reaches the servers.
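As a minimal sketch, an nginx local hot-data cache like the one described above might be configured as follows (the paths, zone name, TTLs, and the `backend_app` upstream are placeholders, not from the article):

```conf
# Cache hot responses on the nginx node itself, so repeated requests
# are answered locally and never reach the application servers.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=hot_cache:10m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    listen 80;

    location /product/ {
        proxy_cache hot_cache;
        proxy_cache_valid 200 302 10m;   # cache successful responses for 10 minutes
        proxy_cache_valid 404 1m;        # cache misses briefly, too
        proxy_pass http://backend_app;   # placeholder upstream name
    }
}
```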

One: Template engine

Typically, a template engine such as FreeMarker or Velocity can be used to render pages and absorb requests.

A small system may render all pages directly on the server and put them into the cache; subsequent requests for the same page can then be returned directly, without querying the data source or running any data-processing logic.

Two: Two layers of nginx to increase the hit rate

When multiple nginx instances are deployed without any data routing policy, each instance's cache hit rate may be very low. To fix this, deploy nginx in two tiers.

The distribution-layer nginx is responsible for the traffic-distribution logic and policy: according to its own rules, for example a hash of the product ID, it routes each piece of data to a fixed back-end nginx server. The back-end nginx instances then cache their share of the hot data in their own buffers.
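The routing rule above can be sketched in a few lines of Java: hash the product ID so that requests for the same product always land on the same back-end nginx, keeping each back-end's cache hit rate high. The class and backend names here are illustrative, not from the article.

```java
import java.util.List;

// Distribution-layer routing sketch: same product ID -> same back-end nginx,
// so the cached copy of that product's data is always reused.
public class BackendRouter {
    private final List<String> backends;

    public BackendRouter(List<String> backends) {
        this.backends = backends;
    }

    // floorMod keeps the index non-negative even for negative hash codes.
    public String route(String productId) {
        int idx = Math.floorMod(productId.hashCode(), backends.size());
        return backends.get(idx);
    }

    public static void main(String[] args) {
        BackendRouter r = new BackendRouter(List.of("nginx-backend-1", "nginx-backend-2"));
        // Repeated requests for one product are pinned to one backend.
        System.out.println(r.route("product_1").equals(r.route("product_1"))); // true
    }
}
```

In production this hashing usually lives in the distribution-layer nginx itself (e.g. via an upstream hash directive or Lua), but the idea is identical.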

If a user's request finds no cached data in nginx, it goes on to the Redis cache. Redis can hold a cache of the full data set, and horizontal scaling gives it high concurrency and high availability.

One: Persistence mechanism

The persistence mechanism means persisting Redis's in-memory data to disk; the disk files can then be periodically uploaded to S3 or another cloud storage service.

If both the RDB and AOF persistence mechanisms are enabled, the AOF will be used to reconstruct the data when Redis restarts, because the data in the AOF is more complete. It is recommended to enable both mechanisms: use AOF to guarantee that data is not lost and as the first choice for data recovery, and use RDB for cold backups at different granularities and for fast recovery when the AOF file is lost or corrupted.
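Enabling both mechanisms as recommended above is done in `redis.conf`; a typical fragment might look like this (the thresholds are common defaults, adjust to taste):

```conf
# redis.conf -- enable both persistence mechanisms
save 900 1                         # RDB: snapshot if >= 1 key changed in 900 s
save 300 10
save 60 10000
appendonly yes                     # AOF: append-only log, first choice for recovery
appendfsync everysec               # fsync once per second (safety/throughput balance)
auto-aof-rewrite-percentage 100    # rewrite when the AOF has doubled in size
auto-aof-rewrite-min-size 64mb
```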

PS: another pitfall to step around: if you want to recover data from an RDB file while the AOF switch is also on, recovery will keep failing, because on every restart Redis loads its data from the AOF (if you temporarily disable AOF, recovery works normally). In that case: stop Redis, disable AOF in the configuration, and copy the RDB file to the appropriate directory; start Redis, then hot-modify the configuration with `redis-cli config set appendonly yes`, which automatically generates an AOF file from the current in-memory data; finally stop Redis again, re-enable the AOF setting in the configuration file, and start it once more. The data now loads normally.

RDB: periodically persists the data in Redis; each persistence pass is a snapshot of the full data set. It has little impact on Redis performance and allows fast recovery from an RDB file after a failure.

AOF: writes to a log file in append-only mode; the entire data set can be rebuilt by replaying the write commands in the AOF log when Redis restarts. (In fact, each written log entry first goes to the Linux OS cache before being flushed to disk.) It has some performance impact on Redis but better guarantees data integrity. Redis uses a rewrite mechanism to keep the AOF file from growing too large, reconstructing an equivalent, compact set of commands from the in-memory data.

Two: Redis clustering

Replication

One-master-many-slaves architecture: the master node handles writes and synchronizes the data to the slave nodes (asynchronously), while the slave nodes handle reads. It is mainly used for read-write separation and horizontal scaling of reads. The master node's data must be persisted; otherwise, when the master crashes and restarts, its in-memory data is emptied, the empty data set is then replicated to the slaves, and all data is lost.
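A minimal configuration for this topology might look like the following (the master address is a placeholder; Redis 5+ uses `replicaof`, older versions use `slaveof`):

```conf
# redis.conf on each slave: point at the master
replicaof 10.0.0.1 6379       # placeholder master address
replica-read-only yes         # slaves serve reads only

# And on the master, keep persistence enabled, as warned above:
appendonly yes
```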

Sentinel

Sentinel is an important component in the Redis cluster architecture. It monitors whether the Redis master and slave processes are working properly, can send an alarm to the administrator when a Redis instance fails, and automatically fails over to a slave node when the master node goes down.
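A sketch of a Sentinel configuration for the setup described above (the master name, address, and timeouts are illustrative):

```conf
# sentinel.conf -- typically run on three or more sentinel processes
sentinel monitor mymaster 10.0.0.1 6379 2    # quorum of 2 sentinels must agree the master is down
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1           # resync slaves to the new master one at a time
```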

The most notable limitation of the first two architectures is that every node holds the same data, so they cannot handle massive data sets. A Sentinel cluster is therefore used when the data volume is not large.

Redis Cluster

Redis Cluster supports multiple master nodes, and each master node can have multiple slave nodes mounted under it. If a master goes down, one of its slaves is automatically promoted to master. Note that under Redis Cluster, slave nodes are mainly used for high availability and master-slave failover; if the slaves must also serve reads, that can be enabled through configuration (the Jedis source would also need to be modified to support read-write-separated operation). Slave nodes can be migrated automatically, so that each master node has a balanced number of slaves, and the redundant slaves in the architecture further raise system availability.

The Tomcat JVM heap cache mainly guards against a large-scale Redis disaster. If Redis suffers a large-scale outage and nginx sends a flood of traffic directly to the data production servers, this last-line Tomcat heap cache can still handle part of the requests and keep them all from flowing straight to the database.
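The article uses ehcache for this layer; as a dependency-free sketch of the same role, a bounded LRU map shows the idea: absorb reads when Redis is down, with a hard size limit so the JVM heap is not exhausted. The class is illustrative, not the article's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal stand-in for the Tomcat-side heap cache: a size-bounded LRU map.
public class HeapCache<K, V> {
    private final Map<K, V> lru;

    public HeapCache(int maxEntries) {
        // accessOrder=true -> the least-recently-used entry is evicted first
        this.lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;   // hard bound on heap usage
            }
        };
    }

    public synchronized V get(K key) { return lru.get(key); }
    public synchronized void put(K key, V value) { lru.put(key, value); }
    public synchronized int size() { return lru.size(); }
}
```

ehcache adds TTLs, statistics, and off-heap tiers on top of this basic idea.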

"Cache Data Update Policy"

For cached data that must be highly fresh, when a change occurs, write directly to both the database and the Redis cache (a dual-write scheme) to improve cache timeliness.

For less time-sensitive data, when a change occurs, use asynchronous MQ notification: the cache-update service listens for MQ messages from the data production service, then asynchronously pulls the latest data and updates the Tomcat JVM cache and the Redis cache. For the nginx local cache, new data can be pulled from Redis and written into nginx locally.

"Database and Redis cache double-write inconsistency"

Asynchronously serialize database and cache update operations: when data is updated, route the update operation, according to the data's unique identifier, to a queue inside the JVM. Each queue corresponds to one worker thread, which takes the queued operations and executes them serially, one by one. When an update operation is executed, the cache is deleted first, then the database is updated. If a read request arrives before the update has completed and reads an empty cache, it can send a cache-refresh request into the same queue, behind the pending update, and then wait synchronously until the cache update finishes.
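The serialization scheme above can be sketched as follows: updates for the same data ID are hashed to one single-threaded queue, so "delete cache, then update database" runs strictly in order per ID. The in-memory maps stand in for the real cache and database and are illustrative only.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Serialized double-write sketch: one single-threaded executor per queue.
public class SerializedWriter {
    private final ExecutorService[] queues;
    private final Map<String, String> cache = new ConcurrentHashMap<>(); // stand-in for Redis
    private final Map<String, String> db = new ConcurrentHashMap<>();    // stand-in for the DB

    public SerializedWriter(int nQueues) {
        queues = new ExecutorService[nQueues];
        for (int i = 0; i < nQueues; i++) {
            queues[i] = Executors.newSingleThreadExecutor(); // one worker per queue
        }
    }

    // Same id -> same queue -> strictly serial execution of its updates.
    private ExecutorService queueFor(String id) {
        return queues[Math.floorMod(id.hashCode(), queues.length)];
    }

    public void update(String id, String value) {
        queueFor(id).submit(() -> {
            cache.remove(id);   // delete cache first...
            db.put(id, value);  // ...then update the database
        });
    }

    // Read path: on a cache miss, reload from the DB and repopulate the cache.
    public String readThrough(String id) {
        return cache.computeIfAbsent(id, db::get);
    }

    // Demo helper: wait until every queue has drained its pending updates.
    public void flush() {
        for (ExecutorService q : queues) {
            try {
                q.submit(() -> {}).get();
            } catch (Exception e) {
                throw new IllegalStateException(e);
            }
        }
    }

    public void shutdown() {
        for (ExecutorService q : queues) q.shutdown();
    }
}
```

The real scheme additionally deduplicates refresh requests already sitting in the queue and bounds how long a read may wait; those details are omitted here.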

"Cache Avalanche Solution"

If the Redis cluster crashes completely, the cache service piles up a large number of requests waiting on Redis, tying up its resources; it then sends a large number of requests to the source service to query the DB, overloading the database until it crashes, at which point requests to the source service also pile up waiting. With all of its resources consumed in fruitless attempts to reach Redis and the source service, the cache service eventually cannot serve anything, and the entire site goes down.

Pre-event solution: build a highly available Redis cluster architecture: master-slave, one master with multiple slaves, and preferably deploy the cluster across two data centers.

In-event solution: deploy a layer of ehcache that can absorb part of the pressure when the whole Redis cluster fails; isolate access to the Redis cluster so that failed Redis requests do not tie up every resource; and apply rate limiting and resource isolation to access to the source service.
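The resource-isolation part of this solution can be sketched with a semaphore: cap the number of threads allowed to call Redis at once, and when the permit pool is exhausted (for example, because Redis is down and calls are hanging), degrade to the local ehcache layer instead of letting every request pile up. Names and the lookup callback are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

// Bounded access to Redis with a local-cache fallback.
public class IsolatedRedisAccess {
    private final Semaphore redisPermits;
    private final Map<String, String> localCache = new ConcurrentHashMap<>(); // ehcache stand-in

    public IsolatedRedisAccess(int maxConcurrentRedisCalls) {
        this.redisPermits = new Semaphore(maxConcurrentRedisCalls);
    }

    public String get(String key, Function<String, String> redisLookup) {
        if (redisPermits.tryAcquire()) {               // bounded Redis access
            try {
                String v = redisLookup.apply(key);
                if (v != null) localCache.put(key, v); // refresh the local copy
                return v;
            } finally {
                redisPermits.release();
            }
        }
        // Permits exhausted: degrade to the local heap cache instead of waiting.
        return localCache.get(key);
    }
}
```

Libraries such as Hystrix or Resilience4j provide this pattern (plus the half-open recovery probing mentioned below) in production-ready form.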

After-the-fact solution: if the Redis data was backed up, restore it directly and restart Redis; if the Redis data is completely lost or too stale, warm the cache up quickly and then bring Redis back online. Finally, thanks to the half-open policy of the resource-isolation component, once Redis is detected to be healthy again, all requests are automatically restored.

"Nginx cache expiry causes the pressure on Redis to multiply"

To prevent this, set an expiration time on nginx's locally cached data (for example, with some random jitter added), so that a large number of entries do not all expire at the same moment and send a flood of requests straight into Redis.
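The jitter idea can be sketched in one method: give each cached entry a base TTL plus a random offset, so expirations are spread out instead of synchronized. The concrete numbers are illustrative; the same trick applies in whichever layer sets the TTL.

```java
import java.util.concurrent.ThreadLocalRandom;

// Spread out cache expirations to avoid a synchronized stampede onto Redis.
public class JitteredTtl {
    // e.g. ttlSeconds(600, 120) -> a value between 600 and 720 seconds
    public static long ttlSeconds(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```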
