A summary of high availability of web sites under high concurrency

Source: Internet
Author: User
Tags: nginx, reverse proxy, redis cluster

This article summarizes the high-concurrency problems I have encountered at work and the ideas and methods behind the solutions. It is somewhat fragmented and will be enriched over time; if anything is wrong, please correct me.

1. Front-end/back-end separation

After separating the front end from the back end, static resources can be deployed on a lighter-weight web container such as Nginx and accelerated with a CDN. Marketing or activity pages can be managed through a content management platform, so pages can be changed in real time and iterated quickly. Once the service layer is separated out, it can focus on business logic, which also makes subsequent service splitting easier.
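As a minimal sketch of serving static resources from Nginx (the domain and paths below are hypothetical), a server block can set long cache lifetimes so a CDN and browsers can cache the assets aggressively:

```nginx
server {
    listen 80;
    server_name static.example.com;   # hypothetical static-resource domain

    root /var/www/static;             # built front-end assets (assumed path)

    location / {
        # long cache lifetime so the CDN and browsers cache aggressively
        expires 30d;
        add_header Cache-Control "public";
        try_files $uri $uri/ =404;
    }
}
```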

2. Split services by business domain

As business volume grows, a monolithic architecture can no longer handle high-concurrency scenarios, so the server side is split into multiple independent services by business domain. After the split, these services are deployed and scaled independently and are fully decoupled from one another.

Services are usually stateless after the split; session management is typically moved to the gateway layer or to a unified SSO service. Freeing business services from the complexity of session management improves their efficiency.

The split services support distributed multi-instance deployment and can be scaled out rapidly on demand, for example during marketing campaigns, improving the site's availability in the face of sudden traffic bursts.

3. Cluster deployment, load balancing

Since the split services support distributed deployment, cluster deployment plus load balancing can further improve site availability. In cluster mode, multiple instances share the traffic, and member instances can be added or removed flexibly; the load-balancing mechanism ensures that when some instances fail, traffic is quickly switched to the remaining healthy instances, increasing the availability of the site.

There are usually two ways to load balance:

    • Hardware load balancing: F5 (layer-4 or layer-7 network proxy)
    • Software load balancing: Nginx reverse proxy
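As a hedged sketch of the Nginx reverse-proxy approach (the addresses, ports, and thresholds here are made up), an upstream block spreads traffic across instances and takes a failing instance out of rotation:

```nginx
# hypothetical upstream of three service instances
upstream app_servers {
    server 10.0.0.11:8080 weight=2;   # receives twice the traffic
    server 10.0.0.12:8080;
    # after 3 failed attempts, skip this instance for 30 seconds
    server 10.0.0.13:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The max_fails/fail_timeout parameters give the quick failover described above: a failed member is temporarily removed and traffic flows to the healthy instances.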

Common load-balancing algorithms include: round robin, random, source-address hashing, weighted round robin, weighted random, and least connections.

Refer to: 6 load-balancing algorithms
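As an illustration of weighted round robin, here is a minimal sketch in plain Java of the smooth weighted variant (the algorithm Nginx implements); the server names and weights are made up:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of smooth weighted round robin: heavier servers are
// chosen proportionally more often, but interleaved rather than in
// one long consecutive run.
public class WeightedRoundRobin {
    static class Server {
        final String name;
        final int weight;
        int currentWeight = 0;
        Server(String name, int weight) { this.name = name; this.weight = weight; }
    }

    private final List<Server> servers = new ArrayList<>();

    public void add(String name, int weight) { servers.add(new Server(name, weight)); }

    // Each round: raise every server's current weight by its weight,
    // pick the highest, then lower the winner by the total weight.
    public String next() {
        int total = 0;
        Server best = null;
        for (Server s : servers) {
            s.currentWeight += s.weight;
            total += s.weight;
            if (best == null || s.currentWeight > best.currentWeight) best = s;
        }
        best.currentWeight -= total;
        return best.name;
    }

    public static void main(String[] args) {
        WeightedRoundRobin lb = new WeightedRoundRobin();
        lb.add("10.0.0.11", 3);
        lb.add("10.0.0.12", 1);
        for (int i = 0; i < 8; i++) System.out.print(lb.next() + " ");
    }
}
```

With weights 3:1, the first server handles three of every four requests, spread out across the cycle instead of bunched together.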

4. Use caching to improve query service performance

In read-heavy, write-light scenarios, caching is well suited to improving query performance and reducing pressure on the database. A Redis cluster is often used as the cache server; the Redis Cluster mechanism keeps the cache service highly available, and when the cache fails, data is fetched from the database in time.

A concrete practice of using Spring + MyBatis + Redis as an extended cache will be given later.
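Ahead of that, the underlying cache-aside pattern can be sketched in plain Java; the map and function below are stand-ins for a Redis client and a MyBatis query, and all names are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Cache-aside sketch: read from the cache first; on a miss (or a cache
// failure), fall back to the database and repopulate the cache.
public class CacheAside {
    private final Map<String, String> cache = new HashMap<>(); // stand-in for Redis
    private final Function<String, String> db;                 // stand-in for a MyBatis query

    public CacheAside(Function<String, String> db) { this.db = db; }

    public String get(String key) {
        String v;
        try {
            v = cache.get(key);
        } catch (RuntimeException e) {
            v = null; // cache service failure: degrade to the database
        }
        if (v == null) {
            v = db.apply(key);                 // cache miss: hit the database
            if (v != null) cache.put(key, v);  // repopulate for the next read
        }
        return v;
    }

    // Writes invalidate the cached entry so the next read reloads fresh data.
    public void invalidate(String key) { cache.remove(key); }
}
```

The try/catch mirrors the point above: when the cache service fails, the query degrades to the database instead of failing outright.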

5. Database read/write separation; splitting databases and tables

As the business grows, once the user base reaches the tens of millions the database becomes the final bottleneck. First, based on the business scenario, consider read/write separation: writes go to the master library and reads go to the slave libraries. If read/write separation still cannot meet the high-concurrency requirements, consider scaling out by splitting databases and tables. A database accessed only through single-table operations is very friendly to such splitting, so agree early in development that business database operations must be single-table operations.

We have not yet tried read/write separation or database/table splitting, but the current system is designed around the single-table-operation convention, so later horizontal scaling will be easier.

Actually landing a database-splitting plan involves difficulties and trade-offs; a key point to consider is expanding the number of shards later, which remains to be studied.
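A minimal sketch of the read/write routing rule described above (the data-source names are placeholders): writes always go to the master, and plain SELECTs may go to a randomly chosen slave.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Read/write-separation routing sketch: writes go to the master,
// reads are spread across the slaves.
public class ReadWriteRouter {
    private final String master;
    private final List<String> slaves;

    public ReadWriteRouter(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    public String route(String sql) {
        String s = sql.trim().toLowerCase();
        // Only plain SELECTs may go to a slave; everything else mutates state.
        if (s.startsWith("select") && !slaves.isEmpty()) {
            return slaves.get(ThreadLocalRandom.current().nextInt(slaves.size()));
        }
        return master;
    }
}
```

A real router must also pin reads that need read-your-own-writes consistency to the master, since slaves lag behind by replication delay; that is omitted here.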

6. Use database transactions as sparingly as possible

In general, database-level transactions are used to guarantee atomicity and data consistency for business operations, but transactions come with a performance cost, so they should be avoided unless they are truly needed. In practice, this is achievable.

So when can transactions be omitted? In short, when the preceding database operation can safely be repeated, no transaction control is needed. Take user registration as an example: we first insert into the 'credential' table and then into the 'user' table, without transaction control, by placing the credential operation before the user-table operation. Even if the credential insert succeeds and the user insert fails, data consistency is unaffected: when the user re-registers, a new userid is generated, the credential is inserted again, and then the user row is inserted. For the user, the outcome is the same as with transaction control; the only downside is that an extra credential record is left behind. The credential insert here is a preceding but non-critical database operation, because what registration ultimately requires is a userid inserted into the user table, and subsequent logins are validated by reading user information from the user table.
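The registration ordering above can be sketched with in-memory "tables" (all names here are hypothetical; a real implementation would insert via MyBatis):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// Transaction-free registration sketch: the repeatable, non-critical
// insert (credential) runs first; the critical insert (user) runs last.
public class Registration {
    final List<String> credentialTable = new ArrayList<>(); // may hold orphans
    final Map<String, String> userTable = new HashMap<>();  // userId -> name
    boolean failUserInsert = false;                         // simulates a failure

    // Returns the new userId, or null if registration failed.
    public String register(String name) {
        String userId = UUID.randomUUID().toString(); // fresh id per attempt
        credentialTable.add(userId);                  // step 1: non-critical insert
        if (failUserInsert) return null;              // user insert fails here
        userTable.put(userId, name);                  // step 2: critical insert
        return userId;
    }
}
```

If step 2 fails, re-registering simply generates a new userId and repeats both inserts; the only cost is an orphaned credential row, and login still works because it validates against the user table.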

When must transactions be used? In an order/payment system, the bottom line is that the same order must not be paid twice, so order-payment scenarios need transaction control to prevent duplicate payment of an order.

7. Make non-critical business asynchronous

Non-critical business refers to steps in a flow that the user does not care about. For example, after registration completes, a marketing SMS is sent to the user; the user does not care whether that SMS is delivered promptly. Such non-critical steps can be designed to run asynchronously.

Common methods of processing are:

    • Simple approach: run such tasks in a separate thread or thread pool outside the main business threads. Especially if the main thread pool is reused, limit both the thread count and the queue size; otherwise, in some abnormal cases all asynchronous threads may block, tasks pile up, and the main business threads are affected.
    • Better approach: use a queue as the transfer point for tasks, with a background job processing them offline. This has further advantages; for example, the job exists as shared infrastructure and can use computing resources efficiently.

Before making a step asynchronous, consider whether the scenario is critical and whether tasks can tolerate loss. If loss cannot be tolerated, use the queue prudently and persist the messages.
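The "simple approach" above can be sketched as a dedicated bounded pool (the sizes are illustrative, not tuned values): both the thread count and the queue capacity are capped, and a discard policy drops excess tasks rather than blocking the caller, which is acceptable only because these tasks tolerate loss.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded pool for non-critical tasks: capped threads, capped queue,
// and a discard policy so overload never blocks the business thread.
public class BoundedAsync {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4,                                  // bounded thread count
            60, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(100),         // bounded queue
            new ThreadPoolExecutor.DiscardPolicy()); // drop when saturated

    public void submit(Runnable task) { pool.execute(task); }

    // Shut down and wait for queued tasks to drain (for clean exit).
    public boolean awaitIdle(long millis) throws InterruptedException {
        pool.shutdown();
        return pool.awaitTermination(millis, TimeUnit.MILLISECONDS);
    }
}
```

For tasks that must not be lost, replace the discard policy with the queue-plus-background-job approach and persist the messages.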

8. Using Asynchronous IO

Tomcat can be configured to use NIO to improve service performance, which also helps improve site availability.

The differences between NIO, BIO, and AIO need further research and will be filled in later.

Todo......
