High-Concurrency Server Architecture & Evolution of Large Website Architecture

Source: Internet
Author: User
Tags: database, load balancing, database sharding


Three requirements for a server:

High performance: respond rapidly to a large number of requests.

High availability: uninterrupted service; when a node fails, the system automatically switches over to a standby, which is called failover.

Scalability: the ability to scale out by adding machines, which requires cross-machine communication (typically over TCP).

In addition, any networked system can be abstracted as a client/server (C/S) architecture. The B/S (browser/server) model we often speak of is essentially a C/S architecture, with the browser acting as the client.

A typical server architecture:


Note: epoll is generally the most efficient network I/O multiplexing mechanism on Linux.
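To make the note concrete, here is a minimal sketch of an epoll-based echo server in Python (Linux only; the port, backlog, and buffer size are arbitrary illustrative choices):

```python
import select
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("0.0.0.0", 9000))
sock.listen(128)
sock.setblocking(False)

ep = select.epoll()
ep.register(sock.fileno(), select.EPOLLIN)
conns = {}  # fd -> socket

while True:
    for fd, event in ep.poll():          # block until some descriptor is ready
        if fd == sock.fileno():          # new incoming connection
            conn, _ = sock.accept()
            conn.setblocking(False)
            ep.register(conn.fileno(), select.EPOLLIN)
            conns[conn.fileno()] = conn
        elif event & select.EPOLLIN:     # a client socket became readable
            data = conns[fd].recv(4096)
            if data:
                conns[fd].send(data)     # echo the data back
            else:                        # client closed the connection
                ep.unregister(fd)
                conns.pop(fd).close()
```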

Because the server must efficiently handle a large number of concurrent connections, performance bottlenecks can appear in several places. The causes of, and solutions to, the bottlenecks at each location are described below:

(1) Database bottlenecks

[1] Exceeding the maximum number of database connections: add a DAL (data access layer) in which requests queue while waiting for a connection. A connection pool can also be used (DAL queue service + connection pool); connections then do not have to be re-established, and a request simply takes a ready connection from the pool.
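A minimal sketch of such a pool, where make_conn() stands in for whatever database driver is actually in use:

```python
import queue
from contextlib import contextmanager

class ConnectionPool:
    """Fixed-size pool: callers queue (block) until a connection is free."""

    def __init__(self, make_conn, size=10):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())       # open all connections up front

    @contextmanager
    def connection(self):
        conn = self._pool.get()               # wait in the queue if none is free
        try:
            yield conn
        finally:
            self._pool.put(conn)              # return the connection for reuse
```

A caller would then write `with pool.connection() as conn: conn.execute(...)`, never opening a fresh connection per request.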

 

[2] Solutions for queries that exceed the time limit:

(1) Put the business logic in the application server. Logic inside the database should not be too complex; the database should only assist with business processing.

(2) Cache the data. Caching, however, raises the following problems:

1. Cache freshness: with a timeout (TTL) scheme, an expired entry triggers a re-query of the database, and hot data is kept in the cache. This approach is simple but not very real-time.

2. Update-driven invalidation: as soon as the database is updated, the system immediately notifies the front end to update the cache. The cache is modified right after each update, so real-time behavior is better, but this may be harder to implement.

If memory is insufficient, cold entries can be evicted to external disk using a cache paging mechanism (similar to memory paging in an OS).
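A minimal sketch covering both strategies, where query_db() is a placeholder for the real database query:

```python
import time

class TTLCache:
    def __init__(self, ttl=60):
        self.ttl = ttl          # strategy 1: entries expire after ttl seconds
        self.store = {}         # key -> (value, expiry timestamp)

    def get(self, key, query_db):
        hit = self.store.get(key)
        if hit and hit[1] > time.time():
            return hit[0]                                  # fresh cache hit
        value = query_db(key)                              # expired or missing: re-query the database
        self.store[key] = (value, time.time() + self.ttl)  # keep the hot data cached
        return value

    def invalidate(self, key):
        # Strategy 2: call this immediately after the database is updated.
        self.store.pop(key, None)
```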

All of the above can be implemented with open-source products: NoSQL ---> usually read as "Not Only SQL".

NoSQL stores mainly hold non-relational data as key/value pairs.

There are also distributed open-source caches such as Redis and memcached. These can be deployed across servers, but if the cache is deployed on an application server it is local to that machine, and access from its peer servers is very awkward.

If instead the cache runs on separately configured machines as a distributed cache, it is global: all application servers can access it, which is convenient and fast.
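A sketch of the cache-aside pattern against such a dedicated cache machine, assuming the redis-py client is available (the host name, TTL, and key scheme are illustrative):

```python
import redis

# A global cache on its own machine, reachable from every application server.
r = redis.Redis(host="cache-server", port=6379)

def get_user(user_id, query_db):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return cached              # served from the distributed cache
    value = query_db(user_id)      # cache miss: fall back to the database
    r.setex(key, 60, value)        # store with a 60-second expiry
    return value
```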

 

[3] Database read/write splitting

Query operations are generally far more frequent than writes, so the database itself can be load balanced: write operations go to the master server, read operations go to the slave servers, and the DAL performs the read/write split. The master and slaves are kept synchronized through the replication mechanism.
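A minimal sketch of such routing in the DAL, where master and slaves are placeholders for real driver connections:

```python
import itertools

class ReadWriteRouter:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = itertools.cycle(slaves)   # round-robin across the replicas

    def execute(self, sql, params=()):
        is_read = sql.lstrip().upper().startswith("SELECT")
        conn = next(self.slaves) if is_read else self.master   # reads -> slave, writes -> master
        return conn.execute(sql, params)
```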

[4] Data partitioning (database sharding and table sharding)

Vertical partitioning (sharding by database): tables are distributed to different databases according to business logic, e.g., user tables in one database and business tables in another.

Horizontal partitioning (sharding by row) is more common: the records of one table are split across different databases, e.g., with 10 databases, a record's key modulo 10 selects its database. This scheme also makes it easy to scale out horizontally.
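A sketch of the record-to-database mapping, where shards would hold the 10 database connections from the example:

```python
import zlib

def shard_for(key, shards):
    # Use a stable hash (not Python's randomized built-in hash()) so the
    # key -> database mapping stays identical across processes and restarts.
    index = zlib.crc32(str(key).encode()) % len(shards)
    return shards[index]

# e.g. shard_for(user_id, shards).execute("SELECT ... WHERE user_id = %s", (user_id,))
```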

(2) Application Server bottlenecks

Add a task server to load-balance the allocation of tasks to application servers. There are two schemes, passive and active:

(1) The application server passively accepts tasks:

The task server performs the load balancing and the application server exposes an interface; the task server can be treated as the client and the application server as an HTTP server.

The task server monitors the load of each application server (CPU, I/O, concurrency, memory) through that interface. After querying this information, it selects the server with the lowest load (determined by some algorithm) and assigns the task to it (see the sketch after this section).

(2) The application server actively fetches tasks from the task server:

After an application server finishes processing its own tasks, it actively turns to the task server to apply for a new one.

Drawbacks: scheme (1) may distribute load unfairly; in scheme (2), if the application servers handle different kinds of services, the programming logic of the task server can become very complicated.

Multiple task servers can be deployed and monitor each other through heartbeats ------> high availability (failover mechanism).
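A sketch of the least-loaded selection in scheme (1); the metric names, weights, and send_task() are illustrative placeholders, not a prescribed algorithm:

```python
def pick_server(servers):
    # servers: list of dicts such as
    # {"addr": "10.0.0.5", "cpu": 0.30, "mem": 0.50, "io": 0.10}
    # as reported by each application server's monitoring interface.
    def load(s):
        return 0.5 * s["cpu"] + 0.3 * s["mem"] + 0.2 * s["io"]   # one possible weighting
    return min(servers, key=load)                                # lowest combined load wins

def dispatch(task, servers, send_task):
    target = pick_server(servers)
    send_task(target["addr"], task)   # the task server acts as the client here
```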

In this way, a bottleneck at any location (database, cache, application server, or task server) can be relieved simply by adding servers.

To program servers efficiently, we also need to know the four major performance killers of server programs:

(1) Data copying ----> reduce it with caching.

(2) Context switching -----> create threads judiciously. Do we need multiple threads at all? On a single-core server, a single-threaded state machine is the most efficient programming model (it does its own switching, much as the OS switches between processes).

Multithreading can exploit the full performance of a multi-core server, but beware the overhead of switching between threads.

(3) Memory allocation ------> use a memory pool to reduce allocation requests to the operating system (a sketch follows this list).

(4) Lock contention -------> restructure the logic to minimize the use of locks.
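A minimal sketch of item (3), an object pool that recycles fixed-size buffers instead of asking the allocator (and ultimately the OS) for memory on every request:

```python
import queue

class BufferPool:
    def __init__(self, count=64, size=4096):
        self._free = queue.Queue()
        for _ in range(count):
            self._free.put(bytearray(size))   # allocate everything up front

    def acquire(self):
        return self._free.get()               # reuse an existing buffer

    def release(self, buf):
        self._free.put(buf)                   # return it instead of freeing it
```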

The above information can be summarized into the following figure:

Next we introduce the evolution of large website architectures, which largely mirrors the problem-handling process above:

[Step 1] Separation of web server and database

 


 

Apache/Nginx handle static content (front-end server); JBoss/Tomcat handle dynamic content (back-end server).

[Step 2] Cache processing

1. Browser caching reduces requests to the website (an illustration follows this list).

2. Static-page caching on the front-end server reduces requests to the web servers.

3. ESI (Edge Side Includes) handles the relatively static parts of dynamic pages.

4. Local caching reduces queries to the database.
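As an illustration of item 1, a server can ask browsers to cache a static asset so that repeat visits never reach the site at all; a minimal sketch using the Python standard library (the asset, port, and one-day lifetime are arbitrary):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"body { color: black; }"   # stand-in for a real static file
        self.send_response(200)
        self.send_header("Content-Type", "text/css")
        self.send_header("Cache-Control", "public, max-age=86400")  # let browsers cache it for a day
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```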

[Step 3] Web server cluster + read/write splitting

 

Load balancing: front-end load balancing

DNS load balancing: on the DNS server, the same name is configured with multiple different addresses, so different clients accessing the same name obtain different addresses.

Reverse proxy: the proxy server forwards each incoming request to one of several internal web servers, distributing requests evenly to achieve load balancing (see the sketch below). In the standard (forward) proxy mode, one client uses the proxy to access many external web servers; this mode is instead called a reverse proxy because, conversely, many clients use it to access internal web servers.

NAT-based load balancing technology (e.g., LVS); F5 hardware load balancing.

Application-server load balancing (the task server described above).

Database load balancing (the read/write splitting described above).
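A minimal sketch of a round-robin reverse proxy using only the Python standard library (the backend addresses and port are illustrative; real deployments use Nginx, LVS, or similar):

```python
import itertools
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The internal web servers behind the proxy.
backends = itertools.cycle(["http://10.0.0.1:8080", "http://10.0.0.2:8080"])

class Proxy(BaseHTTPRequestHandler):
    def do_GET(self):
        upstream = next(backends)                   # pick the next backend in turn
        with urlopen(upstream + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                      # relay the backend's reply to the client

HTTPServer(("0.0.0.0", 8000), Proxy).serve_forever()
```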
[Step 4] CDN, distributed cache, and database/table sharding

 

Popular distributed cache solutions today include memcached, membase, Redis, etc. Basically, any current NoSQL product can be used as a distributed cache solution.
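The source does not say how keys are spread across the cache nodes; consistent hashing is one common technique, sketched below with virtual nodes so that adding or removing a cache server remaps only a small fraction of the keys:

```python
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, replicas=100):
        self._ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(replicas):         # virtual nodes smooth the distribution
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(str(key))
        # First ring position at or after the key's hash, wrapping around.
        i = bisect.bisect(self._ring, (h,)) % len(self._ring)
        return self._ring[i][1]

# e.g. ring = ConsistentHashRing(["cache-1", "cache-2", "cache-3"])
#      ring.node_for("user:42")  -> the cache server that owns this key
```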

[Step 5] Multi-data center + distributed storage and computing

 

Technical point: distributed file system (DFS).


 

Map/Reduce:

When a file is too large to be loaded into memory, its data can be split into key/value pairs and processed in pieces; this is the map phase (completed by multiple machines).

The merging phase is called reduce. Map --> combine --> reduce: this is what we call distributed computing.
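A toy single-machine illustration of this flow, counting words; in a real system each phase runs on many machines over chunks of a file too big for any one machine's memory:

```python
from collections import defaultdict
from itertools import chain

def map_phase(chunk):
    return [(word, 1) for word in chunk.split()]   # emit key/value pairs

def reduce_phase(pairs):
    counts = defaultdict(int)
    for word, n in pairs:                          # merge values that share a key
        counts[word] += n
    return dict(counts)

chunks = ["to be or not", "to be"]                 # pretend each chunk lives on its own machine
mapped = [map_phase(c) for c in chunks]            # the map phase, done in parallel
print(reduce_phase(chain.from_iterable(mapped)))   # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```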
