What are the common Web server architectures?

Source: Internet
Author: User
Tags: rabbitmq

I recently took the time to put together a reference table of performance-test parameters: in plain terms, an explanation of how to read the parameters you constantly see in Linux commands such as vmstat, top, and iostat, since we need them all the time in testing work. My view of performance testing is this: start from the fundamentals and keep learning new architectures (big data, message queues, caching, clustering, and so on). Only with that foundation can you do performance testing well. If you think picking up a tool is all it takes, then I can only say you are headed in the wrong direction.

Let me start by sharing this picture with everyone, and then this issue's selected answer, which is about web site server architecture.

(Figure: comparative analysis of common performance metrics)

Problem
What are the common Web server architectures?

Selected answer

Answered by Fang:

All of the architectures below assume the Linux kernel has already been tuned.

Beginner: (single-machine mode)

Hypothetical configuration: dual-core 2.0 GHz CPU, 4 GB RAM, SSD

Basic stack: Apache (PHP) + MySQL, or IIS + MSSQL
(the most basic stack, able to handle ordinary access requests)

Step 1: Replace Apache with Nginx and add a cache layer, since database speed is the biggest bottleneck:
Nginx (PHP) + Memcached + MySQL
(at this point you can handle small amounts of traffic)
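The role the cache layer plays here is the classic read-through pattern: check Memcached first and only fall back to MySQL on a miss. A minimal Python sketch follows; note that the dict stands in for a real Memcached client and the slow function stands in for a MySQL query, both assumptions for the sake of a self-contained example.

```python
import time

# Stand-ins for the real backends: a dict plays the role of Memcached,
# and a slow function plays the role of a MySQL query. In production you
# would use a real client library (e.g. pymemcache) and a real database.
cache = {}

def query_mysql(user_id):
    """Pretend database lookup: the expensive path we want to avoid."""
    time.sleep(0.01)  # simulate query latency
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=60):
    """Read-through cache: hit the cache first, fall back to the database."""
    key = f"user:{user_id}"
    entry = cache.get(key)
    if entry is not None and entry[0] > time.time():
        return entry[1]                      # cache hit
    value = query_mysql(user_id)             # cache miss -> database
    cache[key] = (time.time() + ttl, value)  # populate cache with a TTL
    return value
```

The first call for a given user pays the database cost; every call within the TTL after that is served from memory, which is why the database stops being the bottleneck for reads.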

Step 2: As traffic grows, the first problem appears: CGI cannot keep up with Nginx's high I/O performance. At this point you can replace the script with a compiled extension to improve performance. A C extension is one good way, but most people prefer to finish the task in a simple scripting language. Taobao's team open-sourced the ngx_lua module, which lets you write Nginx extensions in Lua, and the concurrency you can handle now surpasses Step 1.
Nginx (ngx_lua or C) + Memcached + MySQL
(handling three to four thousand simultaneous online users is no problem at this point)

Step 3: As users keep growing, MySQL write speed becomes the next big bottleneck. Reads go through the Memcached cache, but writes hit MySQL directly, and performance suffers badly. The fix is to add a layer of write buffering between Nginx and MySQL, which is where a queue system comes in. Take RabbitMQ as an example: every write operation is thrown into the queue, and a consumer program behind it pulls the messages out one at a time and writes them to MySQL. Since RabbitMQ accepts writes many times faster than MySQL does, the architecture's processing power moves up a class.
                   +--write--> RabbitMQ --+
Nginx (Lua or C) --+                      +--> MySQL
                   +--read--> Memcached --+

(at this point the concurrent throughput can handle on the order of ten thousand simultaneous online users)
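The write-buffering idea above can be sketched in a few lines. This is a minimal in-process model, assuming `queue.Queue` as a stand-in for RabbitMQ and a list as a stand-in for the MySQL table; in a real deployment the web tier would publish with a RabbitMQ client and a separate consumer process would drain the queue.

```python
import queue
import threading

write_queue = queue.Queue()  # stand-in for RabbitMQ
database = []                # stand-in for the MySQL table
db_lock = threading.Lock()

def handle_write(row):
    """Web tier: enqueue the write and return immediately."""
    write_queue.put(row)

def consumer():
    """Back-end worker: pull rows off the queue, write them to 'MySQL'."""
    while True:
        row = write_queue.get()
        if row is None:              # sentinel: shut down
            break
        with db_lock:
            database.append(row)     # the slow MySQL INSERT happens here
        write_queue.task_done()

worker = threading.Thread(target=consumer)
worker.start()
for i in range(100):
    handle_write({"id": i})          # fast: requests never wait on MySQL
write_queue.put(None)                # tell the worker to stop
worker.join()
```

The point of the design is that `handle_write` returns at queue speed, so request latency is decoupled from MySQL's write speed; the database simply absorbs the writes at its own pace.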

Intermediate: (divide and conquer)

At this point single-machine optimization has reached its limit; next, clustering shows its worth.

Database: the database is always the weakest link in overall throughput, and the most common remedy is sharding.
Sharding can be done in many ways; there is no fixed pattern, it depends on the situation. You can split by user-ID range, split reads from writes, and so on. Reference software: MySQL Proxy (which works much like LVS does for web servers).
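The two schemes just mentioned, splitting by user-ID range and splitting reads from writes, can be combined in one routing function. The sketch below is illustrative only: the DSN strings and range boundaries are made-up placeholders, not anything from the answer.

```python
# Hypothetical shard map: user IDs 0..999999 on shard 0, the next
# million on shard 1. Each primary has one read replica.
SHARDS = [
    {"ids": range(0, 1_000_000),         "dsn": "mysql://shard0"},
    {"ids": range(1_000_000, 2_000_000), "dsn": "mysql://shard1"},
]
REPLICAS = {
    "mysql://shard0": "mysql://shard0-replica",
    "mysql://shard1": "mysql://shard1-replica",
}

def route(user_id, write=False):
    """Pick a shard by user-ID range, then a primary or a read replica."""
    for shard in SHARDS:
        if user_id in shard["ids"]:
            primary = shard["dsn"]
            return primary if write else REPLICAS[primary]
    raise ValueError(f"no shard configured for user {user_id}")
```

A proxy such as MySQL Proxy sits in front and applies exactly this kind of rule, so the application code never has to know which physical server holds a given user.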

Cache: Memcached is usually deployed as a pool, spreading the cache across multiple Memcached nodes. How do you distribute the cached data evenly across the nodes? The usual approach is to number the nodes, hash each key, and take the remainder modulo the node count to pick the node. This spreads data fairly evenly, but it has one fatal flaw: if the number of nodes increases or decreases, close to 80% of the cached data migrates to a different node. The solution is covered in the advanced section.
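The "almost 80% migrates" figure is easy to demonstrate. The sketch below assigns keys with the naive hash-modulo scheme described above, then counts how many keys change node when a fourth-node pool grows to five (the node counts are illustrative).

```python
import hashlib

def node_for(key, n_nodes):
    """Naive scheme: hash the key, take the remainder by node count."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n_nodes

keys = [f"user:{i}" for i in range(10_000)]
before = {k: node_for(k, 4) for k in keys}   # pool of 4 Memcached nodes
after = {k: node_for(k, 5) for k in keys}    # add a 5th node
moved = sum(1 for k in keys if before[k] != after[k])
# A key stays put only when h % 4 == h % 5, which happens for roughly
# 1 key in 5 -- so about 80% of the cache goes cold after adding a node.
```

Every moved key is a guaranteed cache miss, which is why growing the pool under this scheme briefly dumps most of the read load back onto MySQL.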

Web servers: to build a web server cluster, the most common choice is LVS (a Memcached pool can be built the same way). The core of LVS is its dispatch node, which spreads incoming traffic across the back-end nodes according to a scheduling algorithm. Because scheduling consumes very few resources, it can sustain a very high throughput rate, and back-end nodes can be added or removed at will. The problem with this approach is that if the dispatch node goes down, the whole cluster goes down; the solution is covered in the advanced section.
Method 2: see HAProxy, "The Reliable, High Performance TCP/HTTP Load Balancer".
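Why can the dispatch node sustain such high throughput? Because per request it does almost nothing but pick a back-end and forward. The toy model below shows that with a round-robin policy; the addresses are made up, and real LVS also offers weighted and least-connection schedulers, so round robin is just one illustrative choice.

```python
import itertools

# Hypothetical back-end pool behind the dispatch node.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rr = itertools.cycle(backends)

def dispatch(request):
    """Dispatch node's whole job: pick the next back-end and forward.

    The request itself is untouched, which is why scheduling is cheap.
    """
    return next(rr)

picks = [dispatch(f"req-{i}") for i in range(6)]  # two full cycles
```

Adding or removing a back-end only means changing the pool the scheduler cycles over, which is why back-end nodes can come and go freely while the dispatch node remains the single point of failure.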

Advanced: (highly available, highly scalable clusters)

Resolving the single point of failure in scheduling:
The benefits of clustering are obvious, but one drawback is the single scheduling node: if that node fails, the whole cluster stops serving. We solve this with Keepalived (Keepalived for Linux).
Keepalived is based on the VRRP protocol (see the VRRP protocol introduction); be sure to understand VRRP before configuring it.
Keepalived lets multiple machines share one virtual IP and fails over automatically from a failed node to a standby node. So we set up two LVS dispatch nodes and configure Keepalived on them; when the active dispatch node fails, traffic automatically switches to the standby one. (The same applies to MySQL.)

Memcached cluster expansion:
Because we normally pick a Memcached node by hashing the key and taking the remainder modulo the node count, almost none of the existing cache entries will be hit once the node count changes.
Workaround: consistent hashing (see the consistent hashing introduction).

The rough idea of consistent hashing is to map hash values onto the range 0 to 2^32 - 1 and then map those numbers onto an imaginary circle. Nodes are placed on the same circle, and a key is stored on the first node found walking clockwise from the key's position, so adding or removing a node only remaps the keys between it and its neighbor instead of nearly the whole cache.
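The circle idea above can be sketched with a sorted list of positions and a binary search. This is a minimal illustrative implementation; the virtual-node count (`replicas`) and MD5 as the hash are common choices but assumptions here, not something the answer specifies.

```python
import bisect
import hashlib

def _hash(s):
    """Map a string onto the 0 .. 2**32 - 1 circle."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    """Consistent hashing with virtual nodes to smooth the distribution."""

    def __init__(self, nodes, replicas=100):
        self.ring = {}           # position on the circle -> node name
        self.sorted_keys = []    # sorted positions, for binary search
        for node in nodes:
            for i in range(replicas):
                pos = _hash(f"{node}#{i}")
                self.ring[pos] = node
                bisect.insort(self.sorted_keys, pos)

    def node_for(self, key):
        """Walk clockwise from the key's position to the next node."""
        pos = _hash(key)
        idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

ring = HashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")  # the Memcached node holding this key
```

Unlike the modulo scheme, adding a node to this ring only steals the keys that fall between the new node's positions and their clockwise neighbors, so on average about 1/N of the cache moves instead of about 80% of it.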

Reprint: http://www.diggerplus.org/archives/2233

