How to read the system architecture diagram of a Web service

One of the defining characteristics of a Web service is heavy traffic and a large volume of data, and a single server simply cannot support a large-scale service on its own. That is why we keep running into terms like the following, which can be baffling at first:

*: System architecture, physical architecture, Web service infrastructure

*: Application server

*: Database server

*: Index server

*: Reverse proxy server

*: Cache server

*: Distributed, scalable

*: CPU load, I/O load

If these terms are unfamiliar, this article is a good starting point. For background on Web service architecture, there are a few good articles worth reading first: The Evolution of Large-Scale Website Architecture (Part 1), The Evolution of Large-Scale Website Architecture (Part 2), and The Soul of Large Websites: Performance.

The main goal of this article is to learn how to read the following architecture diagram:

CPU load and I/O load

Let's start with CPU and I/O. A typical Web service works like this: the user sends a request to the server through a browser, and the server pulls data from the database, processes it, and returns an HTML page to the user.

The four "<-" arrows in the diagram represent work that consumes the server's CPU resources, while fetching data from the database consumes its I/O resources. As the number of users and requests grows, the load on the CPU (and on I/O) grows with it; and as the amount of stored data piles up, the server's I/O resources become strained as well.

For example, suppose a single server can handle 3,000 requests (PV, page views) per minute; that works out to roughly 130 million PV per month, and beyond that volume a single server simply cannot hold up. Moreover, every request has to pull data from the file system, and since a disk read takes on the order of 100,000 to 1,000,000 times longer than a memory read, data retrieval cannot keep pace once the number of requests per minute climbs, and the database goes down.
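As a quick back-of-the-envelope check of those figures (assuming a steady 3,000 requests per minute and a 30-day month), the arithmetic looks like this:

```python
# Back-of-the-envelope capacity check for the example above.
# Assumes a constant 3,000 requests per minute and a 30-day month.
requests_per_minute = 3_000

minutes_per_month = 60 * 24 * 30           # 43,200 minutes
monthly_pv = requests_per_minute * minutes_per_month

print(f"{monthly_pv:,} PV per month")      # 129,600,000 -> roughly 130 million PV

# Disk vs. memory: a random disk read (~10 ms) is on the order of
# 10^5 to 10^6 times slower than a memory read (~10-100 ns),
# which is why I/O becomes the bottleneck long before the CPU does.
disk_read_s = 10e-3
memory_read_s = 100e-9
print(f"disk/memory latency ratio: ~{disk_read_s / memory_read_s:,.0f}x")
```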

Scalability

How do you cope with demand that keeps growing as the service scales? That is where your system's scalability comes in:

Horizontal scaling (scale-out): also called the distributed approach. If one server can't take the load, just add more servers. Reality, however, is far messier than that ideal.

Vertical scaling (scale-up): the approach favored by deep-pocketed finance companies and enterprise software vendors, i.e. buying a bigger, faster machine. The catch is that server price does not grow in proportion to performance: past a certain point, each extra unit of performance costs more and more money, so the marginal price of server performance keeps rising. For a grassroots Internet startup, that is clearly unaffordable.

Scaling CPU capacity

Spreading CPU load is relatively easy, because CPU computation carries no dependencies: the result of the current request does not depend on the result of the previous one. The statelessness of the HTTP protocol is a good example. If one CPU can't keep up, we simply clone a few identical servers and let them work side by side; these cloned servers are what we usually call application servers.

The boundary between the application server and the Web server is not very sharp. The Web server is responsible for receiving the user's requests and returning resource objects to the user, while the application server is responsible for generating those resource objects through computation (for example, by invoking a CGI script).
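As a minimal sketch of the "just clone it" idea, here is a round-robin dispatcher of the kind a reverse proxy might run in front of cloned application servers. The server addresses and the itertools-based rotation are illustrative assumptions, not anything from the original article:

```python
import itertools

# Hypothetical pool of cloned, stateless application servers.
APP_SERVERS = [
    "http://app1.internal:8000",
    "http://app2.internal:8000",
    "http://app3.internal:8000",
]

# Because HTTP requests are stateless, any clone can serve any request,
# so a simple round-robin rotation is enough to spread CPU load.
_rotation = itertools.cycle(APP_SERVERS)

def pick_backend() -> str:
    """Return the application server that should handle the next request."""
    return next(_rotation)

if __name__ == "__main__":
    for _ in range(6):
        print(pick_backend())   # app1, app2, app3, app1, app2, app3
```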

With that, the CPU load problem is solved, and our architecture now looks like this:

Scaling I/O capacity

Memory reads are far faster than disk reads. Following the operating system's caching principle, the basic idea for speeding up data access is simple: adding more memory can dramatically reduce I/O load, which just means fitting your server with bigger memory modules. The basic policy is: only when the operating system's cache can no longer cope do we go further and distribute the load. In essence, spreading I/O load means spreading the data across many machines' cheap, modestly sized memory.
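A toy version of that caching idea, assuming a hypothetical read_record_from_disk function standing in for the slow database or file-system read, might look like this:

```python
from functools import lru_cache

# Hypothetical slow data source standing in for the database / file system.
def read_record_from_disk(record_id: int) -> str:
    # In reality this is the expensive part: a disk seek is ~10^5-10^6
    # times slower than a memory access.
    return f"record-{record_id}"

@lru_cache(maxsize=100_000)           # keep the hottest 100k records in RAM
def read_record(record_id: int) -> str:
    """Serve from memory when possible; fall back to disk on a cache miss."""
    return read_record_from_disk(record_id)

# First call hits the "disk"; repeated calls are served from memory.
read_record(42)
read_record(42)
print(read_record.cache_info())       # hits=1, misses=1, ...
```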

Spreading I/O load is much harder than spreading CPU load because of data synchronization. We will not discuss wholesale data replication and redundancy between database servers here. Since the data is too large to fit into one server's memory, we split it up, i.e. data partitioning (data compression can also help to a degree).

Web service requests follow access patterns. Crawlers and ordinary users behave differently: crawlers request pages from long ago, while ordinary users mostly visit the currently popular pages. So we can put the resource objects that ordinary users hit on one set of servers and the resource objects that crawlers hit on another.
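A minimal sketch of that split, with made-up pool names and a naive User-Agent check standing in for real traffic classification:

```python
# Hypothetical server pools: one tuned for hot/popular pages (ordinary users),
# one holding older, colder pages (crawlers tend to request those).
HOT_POOL = ["http://hot1.internal", "http://hot2.internal"]
COLD_POOL = ["http://cold1.internal"]

CRAWLER_MARKERS = ("Googlebot", "bingbot", "Baiduspider")

def choose_pool(user_agent: str) -> list[str]:
    """Send crawler traffic to the cold pool, everyone else to the hot pool."""
    if any(marker in user_agent for marker in CRAWLER_MARKERS):
        return COLD_POOL
    return HOT_POOL

print(choose_pool("Mozilla/5.0 (compatible; Googlebot/2.1)"))   # cold pool
print(choose_pool("Mozilla/5.0 (Windows NT 10.0) Chrome/120"))  # hot pool
```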

Even without an obvious access pattern, we can still split the data by partitioning, for example table partitioning. Say a MySQL database has a user ID table that has grown to 1.3 billion rows as the user base expanded. We sort by ID and split it into several ID tables, tens of millions of IDs per table, so that each individual table stays at the gigabyte level and fits in memory.
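A sketch of that range-based split, using the tens-of-millions-of-rows-per-table figure from the example above (the table naming scheme is made up):

```python
ROWS_PER_TABLE = 10_000_000   # "tens of millions of IDs per table" from the example

def table_for_user(user_id: int) -> str:
    """Return the name of the partition table that holds this user ID."""
    shard_index = user_id // ROWS_PER_TABLE
    return f"user_{shard_index:05d}"

print(table_for_user(42))              # user_00000
print(table_for_user(1_234_567_890))   # user_00123
```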

Either way, we need an index server to maintain the mapping between application servers and data servers, so that an application server knows which data server to query.
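A minimal sketch of what such an index server provides, with hypothetical host names and an in-process dictionary standing in for the real index service: given a key, it tells the application server which database server holds the data.

```python
# Hypothetical index kept by the index server: which database server
# holds which partition table.  In practice this mapping lives in its own
# service (or a small metadata database) that application servers query.
PARTITION_LOCATIONS = {
    "user_00000": "db1.internal",
    "user_00123": "db2.internal",
}

def locate(user_id: int) -> tuple[str, str]:
    """Resolve a user ID to (database server, table) via the index."""
    table = f"user_{user_id // 10_000_000:05d}"   # same range scheme as above
    host = PARTITION_LOCATIONS[table]             # index server lookup
    return host, table

print(locate(1_234_567_890))   # ('db2.internal', 'user_00123')
```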

So now our architecture is:

That wraps up this article. Looking back now at the system architecture diagram from the beginning, I believe you will find it easy to read.

Reposted from: Lighthouse Big Data
