On the Evolution of Web Site Architecture

Source: Internet
Author: User
Tags: database, join, failover, http redirect
Preface

We take a Java web application as an example and build a simple e-commerce system to see how such a system can evolve step by step. The system has three functions. User module: user registration and management. Commodity module: product display and management. Trading module: creating and managing transactions.

Phase I: building the web site on a single machine

At the beginning of a site, we often run all of our programs and software on a single machine. We use a servlet container such as Tomcat, Jetty, or JBoss, write the application with JSP/Servlet directly or with open-source frameworks such as Maven + Spring + Struts + Hibernate or Maven + Spring + Spring MVC + MyBatis, and finally choose a database management system to store the data, such as MySQL, SQL Server, or Oracle, connecting to and manipulating the database through JDBC. All of this software runs on the same machine, and once the application is up we have a small working system. The system structure at this point is as follows:

Phase II: separating the application server from the database

As the site goes live and the number of visits gradually increases, the server load slowly rises. Before the server becomes overloaded, we should get ready to improve the site's load capacity. If the code is already hard to optimize further, then, short of upgrading the single machine, adding machines is a good option: it effectively improves the system's load capacity and is cost-effective. What are the added machines for? At this point we can put the database and the web server on separate machines, which not only improves the load capacity of each machine but also improves disaster tolerance. The architecture with the application server separated from the database is shown in the following illustration:

Phase III: application server cluster

As the number of visits keeps increasing, a single application server can no longer meet demand. Assuming the database server is not yet under pressure, we can grow from one application server to two or more and spread user requests across them, which increases load capacity. There is no direct interaction between the application servers; each relies on the database to provide its service. A well-known piece of failover software is Keepalived. Keepalived works in software much like a Layer 3/4/7 switch; it is not tied to any specific product and can be used with many kinds of software. Combined with ipvsadm, Keepalived can also do load balancing, which makes it a very handy tool. Taking the case where we add one application server as an example, the resulting structure is as follows:

As the system evolves to this point, four questions appear:

1. Who forwards the user's request to a specific application server?
2. What forwarding algorithm is used?
3. How does the application server return its response to the user?
4. If successive requests may land on different servers, how do we keep the session consistent?

Let's look at how to solve these problems:

1. The first problem is load balancing. There are generally five kinds of solutions:

1. HTTP redirection. HTTP redirection forwards requests at the application layer. The user's request first reaches the HTTP-redirect load-balancing server, which chooses a real server according to some algorithm and redirects the user to it; after receiving the redirect, the user sends a second request to the actual server in the cluster.

Advantages: Simple.

Disadvantage: poor performance, since every request costs an extra round trip through the redirect server.
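As a rough sketch of the redirect approach just described (not code from the article; the backend list and the round-robin choice are assumptions for illustration), the balancer can be a servlet that answers every request with a 302 pointing at a real server:

```java
import java.io.IOException;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal sketch of HTTP-redirect load balancing: the balancer answers the
// first request with a 302 that points at one of the real servers.
public class RedirectBalancerServlet extends HttpServlet {
    // Hypothetical backend addresses; in practice these would come from configuration.
    private static final List<String> BACKENDS =
            List.of("http://app1.example.com", "http://app2.example.com");
    private final AtomicInteger counter = new AtomicInteger();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Simple round-robin choice; any scheduling algorithm could be plugged in here.
        int index = Math.floorMod(counter.getAndIncrement(), BACKENDS.size());
        String target = BACKENDS.get(index) + req.getRequestURI();
        resp.sendRedirect(target); // the client then issues a second request to the real server
    }
}
```

The sketch also makes the disadvantage visible: the client always talks to the balancer first and then to the real server, so every request takes two trips.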

2. DNS-based load balancing. When the user asks the DNS server to resolve the domain name, the DNS server directly returns the IP address of one of the real servers, so the load is balanced at resolution time.

Advantage: the work is handed to DNS, so we do not need to maintain a load-balancing server ourselves.

Disadvantage: when an application server goes down, DNS cannot be notified in time; and because control over DNS load balancing lies with the domain name service provider, the website cannot make further improvements or exercise more powerful management.

3. Reverse proxy server. When the user's request arrives at the reverse proxy server (it has already reached the web server room), the reverse proxy forwards it to a specific server according to some algorithm. Apache and Nginx are commonly used as reverse proxy servers.

Advantages: Simple deployment.

Disadvantage: the proxy server can become a performance bottleneck, especially when large files are uploaded.

4. IP-layer load balancing. After the request reaches the load balancer, the balancer forwards it by rewriting the request's destination IP address, thereby balancing the load.

Advantages: Better performance.

Disadvantage: The bandwidth of the load balancer becomes the bottleneck.

5. Data-link-layer load balancing. After the request arrives at the load balancer, the balancer forwards it by rewriting the request's MAC address. Unlike IP-layer load balancing, the response from the real server is returned directly to the client without passing back through the load balancer.

2. The second problem is the cluster scheduling algorithm. There are ten common scheduling algorithms.

1. RR, round-robin scheduling. As the name suggests, requests are distributed to the servers in turn.

Advantages: Simple to implement

Disadvantage: does not consider the processing power of each server.

2. WRR, weighted round-robin scheduling. We assign each server a weight; the load-balancing scheduler dispatches requests according to the weights, so the number of times a server is chosen is proportional to its weight.

Advantage: takes the servers' different processing capabilities into account.
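As a sketch of the weighted round-robin idea (the server addresses and weights are made up for illustration), one naive way is to expand each server into the rotation as many times as its weight:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal weighted round-robin: a server with weight 3 is picked three times
// as often as a server with weight 1.
public class WeightedRoundRobin {
    private final List<String> expanded = new ArrayList<>();
    private final AtomicInteger counter = new AtomicInteger();

    public void addServer(String server, int weight) {
        for (int i = 0; i < weight; i++) {
            expanded.add(server); // naive expansion; real balancers use smoother schemes
        }
    }

    public String next() {
        int index = Math.floorMod(counter.getAndIncrement(), expanded.size());
        return expanded.get(index);
    }

    public static void main(String[] args) {
        WeightedRoundRobin wrr = new WeightedRoundRobin();
        wrr.addServer("10.0.0.1", 3); // hypothetical addresses and weights
        wrr.addServer("10.0.0.2", 1);
        for (int i = 0; i < 8; i++) {
            System.out.println(wrr.next()); // 10.0.0.1 appears about three times as often
        }
    }
}
```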

3. SH, source address hashing: extract the user's IP, compute a key with a hash function, then look up the corresponding value in a static mapping table; that value is the target server's IP. If the target machine is overloaded, the lookup returns empty.

4. DH, destination address hashing: the same as above, except that the destination IP address is hashed instead.

Advantage: both of the above algorithms ensure that the same user is always routed to the same server.
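A minimal sketch of the source-address hashing idea (the server list is an assumption for the example): hash the client IP onto the server table, so the same IP always lands on the same machine.

```java
import java.util.List;

// Source-address hashing: the same client IP always maps to the same server,
// which is what keeps one user's requests on one machine.
public class SourceHashBalancer {
    private final List<String> servers;

    public SourceHashBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String pick(String clientIp) {
        int bucket = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(bucket);
    }

    public static void main(String[] args) {
        SourceHashBalancer lb =
                new SourceHashBalancer(List.of("10.0.0.1", "10.0.0.2", "10.0.0.3"));
        System.out.println(lb.pick("203.0.113.7")); // always the same server for this IP
        System.out.println(lb.pick("203.0.113.7"));
    }
}
```

Note that a static mapping like this reshuffles most keys whenever the server list changes, which is one reason cache clusters prefer the consistent hashing mentioned later in the article.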

5. LC, least connections. Requests are sent preferentially to the server with the fewest connections.

Advantage: makes the load on the servers in the cluster more even.

6. WLC, weighted least connections. Builds on LC by adding a weight for each server. The score is (active connections × 256 + inactive connections) ÷ weight; the server with the smallest value is selected first.

Advantage: requests can be allocated according to each server's capability.
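To make the WLC formula concrete, here is a small sketch (the server statistics are invented for illustration) that computes (active × 256 + inactive) ÷ weight for each server and picks the smallest:

```java
import java.util.Comparator;
import java.util.List;

// Weighted least connections: score = (active * 256 + inactive) / weight; lowest score wins.
public class WeightedLeastConnections {
    record Server(String name, int active, int inactive, int weight) {
        double score() {
            return (active * 256.0 + inactive) / weight;
        }
    }

    public static Server pick(List<Server> servers) {
        return servers.stream().min(Comparator.comparingDouble(Server::score)).orElseThrow();
    }

    public static void main(String[] args) {
        List<Server> servers = List.of(
                new Server("a", 10, 5, 4),  // (10*256 + 5) / 4 = 641.25
                new Server("b", 6, 2, 2));  // (6*256 + 2) / 2  = 769.0
        System.out.println(pick(servers).name()); // "a" has the lower score and is chosen
    }
}
```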

7. SED, shortest expected delay. SED is similar to WLC, except that inactive connections are not counted. The score is (active connections + 1) × 256 ÷ weight; again the server with the smallest value is selected first.

8. NQ, never queue. An improvement on SED. Think about when a request would "never queue": when a server has zero connections. If some server's connection count is 0, the balancer forwards the request to it directly, without going through the SED calculation.

9. LBLC, locality-based least connections. Based on the request's destination IP address, the balancer finds the server most recently used for that IP and forwards the request to it; if that server is overloaded, it falls back to the least-connections algorithm.

10. LBLCR, locality-based least connections with replication. Based on the request's destination IP address, the balancer finds the "server group" most recently used for that IP (note that this is a group, not a single server), then uses least connections to pick a specific server from the group and forwards the request. If that server is overloaded, the balancer uses least connections to pick a server from the cluster that is not in the group, adds it to the group, and forwards the request to it.

3. The third problem is the cluster forwarding mode. There are generally three solutions:

1. NAT: the load balancer receives the user's request and forwards it to a specific server; the server handles the request and returns the response to the balancer, which then returns it to the user.

2. DR (direct routing): the load balancer receives the user's request and forwards it to a specific server, and the server sends its response directly back to the user without passing through the balancer. Most systems can support this mode.

3. TUN (IP tunneling): as above, the response goes directly back to the user, but the request is forwarded through an IP tunnel, so the real servers must support the IP tunneling protocol, which makes cross-platform use harder.

4. The fourth problem is session consistency. There are generally four solutions:

1. Session Sticky. Session sticky means that all requests from the same user within one session are assigned to a fixed server, so we do not need to solve the problem of sharing sessions across servers. A common algorithm is ip_hash, i.e. the source-address hashing mentioned above.

Advantages: Simple to implement.

Disadvantage: The session disappears when the application server restarts.

2. Session Replication. Session replication copies sessions across the cluster so that every server holds the session data of all users.

Advantages: reduces the pressure on the load-balancing server; there is no need to use the ip_hash algorithm when forwarding requests.

Disadvantage: replication consumes bandwidth, and when traffic is heavy each server's copy of the session data takes up a lot of memory, which is wasteful.

3. Centralized session storage: session data is stored centrally, for example in a database, decoupling sessions from the application servers.

Advantage: compared with session replication, the pressure on bandwidth and memory within the cluster is much lower.

Disadvantage: You need to maintain the database where the session is stored.
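As a rough sketch of centralized session storage (assuming Redis through the Jedis client; the key naming scheme and TTL below are invented for the example), every application server reads and writes session attributes through the shared store instead of its own memory:

```java
import redis.clients.jedis.Jedis;

// Sketch of centralized session storage: session attributes live in Redis,
// so any application server can serve any request.
public class RedisSessionStore {
    private static final int SESSION_TTL_SECONDS = 1800; // 30-minute expiry, an assumed value

    private final Jedis jedis;

    public RedisSessionStore(String host, int port) {
        this.jedis = new Jedis(host, port);
    }

    public void put(String sessionId, String attribute, String value) {
        String key = "session:" + sessionId + ":" + attribute; // hypothetical key scheme
        jedis.setex(key, SESSION_TTL_SECONDS, value);
    }

    public String get(String sessionId, String attribute) {
        return jedis.get("session:" + sessionId + ":" + attribute);
    }
}
```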

4. Cookie-based. The session data is kept in a cookie, and on each request the browser tells the application server what its session contains. This likewise decouples sessions from the application servers.

Advantages: Simple to implement, basic maintenance-free.

Disadvantages: cookies have a length limit, lower security, and extra bandwidth consumption.
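A minimal sketch of the cookie-based approach (the cookie name and expiry are invented for the example): the server writes a piece of session state into a cookie and reads it back on the next request. In practice the value should at least be signed or encrypted, which this sketch omits.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Cookie-based sessions: the state travels with the browser instead of
// living on any particular application server.
public class CookieSessionHelper {

    public void saveUserId(HttpServletResponse resp, String userId) {
        Cookie cookie = new Cookie("session_user", userId); // hypothetical cookie name
        cookie.setHttpOnly(true);
        cookie.setMaxAge(1800); // 30 minutes, an assumed value
        resp.addCookie(cookie);
    }

    public String readUserId(HttpServletRequest req) {
        if (req.getCookies() == null) {
            return null;
        }
        for (Cookie cookie : req.getCookies()) {
            if ("session_user".equals(cookie.getName())) {
                return cookie.getValue();
            }
        }
        return null;
    }
}
```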

  

  It is worth mentioning that:

The load-balancing algorithms Nginx currently supports include WRR, SH (with consistent-hashing support), and fair (which I think essentially boils down to LC). Besides acting as a balancer, Nginx can also serve as a static resource server.

Keepalived + ipvsadm is more powerful; the algorithms currently supported are RR, WRR, LC, WLC, LBLC, SH, and DH.

The cluster modes Keepalived supports are NAT, DR, and TUN.

Nginx itself does not provide a session-synchronization solution, while Apache provides support for session sharing.

Well, after solving the above problems, the structure of the system is as follows:

Phase IV: database read/write separation

Up to now we have assumed that the database load is normal, but as traffic grows the database load also rises slowly. Someone might immediately think of doing the same as with the application servers: add a second database and balance the load across the two. But for the database it is not that simple. If we simply split it into two and send requests to machine A and machine B respectively, the two databases will obviously end up with inconsistent data. So in this situation we can first consider read/write separation. The database structure after read/write separation is as follows:

This change also brings two problems: synchronizing data between the master and slave databases, and how the application chooses its data source. Solutions: we can use MySQL's own master/slave mechanism to achieve master-slave replication, and adopt third-party database middleware such as Mycat for data-source selection. Mycat grew out of Cobar, Alibaba's open-source database middleware whose development was later discontinued; Mycat is a good domestic open-source middleware for sharding MySQL.
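As an application-side sketch of read/write separation (the JDBC URLs are placeholders, and a real setup would use a connection pool or middleware such as Mycat rather than DriverManager), writes go to the master and reads go to a replica:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Sketch of read/write separation at the application layer:
// writes always hit the master, reads hit a replica.
public class ReadWriteRouter {
    // Placeholder JDBC URLs; in a real system these come from configuration.
    private static final String MASTER_URL = "jdbc:mysql://master-db:3306/shop";
    private static final String SLAVE_URL  = "jdbc:mysql://slave-db:3306/shop";

    public Connection connectionFor(boolean isWrite, String user, String password) throws SQLException {
        String url = isWrite ? MASTER_URL : SLAVE_URL;
        return DriverManager.getConnection(url, user, password);
    }
}
```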

Phase V: using a search engine to ease the pressure on the read database

The database serving reads is often powerless when it comes to fuzzy search, and even read/write separation does not solve this problem. Take the e-commerce site in our example: published products are stored in the database, and the most commonly used function is searching for goods, especially finding a product by its title. For this kind of requirement we usually use a LIKE query, but that is very expensive. Here we can use a search engine's inverted index instead. A search engine has the following advantage: it can greatly improve query speed.
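To illustrate why an inverted index beats LIKE scans, here is a toy in-memory index (the tokenization and data are simplified assumptions; a real system would use a search engine such as Lucene, Elasticsearch, or Solr). It maps each word in a product title to the IDs of the products containing it, so a title search becomes a hash lookup instead of a full table scan:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy inverted index: word -> set of product IDs whose title contains it.
public class ProductTitleIndex {
    private final Map<String, Set<Long>> index = new HashMap<>();

    public void addProduct(long productId, String title) {
        // Extremely naive whitespace tokenization, just for illustration.
        for (String word : title.toLowerCase().split("\\s+")) {
            index.computeIfAbsent(word, w -> new HashSet<>()).add(productId);
        }
    }

    public Set<Long> search(String word) {
        return index.getOrDefault(word.toLowerCase(), Set.of());
    }

    public static void main(String[] args) {
        ProductTitleIndex idx = new ProductTitleIndex();
        idx.addProduct(1L, "Red running shoes");
        idx.addProduct(2L, "Blue running jacket");
        System.out.println(idx.search("running")); // both product IDs, no table scan needed
    }
}
```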

Introducing a search engine also brings costs: a good deal of maintenance work, since we need to implement the index-building process and design full and incremental build strategies to meet both non-real-time and real-time query requirements; and the search engine cluster itself must be maintained.

A search engine does not replace the database; it solves the "read" problem in certain scenarios. Whether to introduce one should be decided based on the needs of the overall system. The system structure after introducing the search engine is as follows:

Phase VI: using caches to ease the pressure on the read database

1. Caching at the application layer and the database layer. As traffic grows, more and more users access the same content; for this popular content there is no need to read it from the database every time. We can use caching technology, for example Google's open-source caching library Guava or Memcached as the application-layer cache, and Redis as the database-layer cache. In addition, in some scenarios a relational database is not well suited. For example, suppose I want to implement a "limit on wrong password attempts per day" feature. The idea is roughly that when a user fails to log in, we record the user's IP and the error count; but where should this data go? If we keep it in application memory, it will obviously take up too much memory; if we keep it in a relational database, we have to create a table, build the corresponding Java bean, write SQL, and so on. Analyzing what we want to store, it is nothing more than key:value data like {ip: errorNumber}. For this kind of data we can use a NoSQL database instead of a traditional relational one (see the sketch after the disadvantages below).

2. Page caching. Besides data caching there is also page caching, for example using HTML5 localStorage or cookies.

Advantages: reduces the pressure on the database and greatly improves access speed.

Disadvantages: cache servers need to be maintained, and coding complexity increases.
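For the "wrong password attempts per day" example above, a key-value store maps naturally onto the {ip: errorNumber} data. A rough sketch with the Jedis client (the key naming, limit, and TTL are assumptions for illustration):

```java
import redis.clients.jedis.Jedis;

// Sketch of a daily login-failure counter kept in Redis instead of a relational table.
public class LoginFailureLimiter {
    private static final int MAX_FAILURES_PER_DAY = 5;       // assumed limit
    private static final int ONE_DAY_SECONDS = 24 * 60 * 60;

    private final Jedis jedis;

    public LoginFailureLimiter(Jedis jedis) {
        this.jedis = jedis;
    }

    /** Records one failed attempt and returns true if the IP is now blocked. */
    public boolean recordFailure(String ip) {
        String key = "login:fail:" + ip;        // hypothetical key scheme
        long failures = jedis.incr(key);        // atomic increment; creates the key at 1
        if (failures == 1) {
            jedis.expire(key, ONE_DAY_SECONDS); // start the 24-hour window on the first failure
        }
        return failures > MAX_FAILURES_PER_DAY;
    }
}
```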

  It is worth mentioning that:

The scheduling algorithm for a cache cluster is different from those of the application servers and databases discussed above: it is best to use consistent hashing to increase the hit rate. We will not expand on this here; interested readers can consult the relevant material.
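A minimal consistent-hash ring as a sketch of that idea (the node names and virtual-node count are illustrative, and a real implementation would use a stronger hash than String.hashCode): keys map to the first node clockwise on the ring, so adding or removing a cache node only remaps a small share of keys, which keeps the hit rate high.

```java
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash ring for a cache cluster.
public class ConsistentHashRing {
    private static final int VIRTUAL_NODES = 100; // virtual nodes smooth the distribution
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public ConsistentHashRing(List<String> nodes) {
        for (String node : nodes) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.put(hash(node + "#" + i), node);
            }
        }
    }

    public String nodeFor(String key) {
        int h = hash(key);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        // Walk clockwise: first node at or after the key's hash, wrapping to the start.
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private static int hash(String s) {
        return s.hashCode() & 0x7fffffff; // keep it non-negative; a real ring would use a stronger hash
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing(List.of("cache1", "cache2", "cache3"));
        System.out.println(ring.nodeFor("product:42")); // the same key always maps to the same node
    }
}
```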

The structure after adding the cache is as follows:

Phase VII: vertical and horizontal database splitting

Our website has evolved to this point, yet the transaction, commodity, and user data are still in the same database. Even with caching and read/write separation, as the pressure on the database keeps increasing its bottleneck becomes more and more prominent. At this point we have two choices: vertical splitting and horizontal splitting of the data.

7.1 Vertical data splitting

Vertical splitting means putting different kinds of business data into different databases; in our example, separating the transaction, commodity, and user data. Advantages: it solves the pressure problem of keeping all business data in one database, and more optimization can be done according to the characteristics of each business.

Disadvantage: multiple databases need to be maintained.

Problems: the original cross-business transactions and cross-database joins must now be considered. Solutions: we should try to avoid cross-database transactions at the application layer and, where they are unavoidable, control them in code. We can also rely on third-party middleware such as the Mycat mentioned above; Mycat provides a rich set of cross-database join solutions, and the details can be found in the Mycat official documentation.
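To show what "controlling the join in code" can look like after a vertical split (the table and column names are invented for the example), the application queries the order database and the user database separately and stitches the results together in memory:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// After a vertical split there is no SQL join across the order and user databases,
// so the "join" happens in application code: two queries, merged in memory.
public class OrderWithUserLoader {

    public String loadOrderSummary(Connection orderDb, Connection userDb, long orderId) throws SQLException {
        long userId;
        String product;
        try (PreparedStatement ps = orderDb.prepareStatement(
                "SELECT user_id, product_name FROM orders WHERE id = ?")) { // hypothetical schema
            ps.setLong(1, orderId);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                userId = rs.getLong("user_id");
                product = rs.getString("product_name");
            }
        }
        String userName;
        try (PreparedStatement ps = userDb.prepareStatement(
                "SELECT name FROM users WHERE id = ?")) {
            ps.setLong(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                userName = rs.next() ? rs.getString("name") : "unknown";
            }
        }
        return product + " ordered by " + userName;
    }
}
```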

The structure after the vertical split is as follows:

7.2 Horizontal data splitting

Horizontal splitting splits the data of a single table across two or more databases. The reason for splitting horizontally is that the data volume or update volume of one business has reached the bottleneck of a single database, so the table is spread across two or more databases. Advantage: if we can handle the problems this raises (listed below), we can cope well with growth in data volume and write volume.

Problems: the application that accesses user information now has to solve SQL routing, because the user data is split across two databases and every data operation needs to know which database holds the data. Primary key handling also changes; for example, the original auto-increment field can no longer be used as-is. And paging becomes troublesome. Solutions: we can again use third-party middleware such as Mycat. Mycat parses our SQL with its SQL-parsing module and forwards the request to a specific database according to our configuration. For IDs we can guarantee uniqueness with UUIDs or a custom ID scheme. Mycat also provides rich paging support, for example paging within each database first and then merging the results into a final page.
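A minimal sketch of the routing and ID issues just described (the shard count and naming are assumptions; middleware such as Mycat handles this transparently): the shard is derived from the user ID, and new IDs come from UUIDs instead of a per-database auto-increment column.

```java
import java.util.UUID;

// Sketch of horizontal-split routing: which database holds a given user,
// and how to generate IDs that stay unique across shards.
public class UserShardRouter {
    private static final int SHARD_COUNT = 2; // assumed: user data split across two databases

    /** Returns the index of the database that stores this user. */
    public int shardFor(long userId) {
        return (int) Math.floorMod(userId, (long) SHARD_COUNT);
    }

    /** Auto-increment no longer works across shards; a UUID is one simple replacement. */
    public String newGlobalId() {
        return UUID.randomUUID().toString();
    }

    public static void main(String[] args) {
        UserShardRouter router = new UserShardRouter();
        System.out.println("user 1001 lives in db" + router.shardFor(1001L));
        System.out.println("new id: " + router.newGlobalId());
    }
}
```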

The structure after the horizontal split is as follows:

Phase VIII: application splitting

8.1 Splitting the application

As the business develops, there is more and more of it and the application keeps growing. We need to think about how to keep the application from becoming ever more bloated, which means splitting it from one application into two or more. In our example, we could split along user, commodity, and transaction lines into two subsystems: "user + commodity" and "user + transaction".

Structure after split:

Problem: after this split, some of the code may be duplicated; for example, both the commodity and the transaction subsystems need user information, so both keep similar code for manipulating users. How to ensure this code can be reused is a problem that needs to be solved. Solution: move to a service-oriented approach.

8.2 Moving to services

To solve the problems arising from splitting the application, we extract the shared functionality into services; this approach is called SOA (service-oriented architecture).

The system structure after adopting services:

Advantages: the same code is no longer scattered across different applications; these implementations live in the service centers, which makes the code easier to maintain. Database interaction is also placed in the service centers, so the "front-end" web applications can focus on interacting with the browser.

Problem: how to make remote service invocations. Solution: we can solve this by introducing the middleware described below.

Phase IX: introducing middleware

As the site continues to develop, our system may end up with sub-modules developed in different languages and subsystems deployed on different platforms. At that point we need a platform that delivers data reliably, independent of platform and language, that can make load balancing transparent, and that can collect and analyze calls as they happen, so that we can estimate how fast the site's traffic is growing and make predictions accordingly. Among the open-source options is Alibaba's Dubbo, a distributed service framework that can be paired with the open-source distributed coordination service ZooKeeper (an Apache project) to implement service registration and discovery.

The structure after introducing the middleware:

10. Summary

The evolution process above is just an example and is not suitable for every site; a real site's evolution is closely tied to its own business and its own problems, and there is no fixed pattern. Only careful analysis and continuous exploration will reveal the architecture that suits your own website.

  

If anything in this article is wrong, I hope you will point it out so I can correct it. Thanks.

Reference:

"Large Web site Technology architecture: Core Principles and Case studies"--Li Zhihui

"Large web system and Java Middleware Practice"--Zeng Xianjie

"MySQL performance Tuning and architecture Design"--Jenzhiang

"Keepalived Authority Guide"

"Mycat Authority Guide"

"Dubbo User's Guide"



Original address: Http://www.cnblogs.com/xiaoMzjm/p/5223799.html#!comments

