I recently read an article on the optimization and evolution of large-scale website architecture and learned a few things about how large sites are structured... Article source: http://www.cnblogs.com/hehaiyang/p/4458245.html
Large websites place high demands on everything from hardware to software: programming languages, databases, web servers, firewalls, and more. A portal site, for example, must cope with the high load and high concurrency that large sites face. That means a large portal needs at least the following to solve the problem:
- High-performance servers
- A high-performance database
- An efficient programming language
- A high-performance web container
The article offers the following solutions from the perspective of low cost, high performance, and high scalability (paraphrased from the original text):
1. Static HTML
The most efficient and least expensive page is pure static HTML, so we should serve our pages as static files wherever possible; the simplest method is often the most effective one. But a site with a lot of frequently updated content cannot do this all by hand, which is where the content management system (CMS) comes in. The news channels of the portals we visit every day, and often their other channels too, are managed and published through such a system. At its simplest, a CMS automatically generates static pages from the information entered into it; it can also provide channel management, permission management, automatic content capture, and other functions. For a large website, an efficient, manageable CMS is essential.
Beyond portals and other publishing-style sites, static generation is also a necessary performance technique for interactive community sites: rendering posts and articles to static pages in real time, and re-rendering them when they are updated, is a widely used strategy. Hodgepodge-style sites such as Mop use it, as does the NetEase community.
HTML static generation is also a caching policy in its own right. For content that the system queries from the database frequently but updates rarely, consider generating static HTML. A forum's public settings are a good example: mainstream forum software stores them in the database and lets administrators manage them in the back end, yet the front end reads them constantly while they almost never change. Regenerating a static copy whenever the back end updates them avoids a large number of database requests.
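The regenerate-on-update idea above can be sketched in a few lines of Python. This is a minimal illustration under my own assumptions (the settings dict stands in for a database row, and the file path and helper names are invented), not code from the article:

```python
from pathlib import Path

SETTINGS = {"site_name": "My Forum", "posts_per_page": 20}  # stands in for a DB row

def render_settings_html(settings: dict) -> str:
    # Turn the settings into a trivial static page.
    rows = "".join(f"<li>{k}: {v}</li>" for k, v in settings.items())
    return f"<html><body><ul>{rows}</ul></body></html>"

def publish_static(settings: dict, out_path: Path) -> None:
    # Called only when an admin saves new settings; every ordinary page
    # view then reads the static file and never touches the database.
    out_path.write_text(render_settings_html(settings), encoding="utf-8")

publish_static(SETTINGS, Path("/tmp/forum_settings.html"))
```

The key property is that the expensive work happens once per update rather than once per request.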
2. Image server separation
For a web server, whether Apache, IIS, or another container, images are the most resource-intensive content to serve, so we must separate images from pages. This is a strategy virtually every large site adopts: they run a dedicated image server, or even many of them. Such an architecture relieves the servers that handle page requests and ensures that image problems cannot crash the whole system. The application servers and image servers can then be tuned differently; for example, the image servers' Apache can be configured to support as few content types as possible and to load as few modules as possible, keeping resource consumption low and execution efficiency high.
3. Database cluster and library table hash
Large websites have complex applications that invariably use databases, and under heavy traffic the database bottleneck is quickly exposed: a single database soon cannot keep up with the application. At that point we need database clustering or database and table hashing.
For clustering, many databases ship their own solutions: Oracle, Sybase, and others have good ones, and the master/slave replication provided by the ubiquitous MySQL is a similar scheme. Whatever database you use, consult its corresponding clustering solution.
The clustering described above is constrained by the database type, cost, and scalability of the architecture, so we also need to improve the system from the application's perspective, where database and table hashing is the most common and effective approach. We separate the application into business or functional modules, each with its own database or tables, and then hash the busier pages or functions across smaller databases or tables by some policy; for example, the user table can be hashed across tables by user ID. This improves performance at low cost and scales well. The Sohu forum uses such a scheme: user, settings, and post data live in separate databases; posts and users are then hashed to a database and table by board and ID; and in the end a simple entry in a configuration file lets the system add a low-cost database at any time to boost performance.
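The hash-by-ID routing described above is easy to sketch. A minimal Python example, with shard counts and naming conventions of my own invention (the article gives no concrete scheme):

```python
NUM_TABLES = 16  # hypothetical table count; fixed up front, since changing it remaps keys

def user_table_for(user_id: int) -> str:
    # Route a user to one of N user tables by hashing the numeric ID.
    return f"user_{user_id % NUM_TABLES:02d}"

def post_location_for(board_id: int, post_id: int, num_dbs: int = 4) -> tuple:
    # Sohu-forum style: pick the database by board, then the table by post ID.
    return (f"forum_db_{board_id % num_dbs}", f"post_{post_id % NUM_TABLES:02d}")
```

Because the mapping is pure arithmetic, any application server can compute a row's location without a lookup service; the cost is that resizing the shard count later requires data migration.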
4. Cache
Everyone in technology has run into the word "cache", and caches are used in many places. Caching is just as important in website architecture and web development. Here we first describe the two most basic kinds of cache; advanced and distributed caches are described later.
Architectural caching: anyone familiar with Apache knows that it provides its own cache module, and that a Squid module can be added in front for caching; either can effectively improve Apache's response times.
Application-level caching: the memory cache available on Linux offers a common caching interface that can be used in web development. In Java development, for example, you can call a MemoryCache-style API to cache data and share it between components; some large communities use such a scheme. Each web language also has its own cache modules and methods: PHP has the PEAR Cache module, Java has many options, and .NET surely has its own, though I am less familiar with it.
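The cache-aside pattern these modules implement looks roughly like the following Python sketch. The `MemoryCache` class here is my own minimal in-process stand-in for a memcached-style client, not any real library's API:

```python
class MemoryCache:
    """Minimal in-process stand-in for a memcached-style client (get/set/delete)."""
    def __init__(self):
        self._store = {}
    def get(self, key):
        return self._store.get(key)
    def set(self, key, value):
        self._store[key] = value
    def delete(self, key):
        self._store.pop(key, None)

cache = MemoryCache()

def get_user(user_id, load_from_db):
    key = f"user:{user_id}"
    user = cache.get(key)
    if user is None:              # cache miss: hit the database once...
        user = load_from_db(user_id)
        cache.set(key, user)      # ...then serve later requests from memory
    return user
```

Whatever the language, the shape is the same: check the cache, fall back to the database on a miss, and populate the cache for the next caller.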
5. Mirroring
Mirroring is often used by large sites to improve performance and data safety. It smooths out the access-speed differences between network providers and regions; the gap between ChinaNet and the education network (CERNET), for instance, has prompted many sites to set up mirrors inside the education network, with data updated on a schedule or in real time. I won't go deep into mirroring techniques here; there are many professional off-the-shelf architectures and products to choose from, as well as inexpensive software approaches such as rsync on Linux.
6. Load Balancing
Load balancing is the ultimate answer for a large website facing high load and a large number of concurrent requests.
Load-balancing technology has been developing for many years, and there are many professional vendors and products to choose from. I have personally worked with a number of solutions; two of the architectures can serve as references.
Hardware four-layer switching
Layer-4 switching uses the header information of layer-3 and layer-4 packets to identify traffic flows by application, and distributes the traffic of an entire application range to the appropriate application server. A layer-4 switch acts like a virtual IP pointing at physical servers, and the services it forwards can follow a variety of protocols: HTTP, FTP, NFS, Telnet, and others. These operations sit on top of the physical servers and require complex load-balancing algorithms. In the IP world, the service type is determined mainly by the terminal's TCP or UDP port, and the application range in layer-4 switching is determined by the source and destination IP addresses together with the TCP and UDP ports.
Among hardware layer-4 switching products there are some well-known choices, such as Alteon and F5. They are expensive but worth the money, providing excellent performance and very flexible management. Back then, Yahoo China handled nearly 2,000 servers with just three or four Alteons.
Software four-layer switching
Once you understand how a hardware layer-4 switch works, software layer-4 switching based on the OSI model follows naturally: it implements the same principle with somewhat lower performance, but it comfortably handles a fair amount of load. Some would say the software approach is actually more flexible, with processing capacity that depends entirely on how well you know how to configure it.
For software layer-4 switching we can use LVS, common on Linux. LVS is the Linux Virtual Server: it provides real-time disaster response based on heartbeat lines, which improves system robustness, and its flexible virtual IP (VIP) configuration and management can satisfy a variety of application needs, which is essential for a distributed system.
A typical load-balancing strategy is to build a Squid cluster on top of software or hardware layer-4 switching. Many large websites, including search engines, adopt it: it offers low cost, high performance, and strong extensibility, and nodes can easily be added to or removed from the architecture at any time. I plan to set aside a dedicated post to discuss this structure in detail.
A large website may use every one of the methods above at the same time. My introduction here is fairly brief; many of the implementation details need familiarity and experience. Sometimes a single small Squid or Apache parameter setting has a large impact on system performance, and I hope we can discuss these together.
============= The evolution of large-scale website architecture design ================
There have already been some articles on the architecture evolution of large websites, such as those on LiveJournal and eBay, and they are well worth reading. But I feel they talk more about the result of each stage of evolution than about why the evolution was needed, and recently I have sensed that many readers struggle to understand why a website needs such complex technology. Hence the idea for this article: it walks through a fairly typical architecture evolution from an ordinary website to a large one, along with the knowledge that needs to be mastered at each stage. I hope it gives people entering the Internet industry a first rough map :) and I ask readers to point out the mistakes in the text, so that the article can truly serve as a starting point.
Architecture Evolution, Step One: Physically separate the web server and the database
In the beginning, some idea prompts you to build a website. At this point you may even be on a rented shared host, but since this article focuses only on architecture evolution, let's assume you already host a machine of your own with a certain amount of bandwidth. The site has something distinctive, attracts some visitors, and gradually you find the load climbing and responses slowing. What stands out is the interaction between the application and the database: when the application has problems the database tends to suffer, and when the database has problems the application suffers too. So you enter the first stage of evolution: physically separating the application and the database onto two machines. There is nothing technically novel here, but you find it genuinely works: the system regains its previous response speed, supports more traffic, and the database and application no longer drag each other down.
The diagram of the system after this step:
The knowledge this step involves:
This step of the evolution makes few demands on the technical knowledge system.
Architecture Evolution, Step Two: Add page caching
Before long, with more and more visitors, you find responses slowing again. Looking for the cause, you find too many database operations and fierce competition for database connections, hence the slow responses; but you cannot simply open more connections, or the database machine's load will spike. So you consider a caching mechanism to reduce the contention for connection resources and the read pressure on the database. You might choose Squid or a similar mechanism to cache the relatively static pages of the system (say, pages that update only every day or two); static page generation is of course also an option. Without modifying the program, this reduces the pressure on the web server and the contention for database connections. So you start using Squid to cache the relatively static pages.
The diagram of the system after this step:
The knowledge this step involves:
Front-end page caching technology, such as Squid; to use it well you also need to understand Squid's implementation and its cache-invalidation algorithms.
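For a front proxy like Squid to cache a page at all, the application must mark the page as cacheable. As a rough illustration, here is a Python sketch (the helper name and the 48-hour lifetime are my own assumptions, not from the article) of the response headers a semi-static page could send:

```python
from datetime import datetime, timedelta, timezone

def semi_static_headers(max_age_hours: int = 48) -> dict:
    # Mark a page that changes at most every couple of days as cacheable,
    # so a front proxy such as Squid can answer without touching the app server.
    expires = datetime.now(timezone.utc) + timedelta(hours=max_age_hours)
    return {
        "Cache-Control": f"public, max-age={max_age_hours * 3600}",
        "Expires": expires.strftime("%a, %d %b %Y %H:%M:%S GMT"),
    }
```

The proxy then serves repeat requests itself until the lifetime expires, which is exactly the "no program change needed" relief described above, provided the page really is safe to serve stale for that long.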
Architecture Evolution, Step Three: Add page fragment caching
With Squid caching in place, overall system speed does improve and the web server's load begins to fall, but as traffic grows the system starts to slow a little again. Having tasted the benefits of Squid-style dynamic caching, you start to wonder whether the relatively static parts of dynamic pages can be cached too. So you consider a page-fragment caching strategy such as ESI, and start using ESI to cache the relatively static fragments of dynamic pages.
The diagram of the system after this step:
The knowledge this step involves:
Page fragment caching technology, such as ESI; to use it well you also need to master ESI's implementation, and so on.
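ESI itself is an XML-style markup that the cache layer processes. As a rough illustration of the idea, here is a Python sketch (my own simplified stand-in, not a real ESI processor) that stitches cached fragments into the dynamic remainder of a page:

```python
import re

# Cached, relatively static fragments (the keys and contents are illustrative).
fragment_cache = {
    "header": "<div>site header</div>",
    "hot_posts": "<ul><li>post 1</li></ul>",
}

ESI_TAG = re.compile(r'<esi:include src="(\w+)"\s*/>')

def assemble(template: str) -> str:
    # Replace each include tag with its cached fragment; only the truly
    # dynamic remainder of the page is rendered per request.
    return ESI_TAG.sub(lambda m: fragment_cache.get(m.group(1), ""), template)

page = assemble('<html><esi:include src="header"/><p>hello, user 42</p></html>')
```

In a real deployment the substitution happens in the cache tier (e.g. a proxy that understands ESI), so the application renders only the per-user parts.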
Architecture Evolution, Step Four: Data caching
After ESI and similar techniques have improved the caching once more, the system's load really is further reduced, but again traffic grows and the system begins to slow. Looking around, you may find the system repeatedly fetching the same information, such as user data. You start to consider whether this data can be cached too, so you cache it in local memory. The change fully meets expectations: the system's response speed recovers and the database load drops considerably.
The diagram of the system after this step:
The knowledge this step involves:
Caching techniques, including map-style data structures, cache algorithms, and the implementation mechanism of whichever framework you choose.
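A local data cache like the one this step describes usually needs an expiry policy so stale user data eventually drops out. A minimal Python sketch (class name and TTL policy are my own assumptions; real frameworks add eviction, size limits, and locking):

```python
import time

class TTLCache:
    """Local in-memory cache with per-entry expiry, a stand-in for the
    data-caching layer described above (user profiles, etc.)."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:  # stale: drop it and report a miss
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)
```

The TTL bounds how stale a cached user record can be, which matters once the same data is cached on many machines (the synchronization problem the next steps run into).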
Architecture Evolution, Step Five: Add a web server
Before long you find that as traffic grows, the web server's load rises quite high at peak times. You begin to consider adding a second web server, which also addresses availability: with a single web server, a crash means the site cannot be used at all. Having weighed this, you decide to add one, and adding it raises some typical problems:
1. How do you distribute requests across the two machines? The usual candidates are Apache's own load-balancing scheme or a software load balancer such as LVS.
2. How do you keep state such as user sessions synchronized? Options include writing sessions to the database or to shared storage, using cookies, or a session-synchronization mechanism.
3. How do you keep cached data, such as the previously cached user data, synchronized? The usual answers are a cache-synchronization mechanism or a distributed cache.
4. How do features like file upload keep working? The usual answer is a shared file system or shared storage.
After solving these problems, you finally run two web servers, and the system recovers its former speed.
The diagram of the system after this step:
The knowledge this step involves:
Load-balancing technology (including but not limited to hardware load balancing, software load balancing, load-balancing algorithms, Linux forwarding protocols, and the implementation details of the chosen technology), primary/standby technology (including but not limited to ARP spoofing and Linux heartbeat), state and cache synchronization (including but not limited to cookies, the UDP protocol, state broadcasting, and the implementation details of the chosen cache-synchronization technology), shared file technology (including but not limited to NFS), and storage technology (including but not limited to storage devices).
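Of the session options in this step, "write the session to a shared store" is the simplest to sketch. In this Python illustration (my own; SQLite stands in for the shared database that both web servers can reach), a request can land on either machine and still find its session:

```python
import json
import sqlite3
import uuid

# Shared session store: in production this is a database or storage service
# reachable from every web server; here an in-memory SQLite DB stands in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (sid TEXT PRIMARY KEY, data TEXT)")

def save_session(data: dict) -> str:
    sid = uuid.uuid4().hex
    conn.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)",
                 (sid, json.dumps(data)))
    return sid  # handed back to the browser, e.g. in a cookie

def load_session(sid: str) -> dict:
    row = conn.execute("SELECT data FROM sessions WHERE sid = ?",
                       (sid,)).fetchone()
    return json.loads(row[0]) if row else {}
```

The trade-off is an extra round trip to the store per request, in exchange for web servers that hold no state and can be added or removed freely.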
Architecture Evolution, Step Six: Split the database
After enjoying a period of rapid traffic growth, you find the system slowing down again. Looking into it, you find fierce contention for database connections around writes and updates, which slows the whole system. What to do? The options are database clustering and a database-splitting (sub-library) policy. Cluster support in some databases is not very good, so splitting the database becomes the more common strategy. Splitting means modifying the original program, but once the change is made and the split is in place, the goal is reached: the system recovers, even faster than before.
The diagram of the system after this step:
The knowledge this step involves:
This step mainly requires a sensible division of the databases along business lines; there are few other specific technical demands.
At the same time, the growing data volume and the database split raise the bar for database design, tuning, and maintenance, so these skills are now in very high demand.
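The "division along business lines" this step calls for is, at its core, a routing table from module to database. A minimal Python sketch (module names and DSN strings are illustrative, not from the article):

```python
# Map each business module to its own database (vertical split).
MODULE_DBS = {
    "users":  "mysql://db-users/main",
    "posts":  "mysql://db-posts/main",
    "search": "mysql://db-search/main",
}

def db_for(module: str) -> str:
    # Sub-library routing: by business module, not by key hash.
    try:
        return MODULE_DBS[module]
    except KeyError:
        raise ValueError(f"no database configured for module {module!r}")
```

Unlike the hash-based splitting of later steps, this vertical split requires no data migration when it changes, only agreement on which module owns which tables.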
Architecture Evolution, Step Seven: Split tables, add a DAL, and move to a distributed cache
As the system keeps running, data volume grows significantly, and you find that queries are still somewhat slow even after the database split, so following the same idea you split the tables as well. This of course requires some program changes. You may then find the application itself having to care about the split rules, which gets rather complicated, prompting the idea of a general-purpose framework that handles data access across split databases and tables; in eBay's architecture this corresponds to the DAL. This part of the evolution takes a relatively long time, and the general framework may well arrive only after the table split is done. At this stage you may also discover problems with the earlier cache-synchronization scheme: the data volume is now too large to cache locally and synchronize afterwards, so you need a distributed cache scheme. After a round of surveying and agonizing, you finally move the bulk of the data cache into a distributed cache.
The diagram of the system after this step:
The knowledge this step involves:
Table splitting is again mostly a matter of business division; the techniques involved include dynamic hashing, consistent hashing, and so on.
The DAL involves more complex techniques, such as managing database connections (timeouts, exceptions), controlling database operations (timeouts, exceptions), and encapsulating the split-table rules.
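Consistent hashing, mentioned above, is what makes a distributed cache tolerate node changes. A compact Python sketch (my own illustration; real clients such as memcached libraries implement the same idea with tuned hash functions):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Consistent hashing: adding or removing a node remaps only a small
    fraction of keys, unlike naive `hash(key) % n` placement."""
    def __init__(self, nodes, replicas=100):
        ring = []
        for node in nodes:
            for i in range(replicas):
                # virtual nodes smooth out the key distribution
                ring.append((self._point(f"{node}#{i}"), node))
        ring.sort()
        self._points = [p for p, _ in ring]
        self._nodes = [n for _, n in ring]

    @staticmethod
    def _point(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # A key belongs to the first virtual node clockwise from its hash.
        idx = bisect.bisect(self._points, self._point(key)) % len(self._points)
        return self._nodes[idx]
```

Keys that were mapped to surviving nodes keep their placement when a node leaves the ring, which is exactly why this beats `hash % n` for a cache tier that grows and shrinks.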
Architecture Evolution, Step Eight: Add more web servers
After the database split, the pressure on the database drops to a comfortable level, and you settle into the happy life of watching daily traffic surge. Then one day you find access starting to slow. You check the database first: its load is completely normal. Then you check the web servers and find Apache queueing many requests, even though the application server handles each one fairly quickly; apparently the sheer number of requests is forcing them to wait, slowing responses. This is fine; generally the site has some money by now, so you add web servers. In the process, a few challenges may appear:
1. Apache's soft load balancing, or a software balancer like LVS, can no longer schedule the enormous web traffic (connection counts, network throughput, and so on). If funds allow, the plan is to buy hardware load balancers such as F5, NetScaler, Alteon, and the like; if not, the plan is to divide the application logically into categories and spread them across several software-load-balanced clusters.
2. Some of the existing schemes for state synchronization and file sharing may become bottlenecks and need improvement; perhaps this is when a distributed file system tailored to the site's business requirements gets written.
After this work, you enter a seemingly perfect era of unlimited expansion: whenever traffic grows, the answer is simply to add more web servers.
The diagram of the system after this step:
The knowledge this step involves:
As the number of machines keeps growing, the volume of data keeps growing, and the availability requirements keep rising, this step demands a deeper understanding of the technologies in use, along with products more and more customized to the site's needs.
Architecture Evolution, Step Nine: Data read/write separation and inexpensive storage solutions
Then one day you find this perfect era ending too: the database nightmare reappears. So many web servers have been added that database connection resources run short even with the databases and tables split. Analyzing the database's load, you may find a very high read-to-write ratio, which naturally suggests a read/write separation scheme; implementing it, of course, is not easy. You may also find that keeping some data in the database is wasteful, or ties up too many database resources. So the architecture at this stage may evolve to implement read/write separation and to write some cheaper storage schemes, such as Bigtable-style systems.
The diagram of the system after this step:
The knowledge this step involves:
Read/write separation requires a deep grasp and understanding of database replication and standby strategies, together with the ability to implement parts of it yourself.
Inexpensive storage schemes require a deep mastery and understanding of how the OS stores files, and of how your implementation language handles files.
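The routing half of read/write separation can be sketched simply. This Python illustration (my own; real DALs also handle replication lag, transactions, and failover) sends writes to the primary and spreads reads across replicas:

```python
import itertools

class ReadWriteRouter:
    """Send writes to the primary, spread reads across replicas round-robin."""
    WRITE_VERBS = {"INSERT", "UPDATE", "DELETE", "REPLACE"}

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def connection_for(self, sql: str):
        # Crude classification by the leading SQL verb; a real router must
        # also pin reads-after-writes to the primary to dodge replica lag.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in self.WRITE_VERBS:
            return self.primary
        return next(self._replicas)
```

This pays off precisely in the high read-to-write ratio case described above: the read traffic, which dominates, scales out with replica count.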
Architecture Evolution, Step Ten: Enter the era of large-scale distributed applications and the dream era of inexpensive server clusters
After the long and painful process above, the perfect era finally arrives: continually adding web servers supports ever higher traffic. For a large site, popularity is unquestionably important, and as popularity climbs, demand for features starts to grow explosively. Suddenly you notice that the web application deployed on each server is enormous. With several teams changing it, it is quite inconvenient; reusability is poor, and every team ends up duplicating work to some degree. Deployment and maintenance are also painful, because copying and starting the huge application package on N machines takes a lot of time, and problems are hard to trace. Worse, a bug in one application can take the whole site down. There are other factors too, such as the impossibility of targeted tuning when one machine's deployment does everything. From this analysis comes the decision to split the system along lines of responsibility, and a large distributed application is born. This step usually takes a long time, because there are real challenges:
1. After the split you need a high-performance, stable communication framework that supports many different communication styles and remote-call mechanisms.
2. Splitting a huge application takes a long time and requires organizing the business and controlling the dependencies between systems.
3. How do you operate this huge distributed application well (dependency management, health management, error tracing, tuning, monitoring and alerting, and so on)?
After this step, the architecture of the system enters a relatively stable phase. You can begin using large numbers of inexpensive machines to support enormous traffic and data, and, combining this architecture with the experience of all the previous evolution, adopt various other methods to support ever-growing traffic.
The diagram of the system after this step:
The knowledge this step involves:
This step involves a great deal of knowledge: it requires deep understanding and mastery of communication, remote calls, messaging mechanisms, and so on, with a clear grasp extending from the theory down to the hardware level, the operating-system level, and the implementation in whichever language is used.
Operations also involves a great deal of knowledge: in most cases you need to master distributed parallel computing, reporting, monitoring technology, rule-based policies, and so on.
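At the heart of the communication framework this step requires is a serialize-dispatch-respond loop. A toy Python dispatcher of my own (no real networking; in production the request arrives over a socket, and the framework adds timeouts, retries, and tracing):

```python
import json

class RpcServer:
    """Toy stand-in for the communication framework a service split requires:
    each service registers named procedures; callers send serialized requests."""
    def __init__(self):
        self._procs = {}

    def register(self, name, fn):
        self._procs[name] = fn

    def handle(self, raw_request: str) -> str:
        req = json.loads(raw_request)  # in real life this arrives over the wire
        try:
            result = self._procs[req["method"]](*req.get("params", []))
            return json.dumps({"result": result, "error": None})
        except Exception as exc:  # error tracing matters at this scale
            return json.dumps({"result": None, "error": str(exc)})
```

Even this toy shows why the framework is hard to get right: every failure mode (unknown method, bad params, a crashing handler) must come back as a structured error rather than taking the caller down with it.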
Summing up really isn't that hard: the classic evolution of a whole website architecture is more or less as above, though the solution at each step, and the steps themselves, may differ, and because every site's business is different there will be different specialized technical needs. This post explains the evolution mainly from an architectural perspective; of course many technologies are not mentioned here, such as database clustering, data mining, and search, and in the real evolution a site will also rely on things like upgrading hardware, the network environment, and the operating system, and on CDN mirroring, to support larger traffic, so actual development will differ in many ways. And a large website involves far more than this: security, operation, operations and maintenance, services, storage, and so on.
Through this article I realized that building a secure, high-performance, good website is really not easy.