CAP Design Principles and Strategy Summary for Mass-Scale Services (repost)

Source: Internet
Author: User
Tags: serialization

The defining feature of an Internet service is that it faces a massive user base: how do you provide stable service at that scale? Here I summarize some experience accumulated over the years, along with ideas encountered along the way.
I. Principles
1. The CAP principle for Web services
CAP refers to three properties: Consistency, Availability, and Partition tolerance. The CAP principle states that of these three, at most two can be achieved at the same time; all three cannot be satisfied simultaneously. For mass-scale services, this is generally treated as a commonly accepted benchmark guideline.
The following are the definitions of CAP in the context of Web services:
- Consistency: analogous to database consistency. Every read must reflect the most recently updated data.
- Availability: every request can complete successfully and receive a response.
- Partition tolerance: a fault-tolerance requirement. Even when part of the service fails, the partitions holding replicated data can still keep part of the service running. It can be loosely understood as being able to add or remove machines easily to achieve higher scalability, i.e., the so-called horizontal scaling capability.

For mass-scale distributed service design, partition tolerance is basically the first requirement, so we must make trade-offs between consistency and availability depending on the business. For some businesses, such as Alipay or Tenpay, consistency is the first consideration; even delayed consistency is unacceptable, and availability is sacrificed to guarantee consistency. For others, such as review information on Taobao or Paipai transactions, ordinary users can accept delayed consistency; availability can take priority, with eventual consistency (for example, via some kind of reconciliation mechanism) guaranteeing that the data converges. And some applications do not even require eventual consistency, only approximate consistency, such as stealing vegetables in the Qzone farm game.
Depending on the business requirements of our application, choosing an appropriate consistency level lets us better guarantee the system's partition tolerance and availability.
2. Flexible availability
For mass-scale distributed service design, we must realize that everything is unreliable. To build a reliable application in an unreliable environment, one of the most important things is to keep the system flexible.
1) Unreliable environment
We may have grown used to a remote service failing to respond, a webserver suddenly hanging after running for a while, or a database collapsing under one more SQL statement as load increases. But disk failures, power outages, and fiber cuts sound incredible, until a massive service needs tens of thousands of servers and dozens of data centers across the country, spanning the Telecom, Netcom, and education networks; then everything that sounds incredible becomes the norm. Everything is unreliable, and the only reliable thing is unreliability itself.
2) Division of service levels
We should recognize that providing perfect service in this unreliable environment is itself a myth; if not impossible, it is at least prohibitively expensive. So when problems occur (when the environment becomes unreliable), we have to make trade-offs and choose to provide the services users care about most. While this may sound like degradation (or at least imperfection), it meets most user needs in an acceptable way. For example, when network bandwidth cannot give users the best experience and expansion cannot be achieved in the short term, choosing to degrade some of the less important parts of the experience is the better choice.
In mass-scale Internet design, services are graded; when the system becomes unreliable, the high-priority services are served first.
3) Respond as soon as possible
The patience of Internet users is very limited: if a page takes more than 3 seconds to appear, most users' first choice is probably to close the browser. When building flexible, available Internet services, response time is most often the first priority. Again: the environment is unreliable. When we cannot fetch data from a remote service promptly, when the database is as slow as a snail, the user may have already closed the page while the backend is still grinding away, and processing the returned data is just wasted effort. Facing Internet users, responsiveness is life.
II. Strategies
How can we provide higher-quality service for our applications? Below is a summary of strategies used or observed in daily development:

1. Data sharding
A massive service implies massive users and massive user data. As we all know, even with a powerful database on a powerful server, hundreds of millions of rows in a single table are enough to make a simple SQL statement crawl (even at the millions or tens-of-millions scale, service requirements cannot be met without an appropriate strategy). When handling data at the tens-of-millions to billions scale, what we basically all think of is data sharding: cut the data into multiple data sets and spread multiple tables across multiple databases (for example, partition user data by user ID into 4 databases with 100 tables each, 400 tables in total), so that each table is small enough for our SQL statements to execute quickly. How to cut is determined by the specific business.
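As a concrete illustration of the "4 databases x 100 tables" scheme above, here is a minimal routing sketch in Java; the class and method names (UserShardRouter, locate) are hypothetical, not from the original post:

```java
// Hypothetical sketch: route a user ID to one of 4 databases x 100 tables.
public final class UserShardRouter {
    private static final int DB_COUNT = 4;
    private static final int TABLES_PER_DB = 100;

    public record ShardLocation(int dbIndex, String tableName) {}

    public static ShardLocation locate(long userId) {
        // 400 logical shards in total: pick the shard, then map it to a
        // physical database and a table within that database.
        int shard = (int) (userId % (DB_COUNT * TABLES_PER_DB)); // 0..399
        int dbIndex = shard / TABLES_PER_DB;                     // 0..3
        int tableIndex = shard % TABLES_PER_DB;                  // 0..99
        return new ShardLocation(dbIndex, "user_" + tableIndex);
    }

    public static void main(String[] args) {
        ShardLocation loc = locate(1234567L);
        System.out.println("db" + loc.dbIndex() + "." + loc.tableName()); // db1.user_67
    }
}
```

Note that plain modulo routing like this makes later resharding painful; hashing the ID first, or routing through a lookup table, are common variations.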
Of course, we have to realize that sharding is not free; it means compromises. For example, cross-shard queries, joins, and sorting over the full data set become very difficult, the database client code becomes more complex, and ensuring data consistency can become hard in some cases. Sharding is no panacea: whether to shard, how to shard, and what approximations the business accepts in exchange are things the business design needs to consider carefully.
2. Cache
Experience tells us that most bottlenecks in a system concentrate on I/O and the database, and common sense tells us that network and memory are faster than disk I/O and the database by an order of magnitude or more. For a massive service, a cache is basically a must. A distributed cache is one choice; depending on our needs we can choose memcached (non-persistent), MemcacheDB/Tokyo Tyrant (persistent), or even build our own cache platform.
When using a cache, the following points need careful consideration:
- Choose an appropriate cache distribution algorithm. We quickly find that using modulo arithmetic to determine cache placement is unreliable, because removing a bad node or expanding the node count causes the cache hit rate to plummet in the short term, and the resulting load spike can even crash the system. Choosing an appropriate distribution algorithm, such as stable consistent hashing (sketched below), is very important.
- Cache management: configuring a separate cache for each application is usually not a good idea. Instead, we can run cache instances on any machine with free memory across a large fleet, divide the instances into groups, each group forming a complete cache pool, and let multiple applications share one pool.
- A sensible serialization format: use a compact serialization scheme to store cached data, keeping redundant data to a minimum. On one hand this maximizes the cache's storage utilization; on the other hand it makes capacity estimation more convenient. In addition, as the business is upgraded the stored data format will inevitably change, so serialization must also pay attention to compatibility, so that clients using the new format can still read data in the old format.
- Capacity estimation: before going live, estimate the capacity your application is likely to use, so that an appropriately sized cache pool can be allocated and there is a reference for later expansion.
- Capacity monitoring: cache hit rate, cache storage saturation, the number of client socket connections, and so on. Collecting and monitoring these data provides support for business adjustments and capacity expansion.
- Choose which layer to cache at, e.g. the data layer, the application layer, or the web layer. The closer the cache is to the data, the more general it is and the easier it is to keep cached data consistent, but the longer the processing path; the closer to the user, the less general the cache and the harder consistency becomes, but the faster the response. Selecting the appropriate cache layer according to the characteristics of the business is very important. In general, coarse-grained data that rarely changes, that users are not sensitive to (i.e. some staleness is tolerable), and that is not per-user is cached closest to the user, such as image caches and top-N lists, while fine-grained, relatively frequently changing, user-sensitive or per-user data, such as user profiles and relationship chains, is placed close to the data.
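As a minimal sketch of the consistent hashing idea mentioned in the first point (the use of MD5 and the virtual-node count are illustrative assumptions):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Minimal consistent-hash ring: adding or removing a node only remaps the
// keys between it and its predecessor, instead of reshuffling everything
// the way "hash(key) % nodeCount" does.
public final class ConsistentHashRing {
    private static final int VIRTUAL_NODES = 160; // replicas per node smooth the distribution
    private final TreeMap<Long, String> ring = new TreeMap<>();

    public void addNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.remove(hash(node + "#" + i));
    }

    public String nodeFor(String key) {
        // first virtual node clockwise from the key's position, wrapping around
        Map.Entry<Long, String> e = ring.ceilingEntry(hash(key));
        return (e != null ? e : ring.firstEntry()).getValue();
    }

    private static long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xffL); // first 8 bytes as ring position
            return h;
        } catch (java.security.NoSuchAlgorithmException ex) {
            throw new IllegalStateException(ex); // MD5 is always available on the JVM
        }
    }
}
```

With 160 virtual nodes per server, removing one server redistributes only its own keys across the survivors, so the hit-rate dip is proportional to that server's share rather than near-total.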

3. Service Cluster
For a massive service, horizontal scalability is basically the first requirement. In my experience, a service cluster needs to consider the following factors:
- Layering: layer the system appropriately, placing parts with different system-resource requirements in reasonable logical/physical layers. For a simple business, a client layer, a webserver layer, and a DB layer are generally enough; a more complex business may be divided into a client layer, a webserver layer, a business layer, a data-access layer (the business layer and data-access layer often sit on the same physical tier), and a data-storage layer (DB). Too many layers make the processing path longer, but the system's flexibility and deployability become correspondingly stronger.
- Fine-grained functions: divide functionality at a fine granularity and deploy each function in a separate process. On one hand this is conducive to error isolation; on the other hand, when one function changes, it avoids impact on other functions.
- Separate fast from slow, and separate by priority, when deploying: different services have different characteristics. Some are fast and some are slow, and slow services can clog an entire service; some have higher priority (such as interactive services a user directly notices) and some lower (such as logging and email), and low-priority services may block high-priority ones. Deploying different features separately so they do not interfere with one another is a common practice.
- Deploy by data set: if every layer may access all of the next layer's service interfaces, there are several serious drawbacks. First, as deployments grow, the next layer has to accept an enormous number of socket connections. Second, we may have to deploy services across machine rooms in different data centers (DCs), and even over multi-gigabit fiber the cross-room traffic becomes unacceptable. Third, every service node facing the full data set makes effective internal optimizations hard. Fourth, grayscale release and deployment can then only be controlled at the code level. Once deployment reaches a certain order of magnitude, cut the data horizontally into multiple sets of services, each set serving a specific data set; at deployment time, each set can then live in its own independent data center (DC).
- Statelessness: state causes endless problems for horizontal scaling. Where there is little state information, you can choose to keep all of it on the sender's side; where there is a lot, consider maintaining a unified session center.
- Choose an appropriate load balancer and load-balancing strategy: for example, LVS for L4 load balancing, nginx for L7, or even dedicated load-balancing hardware such as F5 (L4). For balancers working at L7, selecting an appropriate balancing strategy is also important; generally, always routing the same user to the same back-end server is a good approach (a sketch follows this list).
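A minimal sketch of the "same user always lands on the same back-end" strategy from the last point (backend addresses and class names are illustrative; real L7 balancers such as nginx implement this with hash-based upstream selection):

```java
import java.util.List;

// Session-affinity picker: hash the user identifier to a fixed backend so
// repeated requests from the same user reach the same server.
public final class AffinityBalancer {
    private final List<String> backends;

    public AffinityBalancer(List<String> backends) {
        this.backends = backends;
    }

    public String pick(String userId) {
        // floorMod keeps the index non-negative even for negative hash codes
        return backends.get(Math.floorMod(userId.hashCode(), backends.size()));
    }

    public static void main(String[] args) {
        AffinityBalancer lb = new AffinityBalancer(List.of("10.0.0.1:8080", "10.0.0.2:8080"));
        System.out.println(lb.pick("user42")); // same user, same backend, every time
    }
}
```

If the backend list changes often, replacing the plain modulo with the consistent-hash ring shown earlier keeps most users pinned to their old server.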

4. Grayscale Publishing
When the system's user base grows to a certain scale, releasing even a small feature can have a very large impact. At that point, opening a feature to a small number of users first and slowly widening it to the full user base is the prudent approach; grayscale publishing prevents a feature's bugs from causing large-scale failures. Below are some common grayscale-control strategies (a combined sketch follows the list):
- Whitelist control: only users on the whitelist are allowed access. Generally used in the alpha phase of a new feature, opening access only to invited users.
- Access-threshold control: common examples are the invitation codes Gmail used when it first came out, or the level-X yellow-diamond requirement in the early phase of QQ Farm. Also generally used in the beta phase of a new feature, lowering the threshold step by step, which avoids collapsing the whole system at the start because of latent defects or insufficient capacity.
- Opening by data set: generally used when developing new features on top of a mature function, to prevent errors in the new feature from having a wide impact.
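A minimal feature-gate sketch combining the three strategies above (all names and thresholds are hypothetical):

```java
import java.util.Set;

// Decides whether a user sees a new feature: whitelist first (alpha),
// then an access threshold such as account level (beta), then a slowly
// growing share of the data sets (rollout).
public final class GrayscaleGate {
    private final Set<Long> whitelist;
    private final int minLevel;      // threshold control
    private final int openSetCount;  // how many of 100 data sets are open

    public GrayscaleGate(Set<Long> whitelist, int minLevel, int openSetCount) {
        this.whitelist = whitelist;
        this.minLevel = minLevel;
        this.openSetCount = openSetCount;
    }

    public boolean isOpen(long userId, int userLevel) {
        if (whitelist.contains(userId)) return true;  // invited alpha users
        if (userLevel < minLevel) return false;       // below the access threshold
        return (userId % 100) < openSetCount;         // open data sets only
    }
}
```

Widening the rollout is then just raising openSetCount (or lowering minLevel) in configuration, with no redeployment.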

5. Design your own communication protocol: binary, upward/downward compatible
As traffic grows during the system's steady operation, you slowly discover that some protocols that seemed to work fine become unacceptable. With an XML-based protocol such as XML-RPC, for example, you will find XML parsing and packing becoming unacceptable; even a near-binary protocol such as Hessian carries extra field-description information (as I understand it, Hessian resembles a map structure and includes each field's name) plus text-based HTTP headers that make it inefficient. Designing a good, efficient binary protocol for internal communication at the right time, rather than as a last resort, may well be the right move. In my experience, designing your own communication protocol requires attention to the following points:
- Protocol compactness: otherwise, sooner or later the wasted space will make you suffer bitterly.
- Protocol extensibility: sooner or later you will find the old protocol format cannot accommodate new business requirements, so reserving room early is very important. Basically, follow the common conventions: a magic number (so invalid requests can be discarded quickly), protocol version information, a protocol header, a protocol body, and the length of each part (including structure information for embedded objects); see the frame sketch after this list.
- Upward and downward compatibility: once a feature is called at large scale and a new version is released, it is basically unacceptable to force all clients to upgrade at the same time, so compatibility must be considered at design time.
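A minimal sketch of such a binary frame in Java, following the conventions listed above; the exact field layout is an illustrative assumption:

```java
import java.nio.ByteBuffer;

// Illustrative frame: magic number, version, and body length up front, so
// a server can cheaply reject garbage and dispatch on protocol version.
public final class Frame {
    static final int MAGIC = 0xCAFEBABE;

    public static ByteBuffer encode(short version, byte[] body) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 2 + 4 + body.length);
        buf.putInt(MAGIC);       // fast rejection of invalid requests
        buf.putShort(version);   // lets old and new clients coexist
        buf.putInt(body.length); // explicit body length
        buf.put(body);
        buf.flip();
        return buf;
    }

    public static byte[] decode(ByteBuffer buf) {
        if (buf.getInt() != MAGIC) throw new IllegalArgumentException("bad magic");
        short version = buf.getShort(); // a real server dispatches on this
        byte[] body = new byte[buf.getInt()];
        buf.get(body);
        return body;
    }
}
```

Compared with XML-RPC or Hessian, the only per-request overhead here is 10 bytes of header; field names never travel on the wire.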

6. Design your own application server
Once you need to design your own communication protocol, building your own application server becomes the logical next step. Below are common problems you need to handle when developing an application server:
- Overload protection: when one part of a system runs into trouble, the most common scenario is that the load on the whole system explodes, producing an avalanche effect. When designing an application server you must pay attention to overload protection: when it is predictable that a request cannot be processed in time (e.g. the backlog is too long or queuing time is excessive), discarding it is the wise choice; the TCP backlog parameter is a typical example (a bounded-queue sketch follows this list).
- Frequency control: even another application within the same system may, through a badly behaved program, consume all of the service's resources, so the server side must protect itself; rate limiting is one of the more important protections.
- Asynchronous processing / unacknowledged returns: for some businesses it is enough to ensure a request will be processed; the client does not care when, as long as it is eventually handled, and even occasional loss is not very serious, email being one example. Such requests should be answered quickly to avoid occupying valuable connection resources, then placed on an asynchronous queue and processed at leisure.
- Self-monitoring: the application server itself should have self-monitoring capabilities, such as collecting performance data and allowing its internal state (queue lengths, processing threads, waiting threads, etc.) to be queried from outside.
- Early warning: when processing slows down, queues grow too long, requests are being discarded, or concurrent requests surge, the application server should be able to raise alerts so that problems are handled quickly.
- Modularity, loose coupling between modules, and separation of mechanism from policy: if you don't want to face all of the complexity at once, or don't want every small change to force a full regression test, modularity is a good choice. Cut modules so that each keeps a reasonable degree of complexity; the application server here, say, could be divided into request receipt/management/response, protocol parsing, business processing, data access, and monitoring/alerting modules. Keep interaction between modules loosely coupled; for example, request receipt and business processing can communicate through a blocking queue to reduce coupling. Also note the separation of mechanism and policy: the protocol may change, and the way performance data is collected or alerts are raised may change; if mechanism and policy are separated in advance, handling policy changes becomes much simpler.
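A minimal overload-protection sketch using a bounded queue, as described in the first point (pool and queue sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Bounded worker pool: once the backlog fills, new requests are rejected
// immediately instead of piling up and dragging the whole server down.
public final class BoundedServer {
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 8,                                    // fixed number of workers
            0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(1000),          // bounded backlog
            new ThreadPoolExecutor.AbortPolicy());   // discard on overload

    public boolean submit(Runnable request) {
        try {
            pool.execute(request);
            return true;
        } catch (RejectedExecutionException overloaded) {
            return false; // fail fast: tell the client to back off
        }
    }
}
```

The blocking queue between receipt and processing doubles as the loose-coupling mechanism mentioned in the modularity point.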

7. Client
Many applications act as clients that invoke other services. Below are some issues to note as a client:
- The service is unreliable: one thing a client should always remember is that the remote service is never reliable, so the client must protect itself and degrade gracefully when the remote service is unreachable.
- Timeout protection: as mentioned above, the remote service is always unreliable; you can never predict when it will respond, or whether it will respond at all (for example, when the remote host is down). The caller must implement timeout protection: for an unreachable host in a Linux environment, it can sometimes take several minutes before the TCP layer finally tells the client the service cannot be reached.
- Concurrency/asynchrony: to speed up the response, data that can be fetched in parallel should always be fetched in parallel. For synchronous interfaces we cannot control, such as reading a database or a synchronous cache read, multithreaded parallel fetching is a workable if imperfect option (sketched after this list). For servers built on your own application server, an asynchronous client interface is critical: send all requests out and wait for the returns with asynchronous I/O, or go even further and be asynchronous everywhere; when the client is integrated with the application server, release the thread/process resources as soon as the request is sent, and trigger a callback for subsequent processing when the matching response comes back.
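A minimal sketch of timeout protection combined with parallel fetching (fetchUser and fetchOrders are hypothetical stand-ins for remote calls):

```java
import java.util.concurrent.*;

public final class ParallelClient {
    // Stand-ins for real remote calls.
    static String fetchUser(long id)   { return "user-" + id; }
    static String fetchOrders(long id) { return "orders-of-" + id; }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(16);
        // issue both remote calls in parallel
        CompletableFuture<String> user   = CompletableFuture.supplyAsync(() -> fetchUser(42), pool);
        CompletableFuture<String> orders = CompletableFuture.supplyAsync(() -> fetchOrders(42), pool);
        try {
            // never wait longer than 200 ms for the combined result
            String page = user.thenCombine(orders, (u, o) -> u + " / " + o)
                              .get(200, TimeUnit.MILLISECONDS);
            System.out.println(page);
        } catch (TimeoutException slowRemote) {
            System.out.println("degraded page: remote too slow"); // flexible availability in action
        } finally {
            pool.shutdown();
        }
    }
}
```

Note that the TCP-level connect timeout mentioned above must still be set on the underlying socket; the future-level timeout only bounds how long this caller waits.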

8. Monitoring and alerting
Basically, we are all used to monitoring network devices and servers, e.g. network traffic, I/O, CPU, memory and other metrics. But beyond these overall operational data, fine-grained application data also needs to be monitored: how heavy the access pressure on each service is, how fast it responds, where the performance bottlenecks sit, which applications account for the bandwidth, and what the Java virtual machine's CPU consumption and memory footprint look like. These data help us better understand how the system is running and provide data to guide optimization and capacity expansion.

Beyond overall application monitoring, business-specific monitoring is also an option: for example, regularly checking the access speed of each concrete function point (URL) of each business, the page access speed from the user's perspective (including service response time and page rendering time), the PV of each page, the total daily bandwidth consumed by each page (especially images), and so on. Such data provides support for alerting and optimization. For images, for instance, if we know which images consume the most bandwidth (not necessarily the largest ones, but likely the most accessed), a small optimization can save a great deal of network bandwidth. Of course, these efforts are meaningless at small scale, where the bandwidth saved may not be worth the labor cost.
Beyond monitoring, an effective alerting mechanism is also essential: is the application delivering good service, does the response time meet requirements, has system capacity reached a threshold? An effective alerting mechanism lets us deal with problems as quickly as possible.
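A minimal latency-recording sketch for this kind of fine-grained, per-function-point monitoring (names and thresholds are illustrative):

```java
import java.util.concurrent.atomic.AtomicLong;

// Per-function-point counters: total calls, accumulated latency, slow calls.
// A reporter thread can read these periodically and feed the alerting side.
public final class FunctionPointStats {
    private static final long SLOW_THRESHOLD_MICROS = 200_000; // 200 ms
    private final AtomicLong calls = new AtomicLong();
    private final AtomicLong totalMicros = new AtomicLong();
    private final AtomicLong slowCalls = new AtomicLong();

    public void record(long elapsedMicros) {
        calls.incrementAndGet();
        totalMicros.addAndGet(elapsedMicros);
        if (elapsedMicros > SLOW_THRESHOLD_MICROS) slowCalls.incrementAndGet();
    }

    public long averageMicros() {
        long c = calls.get();
        return c == 0 ? 0 : totalMicros.get() / c;
    }

    public boolean shouldAlert() {
        long c = calls.get();
        return c > 0 && slowCalls.get() * 10 > c; // alert when >10% of calls are slow
    }
}
```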
9. Configuration center
When a node in the system fails, how do we recover as quickly as possible? When a new service node is added, how does the whole system perceive it as quickly as possible? If removing a failed service node or adding a new one means visiting every application to modify its configuration, maintaining configuration and the system becomes harder and harder as the system expands.
A configuration center is a good solution to this problem: store all configuration in one place, and when changes occur (removing a problem node, scaling out a service, or adding a new service), use some notification mechanism to refresh each application's configuration. We can even detect problem nodes automatically and switch over intelligently.
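A minimal sketch of the client side of such a configuration center, using periodic polling as the notification mechanism (the ConfigSource interface is hypothetical; a push-based mechanism would serve the same purpose):

```java
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Watches a central store for the service-node list and notifies the
// application whenever the configuration version changes.
public final class ConfigWatcher {
    public interface ConfigSource {          // hypothetical central store
        long version();                      // bumped on every change
        Map<String, String> snapshot();      // current configuration
    }

    public static void watch(ConfigSource source, Consumer<Map<String, String>> onChange) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        final long[] seen = {-1};            // last version this client applied
        timer.scheduleAtFixedRate(() -> {
            long v = source.version();
            if (v != seen[0]) {              // config changed: refresh the application
                seen[0] = v;
                onChange.accept(source.snapshot());
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}
```

In practice, systems of this era often built such a center on a shared database plus polling, or later on ZooKeeper-style watch notifications.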
III. Final words
Building a service for a massive user base is difficult and challenging. Principles and the design ideas of those who came before us can help, but the bigger challenge comes from the details. As my technical boss puts it, principles and ideas can be picked up from a few books by any engineer; what determines a system architect is often the ability to handle details. So, on the foundation of mastering the principles and our predecessors' design ideas, digging deeper into the technical details is the winning way to build services for massive numbers of users.

Source: http://blog.csdn.net/anghlq/archive/2010/08/20/5822962.aspx
