Scalability Best Practices: Lessons from eBay

At eBay, scalability is one of the primary architectural forces we contend with every day, and it shapes every architectural and design decision we make. With hundreds of millions of users worldwide, over a billion page views a day, and petabytes (10^15 bytes, or 2^50) of data in our systems, scalability is quite literally a matter of life and death. In a scalable architecture, resource consumption should grow linearly (or better) with load, where load may be measured in user traffic, data volume, and so on. Where performance measures the resource consumption for a single unit of work, scalability measures how resource consumption changes as the number or size of units of work grows. In other words, scalability is the shape of the entire price/performance curve, not the value at any one point on that curve.

Scalability has many facets: transactional, operational, and developmental. We have learned many lessons while improving the transactional throughput of a web system, and this article distills some of the key best practices. Some may seem familiar to you and some may not; either way, they are the collective experience of building and operating the eBay site.

Best Practice #1: Split by Function

Related functional areas belong together; unrelated ones belong apart. Call it SOA, functional decomposition, or simply good engineering. Moreover, the looser the coupling between unrelated functions, the more freedom you have to scale any one of them independently of the others.

At the code level, we apply this principle constantly: JAR files, packages, bundles, and so on are all mechanisms for isolating and abstracting functionality.

At the application level, eBay segments different functions into separate application pools. Selling functionality runs on one set of application servers, bidding on another, and search on yet another. In total, we organize roughly 16,000 application servers into about 220 pools. Each pool can then be scaled independently, according to the resource consumption of its function. This also allows us to isolate and rationalize resource dependencies; the selling pool, for example, only needs to reach a relatively small subset of back-end resources.

At the database level, we do the same. eBay has no single, all-encompassing database. Instead, one set of database hosts holds user data, another holds item data, another holds purchase data, and so on: roughly 1,000 logical databases spread across about 400 physical hosts. Again, this approach lets us scale the database infrastructure independently for each type of data.
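
To make the idea concrete, here is a minimal Java sketch of function-based database routing in the spirit described above. The pool names, hosts, and lookup API are invented for illustration; this is not eBay's actual implementation.

    // Hypothetical sketch of function-based database routing: each functional
    // area (users, items, purchases) is served by its own pool of logical
    // databases, so each pool can be scaled independently.
    import java.util.List;
    import java.util.Map;

    public class FunctionalDbRouter {
        private final Map<String, List<String>> poolsByFunction = Map.of(
            "user",     List.of("userdb01", "userdb02"),
            "item",     List.of("itemdb01", "itemdb02", "itemdb03"),
            "purchase", List.of("purchasedb01"));

        /** Returns the logical databases that serve the given functional area. */
        public List<String> poolFor(String function) {
            List<String> pool = poolsByFunction.get(function);
            if (pool == null) {
                throw new IllegalArgumentException("Unknown function: " + function);
            }
            return pool;
        }
    }

Because each functional area has its own pool, adding capacity for, say, item data is a change to one pool's membership, with no impact on the others.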

Best Practice #2: Horizontal Segmentation

Partitioning by function takes us a long way, but by itself it does not yield a fully scalable architecture. Even after functions have been decoupled from one another, the resource requirements of a single function can grow over time and may still exceed the capacity of any single system. We often remind ourselves that "if you can't split it, you can't scale it." Within a single function, we need to be able to break the workload into manageable units, with each unit retaining a good price/performance ratio. This is where horizontal segmentation comes in.

At the application level, horizontal segmentation is trivial, because eBay's interactions are deliberately designed to be stateless. Incoming traffic is routed through standard load balancers. Since all application servers are equal and none retains transactional state, the load balancer can direct any request to any server. If more processing power is needed, we simply add application servers.

Database-level issues are more challenging, because data is inherently stateful. Here we partition (or "shard") the data horizontally along its primary access path. User data, for example, is currently split across 20 hosts, each holding 1/20 of the users. As the number of users grows, and as the data per user grows, we add more hosts and spread the users across more machines. Item data, purchase data, account data, and so on are all handled the same way. Depending on the use case, we divide the data in different ways: some by simple modulo of the primary key (IDs ending in 1 go to the first host, IDs ending in 2 to the next, and so on), some by ID range (1-1M, 1M-2M, etc.), some by a lookup table, and some by a combination of these strategies. Whatever the details, the overall point is that an infrastructure that supports partitioning and repartitioning of data is far more scalable than one that does not.
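
Below is a minimal sketch of two of the routing strategies just mentioned, modulo on the primary key and ID ranges. It is illustrative only; eBay's real routing logic lives inside its own O/R layer. The host count and range boundaries are invented, and IDs are assumed non-negative.

    // Two shard-routing strategies: modulo on the primary key, and ID ranges.
    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class ShardRouter {
        private final int hostCount;
        private final NavigableMap<Long, Integer> rangeToHost = new TreeMap<>();

        public ShardRouter(int hostCount) {
            this.hostCount = hostCount;
            // Hypothetical range table: IDs 0 to 999,999 on host 0, the next
            // million on host 1, and so on.
            rangeToHost.put(0L, 0);
            rangeToHost.put(1_000_000L, 1);
            rangeToHost.put(2_000_000L, 2);
        }

        /** Modulo strategy: the host is determined by the ID's remainder. */
        public int hostByModulo(long id) {
            return (int) (id % hostCount);
        }

        /** Range strategy: the host is determined by the ID's interval. */
        public int hostByRange(long id) {
            return rangeToHost.floorEntry(id).getValue();
        }
    }

The lookup-table strategy mentioned above is the range table generalized: instead of contiguous intervals, an explicit mapping from key (or key group) to host.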

Best Practice #3: Avoid Distributed Transactions

At this point you may well be wondering how the practices of partitioning data by function and splitting it horizontally can possibly satisfy transactional requirements. After all, almost any meaningful operation updates more than one entity; users and items are an immediate example. The orthodox, well-known answer is to create a distributed transaction across the resources involved and use two-phase commit to guarantee that either all resources are updated or none are. Unfortunately, the cost of this pessimistic approach is considerable. Scalability, performance, and response latency all suffer from the cost of coordination, which can worsen exponentially as the number of dependent resources and clients increases. Availability is limited as well, because all dependent resources must be available at once. The pragmatic answer is to relax the guarantee of cross-system transactions between unrelated systems.

It turns out you can't have everything. Guaranteeing immediate consistency across multiple systems or partitions is usually neither necessary nor realistic. Eric Brewer of Inktomi stated the CAP theorem nearly ten years ago: of the three key properties of a distributed system, consistency, availability, and partition tolerance, at most two can hold at any given moment. For a high-traffic site, we have to choose partition tolerance, because it is fundamental to scaling. For a 24x7 web site, choosing availability is equally natural. So we give up immediate consistency.

On eBay, we allow absolutely no client-side or distributed transactions of any kind, so two-phase commit is never needed. In certain carefully defined situations, we combine several statements against a single database into one transactional operation. For most operations, individual statements are auto-committed. While this deliberate relaxation of orthodox ACID properties means we cannot guarantee immediate consistency everywhere, the practical result is that most systems are available the vast majority of the time. Of course, we also apply techniques that help the system reach eventual consistency: careful ordering of database operations, asynchronous recovery events, and reconciliation or settlement batches. We choose the technique according to the consistency needs of the particular use case.
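
As one illustration of careful operation ordering plus reconciliation, consider the following sketch. The stores, the Bid record, and the reconciliation batch are invented stand-ins, not eBay code; the point is only the pattern of auto-committed local writes, ordered so that a later batch can repair any divergence.

    // Hypothetical interfaces standing in for two separately partitioned stores.
    interface BidStore  { void insert(Bid bid); }
    interface ItemStore { void updateHighBid(long itemId, long amountCents); }
    record Bid(long itemId, long amountCents) {}

    public class PlaceBid {
        public void execute(BidStore bids, ItemStore items, Bid bid) {
            // Write the authoritative fact first (its own auto-committed write).
            bids.insert(bid);
            try {
                // Update the derived, denormalized view second; if this fails,
                // the bid itself still exists.
                items.updateHighBid(bid.itemId(), bid.amountCents());
            } catch (RuntimeException e) {
                // Deliberately no distributed rollback: a reconciliation batch
                // later recomputes high bids from the bid table.
            }
        }
    }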

The important thing for architects and system designers to understand is that consistency is not an all-or-nothing proposition. Most real-world use cases do not require immediate consistency. Just as we routinely trade off availability against cost and other pressures, consistency can be tailored as well, guaranteeing just the degree of consistency that a particular operation needs.

Best Practice #4: Decouple with Asynchrony

Another key to improving scalability is the aggressive use of asynchrony. If component A calls component B synchronously, then A and B are tightly coupled, and a tightly coupled system must scale as a unit: to scale A you must also scale B. Synchronously coupled components face the same problem with availability. Going back to elementary logic: if A implies B, then not-B implies not-A. In other words, if B is unavailable, so is A. If, instead, the link between A and B is asynchronous, whether through queues, multicast messaging, batch processing, or some other means, each can be scaled independently of the other. Better yet, A and B now have independent availability characteristics: A can keep moving forward even if B is down or struggling.
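
A minimal sketch of the queue-based flavor of this decoupling follows. The event type and queue capacity are invented, and any real messaging infrastructure could stand in for the in-memory queue.

    // Component A enqueues events and moves on; component B consumes at its
    // own pace on its own thread. Either side can be scaled, or fail,
    // independently of the other.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class AsyncDecoupling {
        record Event(String payload) {}

        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<Event> queue = new LinkedBlockingQueue<>(10_000);

            // Component B: drains the queue on its own thread.
            Thread consumer = new Thread(() -> {
                while (true) {
                    try {
                        Event e = queue.take();
                        System.out.println("processing " + e.payload());
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            });
            consumer.setDaemon(true);
            consumer.start();

            // Component A: enqueues an event without waiting for B.
            queue.offer(new Event("item-listed"));
            Thread.sleep(100);  // give the demo consumer a moment before exit
        }
    }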

This principle should be applied up and down the infrastructure. Even within a single component, asynchrony can be introduced through techniques such as SEDA (Staged Event-Driven Architecture) while preserving an easy-to-understand programming model. Between components the principle is the same: avoid synchronous coupling wherever possible. More often than not, the two components have no direct business reason to be coupled in any event. At every level, decomposing processing into stages or phases and connecting them asynchronously is the key to scaling.
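
Within a single component, a staged design along the lines SEDA suggests might look roughly like this sketch: each stage owns its queue and thread pool, and stages communicate only by handing events to the next queue. This is a simplification; real SEDA also includes per-stage admission control and dynamic resource tuning.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.function.Function;

    // One stage of a SEDA-style pipeline: a queue, a thread pool, a handler.
    public class Stage<I, O> {
        private final BlockingQueue<I> inbox = new LinkedBlockingQueue<>(1_000);
        private final ExecutorService workers;

        public Stage(int threads, Function<I, O> handler, Stage<O, ?> next) {
            this.workers = Executors.newFixedThreadPool(threads);
            for (int i = 0; i < threads; i++) {
                workers.submit(() -> {
                    while (!Thread.currentThread().isInterrupted()) {
                        try {
                            O out = handler.apply(inbox.take()); // block for work
                            if (next != null) next.submit(out);  // pass downstream
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();  // exit loop
                        }
                    }
                });
            }
        }

        /** Events enter a stage only via its queue; callers never block on work. */
        public void submit(I event) {
            inbox.offer(event);
        }
    }

Because each stage has its own thread pool, a slow stage backs up only its own queue rather than stalling its callers, and each stage can be given more or fewer threads independently.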

Best Practice #5: Turn Processes into Asynchronous Flows

Having decoupled with asynchrony, now make as much processing as possible asynchronous. In a system where rapid response matters, this can fundamentally reduce the latency experienced by the requester. In a web site or trading system, it is worth trading data or execution latency (how long it takes for all the work to complete) for user latency (how long the user waits for a response). Activity tracking, billing, settlement, and reporting are obvious examples of processing that belongs in the background. But there are often steps in the primary use case itself that can be broken out and run asynchronously. Anything that can be done later should be done later.
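
The following sketch shows the shape of this deferral: the handler performs only the write the user must wait for, hands the rest to a background executor, and returns. The class, method names, and thread-pool size are invented for illustration.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CheckoutHandler {
        private final ExecutorService background = Executors.newFixedThreadPool(4);

        public String handlePurchase(long userId, long itemId) {
            recordPurchase(userId, itemId); // the one thing the user waits for
            background.submit(() -> trackActivity(userId, itemId));   // later
            background.submit(() -> generateInvoice(userId, itemId)); // later
            return "OK";                    // the user sees a fast response
        }

        private void recordPurchase(long u, long i)  { /* critical, synchronous */ }
        private void trackActivity(long u, long i)   { /* background */ }
        private void generateInvoice(long u, long i) { /* background */ }
    }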

There is an equally important benefit here that few people appreciate: asynchrony can fundamentally reduce infrastructure cost. Performing operations synchronously forces you to provision the infrastructure for peak load; even at the heaviest moment of the heaviest day, the facility must be able to complete all processing immediately. By moving expensive processing into asynchronous flows, the infrastructure need only be sized for the average load rather than the peak. Because not every request must be processed immediately, an asynchronous queue can spread the work out over a longer period, shaving the peaks. The spikier the load on your system, the more you benefit from this.
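
As a hypothetical illustration with invented numbers: if expensive operations peak at 1,000 per second but average 300 per second, a synchronous design needs back-end capacity for all 1,000, while a queue that smooths the peak over the following hours lets the back end be provisioned for something close to the 300-per-second average, roughly a threefold saving on that tier. The figures are made up; the shape of the saving is the point.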

Best Practice #6: Virtualize All Levels

Virtualization and abstraction are everywhere; as the old saying in computer science goes, every problem can be solved by adding a level of indirection. The operating system abstracts the hardware. The virtual machines of many modern languages abstract the operating system. Object-relational mapping layers abstract the database. Load balancers and virtual IPs abstract network endpoints. As we scale the infrastructure by partitioning data and processing, an additional level of virtualization over all of those partitions becomes critically important.

On eBay, we virtualize the database. Applications interact with a logical database, which is mapped through configuration to a particular physical machine and database instance. Applications are likewise abstracted away from the routing logic that performs data partitioning and assigns a particular record (say, user XYZ) to a particular partition. Both abstractions are implemented in our home-grown O/R layer. With this virtualization in place, our operations teams can redistribute logical hosts across the pool of physical hosts as needed, splitting, merging, and moving them, all without touching application code.
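
A toy version of this indirection might look like the sketch below: applications name only logical databases, and a configuration map, here just an in-memory table with invented names, decides where each one physically lives.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Logical-to-physical database indirection: operations can move a logical
    // database by changing configuration rather than application code.
    public class LogicalDbMap {
        private final Map<String, String> logicalToPhysical =
            new ConcurrentHashMap<>(Map.of(
                "userdb01", "db-host-17.example.internal",
                "userdb02", "db-host-04.example.internal"));

        /** Where the application's logical database currently lives. */
        public String physicalHostFor(String logicalDb) {
            String host = logicalToPhysical.get(logicalDb);
            if (host == null) throw new IllegalStateException("Unmapped: " + logicalDb);
            return host;
        }

        /** Operations remap a logical database without an application change. */
        public void remap(String logicalDb, String newPhysicalHost) {
            logicalToPhysical.put(logicalDb, newPhysicalHost);
        }
    }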

We virtualize the search engine as well. To produce search results, an aggregator component executes parallel queries over multiple partitions, yet to the client this highly partitioned search grid appears as a single logical index.
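
A scatter-gather aggregator of the kind described can be sketched as follows; the partition interface and the naive merge are simplifications invented for this example.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    // Scatter-gather: query every partition in parallel, then merge the
    // partial results into one logical answer for the client.
    public class SearchAggregator {
        interface Partition { List<String> query(String terms); }

        private final List<Partition> partitions;

        public SearchAggregator(List<Partition> partitions) {
            this.partitions = partitions;
        }

        public List<String> search(String terms) {
            // Scatter: one asynchronous query per partition.
            List<CompletableFuture<List<String>>> futures = partitions.stream()
                .map(p -> CompletableFuture.supplyAsync(() -> p.query(terms)))
                .toList();
            // Gather: merge partial results into a single logical result set.
            List<String> merged = new ArrayList<>();
            for (CompletableFuture<List<String>> f : futures) {
                merged.addAll(f.join());
            }
            return merged;
        }
    }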

None of this is merely for programmer convenience; operational flexibility is the big motivation. Hardware and software systems fail, and requests must be rerouted. Components, machines, and partitions are added, removed, and moved. Used judiciously, virtualization hides all of this churn from the layers above, leaving you room to maneuver. Virtualization makes scaling the infrastructure possible because it makes scaling manageable.

Best Practice #7: Use Caching Appropriately

Finally, use caching appropriately. The advice here is deliberately less prescriptive, because caching effectiveness depends deeply on the details of the use case. At bottom, the goal of an efficient caching system is to maximize the cache hit ratio within your constraints on storage, your availability requirements, your tolerance for stale data, and so on. Experience shows that balancing these factors is extremely difficult, and even if you hit the target for the moment, the situation is likely to change over time.

The best candidates for caching are data that rarely change and are read-dominated: metadata, configuration, and static data. On eBay we cache this type of data aggressively and use a combination of "push" and "pull" approaches to keep the caches reasonably in sync with updates. Reducing repeated requests for the same data can make a very noticeable difference. Frequently changing, read-write data is far harder to cache effectively, and on eBay we mostly and deliberately avoid trying. We do not cache transient session data between requests, nor do we cache shared business objects, such as item or user data, in the application tier. We deliberately give up the potential benefits of caching that data in exchange for availability and correctness. It should be noted that other sites take different approaches, make different trade-offs, and succeed as well.
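
As a concrete example of the "pull" side of keeping caches in sync, here is a minimal time-to-live cache suitable for slowly changing metadata. The API is invented; the TTL bounds how stale a reader can observe.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Pull-style cache: entries are reloaded from the primary store once
    // their time-to-live expires.
    public class TtlCache<K, V> {
        private record Entry<V>(V value, long expiresAtMillis) {}

        private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
        private final Function<K, V> loader;  // fetches from the primary store
        private final long ttlMillis;

        public TtlCache(Function<K, V> loader, long ttlMillis) {
            this.loader = loader;
            this.ttlMillis = ttlMillis;
        }

        public V get(K key) {
            long now = System.currentTimeMillis();
            Entry<V> e = entries.get(key);
            if (e == null || e.expiresAtMillis() < now) {  // miss or stale
                V fresh = loader.apply(key);               // "pull" reload
                entries.put(key, new Entry<>(fresh, now + ttlMillis));
                return fresh;
            }
            return e.value();
        }
    }

A "push" approach would instead have the writer invalidate or overwrite cache entries at update time, trading extra write-path work for fresher reads.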

But you can overdo a good thing. The more memory you allocate to caches, the less is available to service individual requests. The application tier is often memory-constrained, so this is a very real trade-off. More insidiously, once you come to depend on the cache, the primary systems need only handle the cache misses, and it naturally becomes tempting to shrink them accordingly. But once you do, the system cannot survive without the cache. The primary systems can no longer handle the full traffic directly, which means the site's availability now depends on the cache operating flawlessly: a latent crisis. Even a routine operation such as reconfiguring cache resources, moving the cache to another machine, or restarting a cache server with a cold cache can cause serious trouble.

Done well, a caching system bends the scaling curve below linear: subsequent requests served from the cache retrieve data more cheaply than from primary storage. Done badly, caching adds considerable extra overhead and becomes a drag on availability. I have never seen a system that has no opportunity for caching somewhere; the key is to find the caching strategy appropriate to your situation.

Summary

Scalability is sometimes called a "non-functional requirement," with the implication that it is unrelated to functionality and therefore less important. Nothing could be further from the truth. My view is that scalability is a prerequisite for functionality: a priority-0 requirement, higher in priority than all the others.

I hope these best practices prove useful to you, and that they help you see your own systems in a new light, whatever their scale.

References:

  • eBay's Architectural Principles (video)
  • Werner Vogels on Scalability
  • Dan Pritchett on You Scaled Your What?
  • The Coming of the Shard
  • Trading Consistency for Availability in Distributed Architectures
  • Eric Brewer on the CAP Theorem
  • SEDA: An Architecture for Well-Conditioned, Scalable Internet Services

About the Author

Randy Shoup is a Distinguished Architect at eBay, and since 2004 has been the primary architect for eBay's search infrastructure. Prior to joining eBay, he was Chief Architect at Tumbleweed Communications, and he has also held a variety of software development and architecture positions at Oracle and Informatica.

He frequently speaks at industry conferences on scalability and architecture patterns.

Read the original English article: Scalability Best Practices: Lessons from eBay
