The story behind the "Mi Fan Festival": development practice of Xiaomi.com's flash-sale system




The 2014 Mi Fan Festival



In the early hours of April 9, 2014, my colleagues and I made a final inspection and walkthrough of the Xiaomi.com flash-sale system. A few hours later, the first major event of the year, the Mi Fan Festival, would begin.



The Mi Fan Festival was a coming-of-age test for Xiaomi's e-commerce business. Everything at Xiaomi.com, from the site front end and back-office systems to warehousing, logistics, and after-sales support, would undergo a comprehensive stress test.



At 10 o'clock sharp a traffic peak would arrive: millions of users would hit the Xiaomi.com servers on the dot. And the first component to absorb the impact would be the flash-sale system standing at the front.



This flash-sale system had just been redeveloped and brought online, and this was the first time it would face such a severe test.



Could the system withstand the pressure? Would the business logic execute smoothly and correctly? Until the moment the sale actually began, no one could be sure.



By 9:50 traffic had already climbed very high. At 10 o'clock the flash-sale system opened automatically, and purchased items were successfully added to users' shopping carts.



Two minutes later, the popular items had sold out and the sale stopped automatically. The flash-sale system had withstood the pressure.



I breathed a sigh of relief, and the pressure that had built up dissipated. Sitting on the sofa in the corner, I silently recalled the thrilling story of the flash-sale system. It really was an adventure few people get to experience.



  How the flash-sale system was born



Turn the clock back to the end of 2011. Xiaomi had launched its first phone on August 16 of that year, immediately causing a stir in the market. Some 300,000 units were booked in little more than a day. Over the following months those 300,000 phones were shipped out in scheduled batches, and all had been sent out by the end of the year.



Then came open purchase. The initial open sales ran directly on the Xiaomi.com mall system, but at the time we completely underestimated the power of a flash sale. An instantaneous burst of traffic, dozens of times the usual load, quickly flooded the mall servers: database deadlocks, page timeouts on refresh, and a very poor purchasing experience for users.



Market demand dictated that the next round of open sales would come a week later. A storm lay ahead, we had only one week, and the whole development department was under enormous pressure.



There were not many conventional optimizations Xiaomi.com could adopt: add bandwidth, add servers, hunt for bottlenecks in the code and optimize them. But Xiaomi was still a small company, barely a year old, without that many servers or that much bandwidth. And if the code itself had a bottleneck, even doubling or tripling servers and bandwidth would still be overwhelmed by an instantaneous load dozens of times the norm. Nor was there time left to optimize the mall's code: an e-commerce site is very complex, and some obscure secondary feature could become the bottleneck under high load and drag down the entire site.



At this point the development team faced a choice: keep optimizing the existing mall, or build a separate flash-sale system? We decided to take a gamble, and I worked with a few colleagues to develop a standalone flash-sale system, hoping to find a way out of a desperate situation.



Before us was a seemingly unsolvable problem: achieve all of the following goals.


    • Only one week: design, development, testing, and launch all had to be completed within seven days;

    • The cost of failure was unaffordable: the system had to run smoothly;

    • Purchase results had to be reliable;

    • Under massive concurrent purchases, goods must not oversell;

    • Each user could buy at most one phone;

    • The user experience should be as good as possible.


A design is a solution obtained under multiple constraints, and time, reliability, and cost were the constraints we faced. To solve the problem in such a short time we had to choose the simplest, most dependable technology, and the scheme had to be simple enough to be verifiable.



Under high concurrency, one of the key factors affecting system performance is the strictness of the data-consistency requirements. Two of the goals above concern data consistency: the remaining item count, and whether a user has purchased successfully. Guaranteeing strict consistency would require a central server in the cluster to store and manipulate these values, which would create a single-point performance bottleneck.



Distributed system design has the CAP theorem: of consistency, availability, and partition tolerance, only two can be achieved at the same time; all three cannot be satisfied together. Since we had to face extreme traffic bursts, partition tolerance and availability were paramount, so we decided to sacrifice strong data consistency.



Once this key decision was made, the remaining design decisions followed naturally:


    1. Choose the most dependable technology: since most of the team knew PHP best, build the system in PHP;

    2. Simplify the purchase flow as far as possible: the user just clicks a buy button, and the returned result says either success or sold out;

    3. Keep the handling of purchase requests as simple as possible, minimizing I/O operations to shorten each request;

    4. Remove single points of performance wherever possible and spread the load, so that overall capacity scales linearly;

    5. Drop the strong-consistency requirement and process data asynchronously.


The resulting system design is shown in the first-version flash-sale system schematic (Figure 1).






Figure 1 Schematic of the first-version flash-sale system



How the system works: on each PHP server, a flag file indicates whether the product is sold out; if the file exists, the product is sold out. When the PHP program receives a purchase request, it checks whether the user has a reservation and whether the user has already bought, then checks whether the sold-out flag file exists. For a reserved user who has not yet bought, if the product is not sold out, a success result is returned and a log entry is recorded. The logs are shipped asynchronously to a central control node, where the counting is done.
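
For illustration only (the original was implemented in PHP, and the article shows no code), here is a minimal Go sketch of that per-request check; the flag-file path, the handler, and the eligibility helpers are hypothetical stand-ins for the lookups described below:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    const soldOutFlag = "/var/run/flashsale/sold_out" // hypothetical flag-file path

    // hasReservation and hasBought stand in for the Redis lookups
    // described later in the article; both are assumptions.
    func hasReservation(uid string) bool { return true }
    func hasBought(uid string) bool      { return false }

    func buyHandler(w http.ResponseWriter, r *http.Request) {
        uid := r.URL.Query().Get("uid")

        // Reject users without a reservation, or who already bought.
        if !hasReservation(uid) || hasBought(uid) {
            fmt.Fprintln(w, "not eligible")
            return
        }
        // A flag file on local disk marks the product as sold out.
        if _, err := os.Stat(soldOutFlag); err == nil {
            fmt.Fprintln(w, "sold out")
            return
        }
        // Success: record a log entry; logs are collected asynchronously
        // and counted on a central control node.
        log.Printf("purchase uid=%s", uid)
        fmt.Fprintln(w, "success")
    }

    func main() {
        http.HandleFunc("/buy", buyHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }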



Finally, the list of successful buyers is pushed asynchronously into the mall system, and those users place their orders within the next few hours. In this way the traffic peak is absorbed entirely by the flash-sale system, and the mall system never has to face the high traffic.



In this distributed design, the handling of persistent data is an important performance factor. Rather than a traditional relational database, we chose Redis, for several reasons.


    1. The data to be stored is in typical key/value form, with each UID mapping to a string value. The complex features of a traditional database are not needed; a KV store is a good fit.

    2. Redis keeps its data in memory, which greatly improves query efficiency.

    3. Redis has a mature master-replica replication mechanism and a flexible set of persistence options. These two points were exactly what we needed.


Across the whole system, the most frequent I/O is PHP reading from and writing to Redis. Handled poorly, the Redis servers would become the system's performance bottleneck.



The system performs three kinds of operations on Redis: querying whether a user has a reservation, querying whether a user has already purchased successfully, and writing a user's state once the purchase succeeds. To improve overall throughput, we applied read/write separation.



All read operations go to the replicas, and all writes are performed only against the master, by a single process on the control side.
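
A minimal sketch of this read/write split, assuming the third-party go-redis client as a stand-in (the original used PHP clients); the addresses and key names are hypothetical:

    package main

    import (
        "context"
        "fmt"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        ctx := context.Background()

        // Reads are spread across replicas; writes go only to the master,
        // and only from the single control-side writer process.
        replica := redis.NewClient(&redis.Options{Addr: "redis-replica-1:6379"})
        master := redis.NewClient(&redis.Options{Addr: "redis-master:6379"})

        // Each UID maps to a simple string value (hypothetical key scheme).
        reserved, err := replica.Get(ctx, "reserve:12345").Result()
        if err == redis.Nil {
            fmt.Println("no reservation")
        } else if err == nil {
            fmt.Println("reservation state:", reserved)
        }

        // The control-side process records a successful purchase.
        if err := master.Set(ctx, "bought:12345", "1", 0).Err(); err != nil {
            fmt.Println("write failed:", err)
        }
    }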



For PHP's reads against Redis, the connection count matters. If PHP reaches Redis over short-lived connections, the Redis servers can clog up at peak time and trigger an avalanche effect. The problem can be mitigated by adding more Redis replicas.



Redis writes put no pressure on our system, because the logs produced by PHP are collected asynchronously and written to the Redis master by a single control-side process.



Another point to watch is Redis's persistence configuration. Users' reservation data lives in the Redis process memory; each dump to disk introduces a pause, and in severe cases the front end can stop responding right at the purchase spike. So persistence should be avoided wherever possible. Our approach: all replicas that serve reads have persistence fully disabled, and one dedicated backup replica has persistence enabled. The logs serve as an extra insurance measure for emergency recovery.
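
In redis.conf terms the split might look roughly like this; the exact directives and thresholds are our assumption, not taken from the article:

    # Read-serving replicas: persistence fully disabled
    save ""            # no RDB snapshots
    appendonly no      # no AOF

    # Dedicated backup replica: persistence enabled
    save 900 1         # RDB snapshot if at least 1 change in 900s
    appendonly yes     # AOF for finer-grained durability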



The entire system used about 30 servers: 20 PHP servers and 10 Redis servers. In the flash sales that followed, it withstood the pressure smoothly. Recalling those scenes is still thrilling.



  The second-version flash-sale system



After more than two years of development, Xiaomi.com had grown increasingly mature. The company planned to hold a grand "Mi Fan Festival" in April 2014. This one-day shopping carnival would be a coming-of-age rite for the e-commerce business: the mall front end, inventory, logistics, after-sales, and every other link would be put to the test.



For the flash-sale system, the biggest difference was that a single day would bring multiple rounds of purchase shocks, with many different products on sale. Our previous system had been designed and tuned for one sale per week and simply could not support the Mi Fan Festival's complex schedule. After more than a year of patching, the first version had also accumulated plenty of problems, so we took the opportunity to rebuild it thoroughly.



The second version focused on flexibility and operability (Figure 2). High-concurrency capacity, stability, and accuracy were by now baseline requirements. We wanted a system that could be configured flexibly to support all kinds of product combinations and sale conditions, and that would lay a solid foundation for future expansion.






Figure 2 Overall structure of the second-version system



In this version, the flash-sale system and the mall system remained isolated; the two interact through an agreed data structure, keeping the information exchanged to a minimum. The flash-sale system decides whether a user wins purchase eligibility, and the item is then added to that user's shopping cart in the mall system automatically.



In the later life of the first version we had developed some modules in Go and accumulated a certain amount of experience with it. So for the core of the second version, we decided to develop in Go.



A Go program can stay resident in memory, so configuration and state information can be kept in process memory, cutting I/O overhead. Product quantity information can be manipulated inside the process, and different products can be assigned to Go processes on different servers, spreading the load and improving processing speed.
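
As a minimal sketch of that idea, assuming made-up product names and stock levels, a resident Go process can own its products' counters entirely in memory:

    package main

    import (
        "fmt"
        "sync"
    )

    // Stock for the products assigned to this process; each server's Go
    // process owns a different subset of products (values are made up).
    var (
        mu    sync.Mutex
        stock = map[string]int{"phone-a": 10000, "band-b": 50000}
    )

    // tryBuy decrements stock in process memory: no disk or network I/O
    // on the hot path, so each request stays cheap.
    func tryBuy(product string) bool {
        mu.Lock()
        defer mu.Unlock()
        if stock[product] <= 0 {
            return false
        }
        stock[product]--
        return true
    }

    func main() {
        fmt.Println(tryBuy("phone-a")) // true while stock remains
    }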



The servers are divided into two main tiers: an HTTP service layer and a business-processing layer. The HTTP service layer maintains users' access requests, while the business-processing layer makes the actual logical decisions. The two tiers exchange data through message queues.



The main functions of the HTTP service layer are as follows:


    1. Perform basic URL validity checks;

    2. Filter malicious traffic and intercept scalpers;

    3. Serve verification codes to users;

    4. Place legitimate users' request data into the queue for the corresponding product;

    5. Wait for the processing result returned by the business-processing layer.


The main functions of the business processing layer are as follows:


    1. Receive request data from the product queues;

    2. Process user requests;

    3. Place each result into the appropriate return queue.


A user's purchase request travels through the message queue into the business layer's Go process, where requests are handled sequentially, and the result is then returned through a queue to the HTTP service layer.
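
The article does not name the message-queue implementation, so the sketch below uses Go channels as a stand-in to show the shape of the two-tier interaction; all names and the stock value are hypothetical:

    package main

    import "fmt"

    // request carries a user's purchase attempt plus a channel for the
    // reply, standing in for the unnamed queues between the two tiers.
    type request struct {
        uid   string
        reply chan string
    }

    // worker is the business-layer process for one product: it consumes
    // requests sequentially, so the stock update needs no locking.
    func worker(queue <-chan request, stock int) {
        for req := range queue {
            if stock > 0 {
                stock--
                req.reply <- "success"
            } else {
                req.reply <- "sold out"
            }
        }
    }

    func main() {
        queue := make(chan request, 1024) // per-product queue
        go worker(queue, 2)               // tiny stock for demonstration

        // The HTTP layer enqueues a request and waits for the result.
        for _, uid := range []string{"u1", "u2", "u3"} {
            reply := make(chan string, 1)
            queue <- request{uid: uid, reply: reply}
            fmt.Println(uid, "->", <-reply)
        }
    }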



Information such as each product's remaining quantity is kept, partitioned by product ID, in dedicated business-layer server processes. Here we chose to guarantee the consistency of the product data and gave up its partition tolerance.



These two modules handle requests during the sale itself; the system also has a corresponding policy-control module, along with anti-abuse and system-management modules (Figure 3).






Figure 3 Detailed structure of the second-version system



During development of the second version, we ran into a problem of excessive memory consumption in the HTTP-layer Go program.



Because the HTTP layer holds users' in-flight requests, each request consumes some memory, and memory usage climbs as user volume grows. Once memory consumption passes a certain level (around 50%), Go's GC slows down while users keep pouring in, producing an "avalanche" effect: memory keeps rising until machine utilization exceeds 90% or even 99%, and the service becomes unavailable.



Go's native HTTP package allocates 8KB per request for read and write buffers. In our scenario there are only GET requests, all the information the service needs is in the HTTP headers, and there is no body, so nowhere near that much memory is actually required.



To avoid constantly allocating and freeing these read/write buffers, the HTTP package maintains a buffer pool, but its length is only 4. So when connections are created en masse, large amounts of memory are requested and new objects allocated; and when connections are released en masse, most buffers cannot be returned to the pool, increasing GC pressure.
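
The underlying technique, a fixed-capacity free list of reusable buffers, can be sketched with a buffered channel; this illustrates the idea rather than the standard library's actual internals, and all sizes are arbitrary:

    package main

    // pool is a fixed-capacity free list of reusable buffers. A larger
    // capacity (the article raised it to 1 million) means buffers from
    // released connections can be recycled instead of burdening the GC.
    var pool = make(chan []byte, 1024)

    // getBuf reuses a pooled buffer when one is available,
    // allocating only when the pool is empty.
    func getBuf() []byte {
        select {
        case b := <-pool:
            return b
        default:
            return make([]byte, 1024) // smaller than the 8KB default
        }
    }

    // putBuf returns a buffer to the pool; if the pool is full,
    // the buffer is simply dropped for the GC to collect.
    func putBuf(b []byte) {
        select {
        case pool <- b:
        default:
        }
    }

    func main() {
        b := getBuf()
        putBuf(b)
    }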



HTTP is built on top of TCP, and Go's native HTTP module provides no direct interface for closing the underlying TCP connection, while HTTP/1.1 defaults to keep-alive connections. So when a client makes multiple requests to the server, one TCP connection is reused to avoid repeated setup and teardown, and the server holds the connection open waiting for the next request rather than releasing it. But our scenario has no need for TCP connection reuse: once a user's request is done, we want the connection closed as soon as possible. Keep-alive means finished connections are not closed promptly, connections cannot be freed, and their number keeps growing, consuming both server memory and bandwidth.



Based on this analysis, our solutions were as follows.


    1. Since Go's GC was not yet well optimized, avoid the "avalanche effect" by keeping the service's memory consumption below the threshold (50%) within which the GC remains effective. Memory pressure can be spread by adding servers, and the memory each request consumes should be trimmed. Go 1.3 also brought some optimizations to the GC.

    2. We customized a new HTTP package for this specific scenario, shrinking the per-connection TCP read buffer to 1KB.

    3. In the custom HTTP package, we enlarged the buffer pool to 1 million entries to avoid constant allocation and destruction of read/write buffers.

    4. When each request finishes processing, the connection is actively closed by setting the response's Connection header to close (see the sketch after this list).
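
Go's standard library offers close equivalents to point 4: setting the response's Connection header, or disabling keep-alives server-wide. A minimal sketch under those assumptions (the address and handler path are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/buy", func(w http.ResponseWriter, r *http.Request) {
            // Ask for the TCP connection to be closed after this
            // response, so finished users do not hold connections open.
            w.Header().Set("Connection", "close")
            fmt.Fprintln(w, "ok")
        })

        srv := &http.Server{Addr: ":8080", Handler: mux}
        // Alternatively, disable HTTP keep-alives for the whole server.
        srv.SetKeepAlivesEnabled(false)
        log.Fatal(srv.ListenAndServe())
    }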


With these improvements, the maximum number of stable connections our HTTP front-end servers could hold exceeded one million.



The second version of the flash-sale system passed the Mi Fan Festival test successfully.



  Summary



Technical solutions must be grounded in specific problems; divorced from its application scenario, even the coolest technology loses its value. The real problems a flash-sale system faces are complex, and we are still continually exploring and improving.





