Anatomy of Twitter: an analysis of the Twitter system architecture design


Amid the explosion of online information, the micro-blogging site Twitter was born, and it is no exaggeration to describe Twitter's own growth as explosive. From its launch in May 2006, Twitter grew from 0 to 66,000 users; by December 2007 the number had risen to about 1.5 million, and one year later, in December 2008, it reached 5 million. [1]

A prerequisite for Twitter's success is the ability to serve millions of users at the same time, and to serve them quickly. [2,3,4]

There is a view that Twitter's business logic is simple, so the barrier to competition is low. The first half of that sentence is correct; the second half is debatable. Twitter's competitiveness is inseparable from its rigorous system architecture design.

"1" Everything starts easy

Twitter's core business logic is following and being followed. [5]

Open your Twitter personal home page and you will see the recent micro-blog posts published by the authors you follow. A micro-blog post is a short text message; Twitter stipulates that its length must not exceed 140 characters. A message can contain not only plain text but also URLs pointing to web pages, photos, videos, and so on. This is the process of following.

When you write a message and publish it, your followers will immediately see the latest text you have written on their own personal home pages. This is the process of being followed.

It seems easy to implement this business process.

1. For each registered user, maintain a be-followed table whose main content is the IDs of the user's followers, and also a following table whose main content is the IDs of the authors the user follows.

2. When a user opens his personal page, Twitter first looks up his following table to find the IDs of all the authors he follows, then reads each author's recent messages from the database and displays them together, in chronological order, on the user's home page.

3. When a user writes a message, Twitter first looks up the be-followed table, finds the IDs of all of the user's followers, and then updates each follower's home page one by one (a minimal sketch of this flow follows the list).
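
As a rough illustration of this naive design, here is a minimal sketch in Python. The in-memory structures and function names are assumptions made up for the example; they are not Twitter's actual implementation.

```python
import itertools
from collections import defaultdict
from datetime import datetime, timezone

# In-memory stand-ins for the "following" / "be-followed" tables and home pages.
following = defaultdict(set)     # user id -> ids of authors this user follows
followers = defaultdict(set)     # user id -> ids of this user's followers
messages = {}                    # message id -> (author id, body, timestamp)
home_page = defaultdict(list)    # user id -> message ids, newest first
_message_ids = itertools.count(1)

def follow(user_id, author_id):
    following[user_id].add(author_id)
    followers[author_id].add(user_id)

def open_home_page(user_id):
    """Step 2: gather recent messages from every followed author, newest first."""
    items = [m for m in messages.values() if m[0] in following[user_id]]
    return sorted(items, key=lambda m: m[2], reverse=True)

def publish(author_id, body):
    """Step 3: store the message, then update every follower's home page."""
    message_id = next(_message_ids)
    messages[message_id] = (author_id, body, datetime.now(timezone.utc))
    for follower_id in followers[author_id]:
        home_page[follower_id].insert(0, message_id)
    return message_id

follow("reader", "author")
publish("author", "my first tweet")
print(open_home_page("reader"))
```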

If a follower happens to be viewing his Twitter home page, the JavaScript embedded in the page automatically contacts the Twitter server every few dozen seconds to check whether the page being viewed has been updated. If there is an update, the new content is downloaded immediately, so the follower reads the latest messages.

The delay from author to reader therefore depends on the JavaScript polling interval and on the time it takes the Twitter server to update each follower's home page.

From the point of view of system architecture, the traditional three-tier architecture [6] seems sufficient to satisfy this business logic. And indeed, the original structure of the Twitter system really was three tiers.

References:

[1] Fixing Twitter. (http://www.bookfm.com/courseware/coursewaredetail.html?cid=100777)
[2] Twitter blows up at SXSW conference. (http://gawker.com/tech/next-big-thing/twitter-blows-up-at-sxsw-conference-243634.php)
[3] First-hand accounts of terrorist attacks in India on Twitter and Flickr. (http://www.techcrunch.com/2008/11/26/first-hand-accounts-of-terrorist-attacks-in-india-on-twitter/)
[4] Social media takes center stage in Iran. (http://www.findingdulcinea.com/news/technology/2009/June/Twitter-on-Iran-a-Go-to-Source-or-Almost-Useless.html)
[5] Those things about Twitter. (http://www.ccthere.com/article/2363334) (http://www.ccthere.com/article/2369092)
[6] Three-tier architecture. (http://en.wikipedia.org/wiki/Multitier_architecture)

"2" three-paragraph theory

The traditional approach to web site architecture design is the three-tier architecture. "Traditional" is not a synonym for "outdated". The architectural design of large web sites emphasizes practicality: trendy designs are attractive, but the technology behind them may be immature and the risk high. So many large web sites take the conservative, traditional route.

When Twitter went online in May 2006, it used Ruby on Rails to simplify development of the site, and the design philosophy of Ruby on Rails is itself three-tier.

1. The front tier, the presentation tier, is the Apache web server. Its main task is to parse the HTTP protocol and distribute the different types of requests from different users to the logic tier.

2. The middle tier, the logic tier, uses Mongrel Rails servers, which rely on ready-made Rails modules to reduce the amount of development work.

3. The back tier, the data tier, is the MySQL database.

First, the data tier.

Twitter's service can be summed up by two cores: 1. users, 2. messages (tweets). The relationship between users is one of following and being followed: a user reads only the messages written by the people he follows, and the messages he writes are read only by the people who follow him. With these two cores, it is not hard to understand how Twitter's other features are implemented [7].

Around these two cores we can begin to design the data schema, that is, the way the data tier organizes its data. Consider three tables [8]; a schema sketch follows the list.

1. User table: user ID, name, login name and password, status (online or not).

2. Message table: message ID, author ID, body (fixed length, 140 characters), timestamp.

3. User relation table, recording who follows whom: user ID, the IDs of the users he follows (following), the IDs of the users who follow him (be followed).
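
Here is a minimal sketch of such a schema, written with SQLite purely for illustration. The exact column names and types are assumptions; they are not Twitter's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    user_id   INTEGER PRIMARY KEY,
    name      TEXT,
    login     TEXT UNIQUE,
    password  TEXT,
    online    INTEGER DEFAULT 0                    -- status: online or not
);

CREATE TABLE messages (
    message_id INTEGER PRIMARY KEY,
    author_id  INTEGER REFERENCES users(user_id),
    body       TEXT CHECK (length(body) <= 140),   -- the 140-character limit
    created_at TEXT                                -- timestamp
);

-- One row per (follower, followed) pair; querying by either column gives
-- the "following" view or the "be followed" view of the relation.
CREATE TABLE relations (
    follower_id INTEGER REFERENCES users(user_id),
    followed_id INTEGER REFERENCES users(user_id),
    PRIMARY KEY (follower_id, followed_id)
);
""")
```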

Next, the middle tier, the logic tier.

When a user posts a message, the system performs the following five steps:

1. Record the message in the message table.

2. Retrieve from the user relation table the IDs of the users who follow him.

3. Some of those users are currently online, others are offline; their online status can be found in the user table. Filter out the IDs of the offline users.

4. Push the IDs of the followers who are currently online into a queue, one by one.

5. Take those IDs back out of the queue and update the home page of each of these online followers, adding the latest message.

These five steps are the responsibility of the logic tier. The first three are easy: they are simple database operations. The last two require an auxiliary tool, a queue. The point of a queue is to separate the generation of tasks from the execution of tasks.

Queues can be implemented in a variety of ways, for example with Apache MINA [9], but the Twitter team implemented a queue of its own, Kestrel [10,11]. MINA and Kestrel each have their own advantages and disadvantages; it seems no one has published a detailed comparison.

Both Kestrel and MINA look complicated, so why not implement the queue with a simple data structure such as a dynamic linked list or even a static array? If the logic tier ran on only one server, such a simple in-process data structure could indeed serve as the queue. The "heavyweight" queues of Kestrel and MINA are meant to support distributed queues that connect multiple machines, a point highlighted later in this series.
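
The sketch below shows, within a single process, what separating the generation of tasks from their execution looks like, using Python's built-in queue module. It is only a local, in-process analogue of the role a distributed queue such as Kestrel plays across machines; the function names are invented for the example.

```python
import queue
import threading

delivery_queue = queue.Queue()        # holds (follower_id, message_id) tasks

def update_home_page(follower_id, message_id):
    print(f"adding message {message_id} to the home page of user {follower_id}")

def producer(message_id, online_followers):
    """Step 4: generating tasks is cheap and returns immediately."""
    for follower_id in online_followers:
        delivery_queue.put((follower_id, message_id))

def consumer():
    """Step 5: executing tasks happens independently of their generation."""
    while True:
        follower_id, message_id = delivery_queue.get()
        update_home_page(follower_id, message_id)
        delivery_queue.task_done()

threading.Thread(target=consumer, daemon=True).start()
producer(message_id=42, online_followers=[2, 3, 5])
delivery_queue.join()                 # wait until all queued tasks are done
```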

Finally, the front tier, the presentation tier.

The presentation tier has two main functions: 1. the HTTP protocol processor (HTTP processor), which parses incoming user requests and encapsulates the results to be sent back; 2. the dispatcher, which distributes the received user requests to the machines of the logic tier. If the logic tier had only one machine, the dispatcher would be meaningless. But when the logic tier consists of many machines, deciding which kind of request goes to which machine becomes a delicate matter: some logic-tier machines may be dedicated to specific functions, while machines with the same function share the work and balance the load.

Twitter is visited not only from browsers but also from mobile phones, from desktop tools such as QQ, and from a variety of web plug-ins that link other sites to twitter.com [12]. As a result, the communication protocol between Twitter's visitors and the Twitter site is not necessarily HTTP; there are other protocols as well.

The three-tier Twitter architecture described here is primarily for HTTP terminals. For terminals using other protocols, the architecture is not cleanly divided into three tiers; instead, presentation and logic are combined into what the Twitter literature usually calls the "API".

To sum up, a simple architecture that implements Twitter's basic functions is shown in Figure 1. One may well wonder: is the architecture of such a famous web site really this simple? Yes and no. When Twitter first went online in May 2006, its architecture differed little from Figure 1, apart from the addition of some simple caches. Even today, the outline of Figure 1 is still clearly visible in Twitter's architecture.

Figure 1. The essential three tiers of the Twitter architecture
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/d22c_4051785892_e677ae9d33_o.png

References:

[7] Popular tools for tweets. (http://www.ccthere.com/article/2383833)
[8] Build a PHP-based microblogging service. (http://webservices.ctocio.com.cn/188/9092188.shtml)
[9] Apache MINA homepage. (http://mina.apache.org/)
[10] Kestrel readme. (http://github.com/robey/kestrel)
[11] A guide to Kestrel. (http://github.com/robey/kestrel/blob/master/docs/guide.md)
[12] Alphabetical list of Twitter services and applications. (http://en.wikipedia.org/wiki/List_of_Twitter_services_and_applications)

"3" Cache = = Cash

Cache == Cash: caching equals cash. That is a bit of an exaggeration, but the correct use of caching is a vital matter in the construction of a large web site. The speed with which a site responds to user requests is a major factor in the user experience, and among the many things that affect speed, one of the most important is hard disk reads and writes (disk I/O).

Table 1 compares the read and write speeds of memory (RAM), hard disk, and the newer flash storage. Hard disk reads and writes are orders of magnitude slower than memory. Therefore, to speed up a web site, an important measure is to cache data in memory as much as possible. Of course, a copy must still be kept on the hard disk, in case the data in memory is lost in a power failure.

Table 1. Storage Media comparison of Disk, Flash and RAM [13]
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/9d42_4060534279_f575212c12_o.png

Twitter engineers believe that a site with a good user experience should complete its response within 500 ms, on average, of a user request arriving, and Twitter's ideal is a response time of 200-300 ms [17]. Accordingly, the Twitter architecture uses caching on a large scale, at multiple levels and in multiple ways. Twitter's caching practices, and the lessons learned from them, are a large part of the story of the Twitter web architecture.

Figure 2. Twitter Architecture with Cache
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/0ea9_4065827637_bb2ccc8e3f_o.png

Where is caching needed? Wherever disk I/O is frequent, caching is needed.

As mentioned earlier, the Twitter business has two cores: users and messages (tweets). Around these two cores, there are several tables in the database, the three most important of which are shown below. The layout of these three tables is a bystander's guess and may not match Twitter's actual settings, but even if it differs, the difference should not be essential.

1. User table: user ID, name, login name and password, status (online or not).
2. Message table: message ID, author ID, body (fixed length, 140 characters), timestamp.
3. User relation table, recording who follows whom: user ID, the IDs of the users he follows (following), the IDs of the users who follow him (be followed).

Is it necessary to store all the core database tables in the cache? Twitter's approach is to split the tables and put only the most frequently read columns into the cache.

1. Vector Cache and Row cache

Specifically, the columns Twitter engineers consider most important are the IDs: the IDs of newly published messages, the IDs of frequently read popular messages, the IDs of their authors, and the IDs of the readers who subscribe to those authors. These IDs are stored in the cache ("stores arrays of tweet pkeys" [14]). In the Twitter literature, the cache space holding these IDs is called the vector cache [14].

Twitter engineers believe the most frequently read content is the IDs, with the message bodies second. So they decided that, after giving priority to the resources required by the vector cache, the next important task was to set up a row cache for storing the message bodies.

The hit rate (or hit ratio) is the most important metric for measuring the effect of a cache. If users read 100 items and 99 of them are served from the cache, the hit rate is 99%. The higher the hit rate, the greater the cache's contribution.

After the vector cache and row cache were set up, observation of actual operation showed a vector cache hit rate of 99% and a row cache hit rate of 95%, confirming the engineers' earlier bet that IDs are read more often than message bodies.

Both the vector cache and the row cache are built with the open-source memcached [15].
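
As a rough illustration of how a logic-tier process might consult the vector cache and the row cache, here is a cache-aside sketch in Python. The pymemcache client, the key-naming scheme, and the stub database functions are all assumptions made for the example; this is not Twitter's code.

```python
from pymemcache.client.base import Client

vector_cache = Client(("127.0.0.1", 11211))   # stores arrays of IDs
row_cache = Client(("127.0.0.1", 11212))      # stores message bodies

def load_follower_ids_from_mysql(author_id):
    return []            # stub standing in for a database query

def load_message_from_mysql(message_id):
    return ""            # stub standing in for a database query

def follower_ids(author_id):
    """Cache-aside lookup: try the vector cache first, fall back to MySQL."""
    key = f"followers:{author_id}"
    cached = vector_cache.get(key)
    if cached is not None:
        return [int(x) for x in cached.split(b",") if x]
    ids = load_follower_ids_from_mysql(author_id)
    vector_cache.set(key, ",".join(str(i) for i in ids).encode())
    return ids

def message_body(message_id):
    """Same pattern for the row cache: IDs point to cached message bodies."""
    key = f"tweet:{message_id}"
    cached = row_cache.get(key)
    if cached is not None:
        return cached.decode()
    body = load_message_from_mysql(message_id)
    row_cache.set(key, body.encode())
    return body
```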

2. Fragment Cache and Page cache

As noted earlier, Twitter is visited not only from browsers but also from mobile phones, from desktop tools such as QQ, and from a variety of web plug-ins that link other sites to twitter.com [12]. These users are served through two channels: the web channel, whose entrance is the Apache web server, and the channel called the API. The API channel accounts for 80%-90% of the total traffic [16].

So, after the vector cache and row cache, the Twitter engineers focused on how to improve the response speed of the API channel.

The main body of a reader's page shows one message after another. The whole page can be divided into several parts, each corresponding to one message; such a part of the page is called a fragment. Besides messages, other content such as the Twitter logo is also a fragment. If an author has many readers, caching the rendered fragments of that author's messages improves the overall reading efficiency of the site. This is the mission of the fragment cache.

For the most popular authors, readers not only read their messages but also visit their home pages, so it is also worthwhile to cache the personal home pages of these popular authors. This is the mission of the page cache.

The fragment cache and the page cache are also built with memcached.

Observed in actual operation, the fragment cache hit rate is 95%, while the page cache hit rate is only 40%. The page cache has a low hit rate, and since its contents are entire personal home pages it takes up a large amount of space. To prevent the page cache from competing with the fragment cache for space, Twitter engineers physically deployed the page cache on separate machines.

3. HTTP Accelerator

Having solved caching for the API channel, Twitter engineers turned to caching for the web channel. Their analysis showed that the pressure on the web channel comes mainly from search. Especially during breaking events, readers search for related messages regardless of whether they follow the messages' authors.

To relieve the search pressure, the search keywords and their corresponding search results are cached. The caching tool Twitter engineers chose is the open-source project Varnish [18].
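
The idea of caching search results keyed by the query can be sketched as follows. This is only an application-level analogue written in Python; Varnish does the equivalent at the HTTP level, and the normalization rule and TTL value here are assumptions.

```python
import time

_search_cache = {}        # normalized query -> (expires_at, results)
TTL_SECONDS = 60          # assumed freshness window for a hot query

def cached_search(query, backend_search):
    """Serve repeated searches for the same keyword from the cache."""
    key = " ".join(query.lower().split())        # normalize the keyword
    now = time.time()
    entry = _search_cache.get(key)
    if entry is not None and entry[0] > now:     # fresh hit: no back-end work
        return entry[1]
    results = backend_search(key)                # miss or stale: query the back end
    _search_cache[key] = (now + TTL_SECONDS, results)
    return results
```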

Interestingly, Varnish is usually deployed outside the web server, facing the Internet, so that when a user visits the site he actually hits Varnish and reads the desired content from it; a request is forwarded to the web server only when Varnish has not cached the corresponding content. Twitter's deployment, however, places Varnish behind the Apache web server [19]. The reason is that Twitter engineers found Varnish complicated to operate, and took this odd, conservative approach to reduce the chance that a Varnish crash would cripple the entire site.

The primary task of the Apache web server is to parse HTTP and distribute tasks. Different Mongrel Rails servers are responsible for different tasks, but most of them need to contact the vector cache and row cache to read data. How does a Rails server talk to memcached? Twitter engineers developed their own Rails plug-in (gem), called cache-money [20].
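
cache-money is described as a write-through caching library [20]. The sketch below illustrates the write-through idea itself; it is not cache-money's API, and the class and method names are invented for this example.

```python
class WriteThroughCache:
    """Writes go to the database and the cache together, so reads rarely miss."""

    def __init__(self, cache_client, database):
        self.cache = cache_client     # e.g. a memcached client
        self.db = database            # e.g. a thin MySQL wrapper

    def write(self, key, value):
        self.db.save(key, value)      # the database remains the source of truth
        self.cache.set(key, value)    # the cache is refreshed on the same write

    def read(self, key):
        value = self.cache.get(key)
        if value is None:             # miss: fall back to the database
            value = self.db.load(key)
            if value is not None:
                self.cache.set(key, value)
        return value
```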

Although Twitter did not disclose the Varnish hit rate, [17] claims that after Varnish was adopted the load on the whole twitter.com site dropped by 50%; see Figure 3.

Figure 3. Cache decreases twitter.com load by 50% [17]
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/0b3a_4061273900_2d91c94374_o.png

References:

[12] Alphabetical list of Twitter services and applications. (http://en.wikipedia.org/wiki/List_of_Twitter_services_and_applications)
[13] How flash changes the DBMS world. (http://hansolav.net/blog/content/binary/HowFlashMemory.pdf)
[14] Improving running components at Twitter. (http://qconlondon.com/london-2009/file?path=/qcon-london-2009/slides/evanweaver_improvingrunningcomponentsattwitter.pdf)
[15] memcached, a high-performance, general-purpose, distributed memory object caching system. (http://www.danga.com/memcached/)
[16] Updating Twitter without service disruptions. (http://gojko.net/2009/03/16/qcon-london-2009-upgrading-twitter-without-service-disruptions/)
[17] Fixing Twitter. (http://assets.en.oreilly.com/1/event/29/Fixing_twitter_improving_the_performance_and_scalability_of_the_World_s_Most_popular_micro-blogging_site_presentation%20presentation.pdf)
[18] Varnish, a high-performance HTTP accelerator. (http://varnish.projects.linpro.no/)
[19] How do you use Varnish at twitter.com? (http://projects.linpro.no/pipermail/varnish-dev/2009-February/000968.html)
[20] cache-money gem, an open-source write-through caching library. (http://github.com/nkallen/cache-money)

"4" Floods need quarantine

If caching is one big aspect of the Twitter architecture, the other is its message queue. Why use a message queue? The explanation in [14] is to isolate user requests from the related operations so as to smooth the traffic peaks ("move operations out of the synchronous request cycle, amortize load over time").

To understand the meaning of this, consider an example. On Tuesday, January 20, 2009, President Barack Obama took office and delivered his speech. As the first black president in American history, Obama's inauguration drew an enormous response, which led to a surge in Twitter traffic, as shown in Figure 4.

Figure 4. Twitter burst during the inauguration of Barack Obama, 1/20/2009, Tuesday
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/d594_4071879010_19fb519124_o.png

At the peak, the Twitter site received 350 new messages per second, and the peak lasted about 5 minutes. According to statistics, the average Twitter user is followed by 120 people; in other words, each of those 350 messages had to be delivered 120 times on average [16]. That means that during this 5-minute peak, the Twitter site needed to deliver 350 × 120 = 42,000 messages per second.

Facing such a flood peak, how can the site be kept from collapsing? The approach is to accept quickly, but postpone serving. During the dinner rush, for example, restaurants are often full; new customers are not shut out, but are asked to wait in the lounge. This is what [14] means by isolating user requests from the related operations in order to flatten traffic peaks.

How is this isolation implemented? When a user visits the Twitter site, it is the Apache web server that receives him. Apache does something very simple: it parses the user's request and forwards it to a Mongrel Rails server, which is responsible for the actual processing, while Apache frees itself to greet the next user. This avoids the embarrassment of users being unable to connect to the Twitter site at all during a flood peak.

Although Apache's job is simple, that does not mean Apache can host an unlimited number of users. The reason is that after Apache parses a user request and forwards it to the Mongrel server, the process that parsed the request is not released immediately; instead it goes into an idle loop, waiting for the Mongrel server to return its result. So the number of users Apache can host at the same time, or more precisely the number of concurrent connections Apache can hold, is in practice limited by the number of processes Apache can run. For the process mechanism inside Apache, see Figure 5, where each worker represents a process.

How many concurrent connections can Apache hold? The experiments in [22] put the figure at about 4,000; see Figure 6. How can Apache's concurrent capacity be improved? One idea is to stop tying the connection to the process: represent the connection as a data structure, keep it in memory, release the process, and only when the Mongrel server returns its result load the data structure back into a process.
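
Here is a minimal sketch of that idea, written with Python's asyncio purely for illustration (this is not how Twitter's front end was built). Each waiting connection is held as cheap coroutine state instead of a dedicated process, so one process can keep many connections parked while waiting for back-end results; the back end is simulated with a sleep.

```python
import asyncio

async def backend_result():
    await asyncio.sleep(5)                     # simulate a slow back-end (Mongrel) reply
    return b"updated home page\n"

async def handle_client(reader, writer):
    # Read the request line, then "park" the connection cheaply:
    # no process or thread is dedicated to it while it waits.
    await reader.readline()
    result = await backend_result()
    writer.write(b"HTTP/1.1 200 OK\r\n\r\n" + result)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8080)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```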

In fact, that is exactly what the Yaws web server [24] does [23], so it is not surprising that Yaws can hold more than 80,000 concurrent connections. Why then does Twitter use Apache instead of Yaws? Perhaps because Yaws is written in Erlang [25] and Twitter engineers were unfamiliar with that language ("but you need in-house Erlang experience" [17]).

Figure 5. Apache Web Server System architecture [21]
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/3b81_4071355801_db6c8cd6c0_o.png

Figure 6. Apache vs. Yaws.
The horizontal axis shows the number of parallel requests,
the vertical axis shows the throughput (KBytes/second).
The red curve is Yaws, running on NFS.
The blue one is Apache, running on NFS,
while the green one is also Apache but on a local file system.
Apache dies at about 4,000 parallel sessions,
while Yaws is still functioning at over 80,000 parallel connections. [22]
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/8fa1_4072077210_3c3a507a8a_o.jpg

References:

[14] Improving running components at Twitter. (http://qconlondon.com/london-2009/file?path=/qcon-london-2009/slides/evanweaver_improvingrunningcomponentsattwitter.pdf)
[16] Updating Twitter without service disruptions. (http://gojko.net/2009/03/16/qcon-london-2009-upgrading-twitter-without-service-disruptions/)
[17] Fixing Twitter. (http://assets.en.oreilly.com/1/event/29/Fixing_Twitter_Improving_the_performance_and_scalability_of_the_world_s_most_popular_Micro-blogging_site_presentation%20presentation.pdf)
[21] Apache system architecture. (http://www.fmc-modeling.org/download/publications/Groene_et_al_2002-architecture_recovery_of_apache.pdf)
[22] Apache vs. Yaws. (http://www.sics.se/~joe/apachevsyaws.html)
[23] Questioning the Apache and Yaws performance comparison. (http://www.javaeye.com/topic/107476)
[24] Yaws web server. (http://yaws.hyber.org/)
[25] Erlang programming language. (http://www.erlang.org/)

"5" Data flow and control flow

Letting Apache processes idle in a loop so as to accept users quickly but postpone serving them is, frankly, a stalling tactic. Its purpose is to keep users from receiving an "HTTP 503" error; a 503 error means "Service Unavailable", that is, the site refuses access.

When Yu the Great tamed the floods, the emphasis was on channeling rather than blocking. Real flood-fighting ability lies in two things: storing the flood and discharging it. Flood storage is easy to understand: build reservoirs, either one big reservoir or many small ones. Flood discharge involves two aspects: 1. drainage, 2. channels.

For the Twitter system, the large server cluster, especially the memcached caches, embodies the flood-storage capacity. The means of drainage are the Kestrel message queues, which are used to pass control instructions. The channels are the data-transmission paths between machines, especially the paths leading to memcached; what makes a channel good or bad is whether it is unobstructed.

Twitter's design and Yu the Great's method are alike not only in form but also in spirit. The flood-control measures of the Twitter system effectively control the data flow, ensuring that when a flood peak arrives the data can be dispersed in time to many machines, avoiding an excessive concentration of pressure that would paralyze the whole system.

In June 2009, Purewire crawled the Twitter site to track the following relationships between Twitter users, and estimated the total number of Twitter users at around 7,000,000 [26]. These 7 million do not include orphan users, who neither follow anyone nor are followed by anyone. Nor do they include isolated islands of users who only follow one another and have no contact with the outside world. Even with these orphan and island users added, the total number of Twitter users probably does not exceed 10 million.

As of March 2009, China Mobile had reached 470 million users [27]. If China Mobile's Fetion [28] and 139.com [29] services also want to move in Twitter's direction, how much flood-fighting capacity should they be designed for? Simply put, the current Twitter system would need to be scaled up at least 47 times. Hence the comment heard in the mobile internet industry: "What can be done in China can also be done in America; the converse does not hold."

In any case, as the saying goes, stones from other hills may serve to polish our jade. That is the purpose of studying Twitter's system architecture, and especially its flood-control mechanisms.

Figure 7. Twitter Internal flows
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/fe8f_4095392354_66bd4bcc30_o.png
Let us walk through a simple example of the Twitter site's internal workflow and examine how the Twitter system realizes the three elements of flood control: the reservoir, the drainage, and the channels (a short code sketch after the six steps condenses this flow).
Suppose two authors post messages on Twitter through their browsers, and one reader, also through a browser, visits the site to read what they wrote.

1. The author's browser establishes a connection to the web site, and the Apache web server assigns it a worker process. The author logs in; Twitter looks up the author's ID and stores it, as a cookie, in the header of the HTTP packets.

2. The browser uploads the author's new message (tweet). Apache receives it and forwards the message, together with the author ID, to a Mongrel Rails server. The Apache process then goes into its idle loop, waiting for Mongrel to update the author's home page with the new message.

3. Mongrel receives the message, assigns it a message ID, and caches the message ID together with the author ID in the vector memcached server.

At the same time, Mongrel asks vector memcached to find out which readers follow this author. If vector memcached has not cached this information, it automatically fetches the result from the MySQL database and caches it for future use. It then returns the reader IDs to Mongrel.

Next, Mongrel caches the message ID and the message body in the row memcached server.

4. Mongrel notifies the Kestrel message queue server to open a queue for each author and reader, the queue's name encoding the user's ID. If these queues already exist on the Kestrel server, they are reused.

For each message, Mongrel already knows from vector memcached which readers follow its author. Mongrel puts the ID of this message into each of those readers' queues, as well as into the author's own queue.

5. The same Mongrel server, or another Mongrel server, before processing an entry from a Kestrel queue, resolves the corresponding user ID from the queue's name; this user may be a reader or an author.

Mongrel then extracts the entries from the Kestrel queue one by one and parses the message ID contained in each. From the row memcached cache it finds the message body corresponding to that message ID.

At this point, Mongrel has both the user ID and the message body. Next, Mongrel updates that user's home page, adding the body of the new message.

6. Mongrel hands the updated author's home page to the Apache process waiting in its idle loop, and that process pushes the home page to the author's browser.

If the reader's browser has previously logged in to the Twitter site and established a connection, Apache has also assigned a process to the reader, and that process, too, is waiting in an idle loop. Mongrel hands the updated reader's home page to the corresponding process, which sends it on to the reader's browser.
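
To make steps 3 through 5 concrete, here is a compact sketch in Python of the publish path and the delivery worker. The dictionaries and queues merely stand in for vector memcached, row memcached, and Kestrel; all names are assumptions for illustration, not Twitter's code.

```python
import itertools
import queue
from collections import defaultdict

vector_cache = {}                        # author id -> list of follower ids
row_cache = {}                           # message id -> message body
user_queues = defaultdict(queue.Queue)   # user id -> queue of message ids (Kestrel stand-in)
_ids = itertools.count(1)

def publish(author_id, body):
    """Steps 3-4: cache the IDs and the body, then enqueue only the message ID."""
    message_id = next(_ids)
    row_cache[message_id] = body
    followers = vector_cache.setdefault(author_id, [])   # a miss would fall back to MySQL
    for user_id in followers + [author_id]:
        user_queues[user_id].put(message_id)
    return message_id

def deliver(user_id):
    """Step 5: drain the user's queue, look up the bodies, rebuild the home page."""
    page_lines = []
    q = user_queues[user_id]
    while not q.empty():
        page_lines.append(row_cache[q.get()])
    return "\n".join(page_lines)          # step 6 would push this page back via Apache

vector_cache[1] = [2, 3]                  # author 1 is followed by users 2 and 3
publish(1, "hello from author 1")
print(deliver(2))
```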

At first glance the process seems uncomplicated. Where, then, are the three flood-control elements, the reservoir, the drainage, and the channels, embodied? Where is the hidden beauty of Twitter's design? It is worth careful scrutiny.

References:

[26] Twitter user statistics by Purewire, June 2009. (http://www.nickburcher.com/2009/06/twitter-user-statistics-purewire-report.html)
[27] As of March 2009, China Mobile had reached 470 million users. (http://it.sohu.com/20090326/n263018002.shtml)
[28] China Mobile Fetion. (http://www.fetion.com.cn/)
[29] China Mobile 139.com. (http://www.139.com/)

"6" Flow peak and cloud computing

The previous chapter walked through Twitter's business logic in six steps, from a message being published to its being read. On the surface it seems dry, but chew it over carefully and each step, once unfolded, has a story behind it.

The American football championship game is nicknamed the Super Bowl. In the United States, the Super Bowl is roughly the equivalent of China's CCTV Spring Festival Gala. On Sunday, February 3, 2008, that year's Super Bowl was held as scheduled: the New York Giants against the New England Patriots, two evenly matched teams whose result was unpredictable. The game attracted nearly 100 million Americans to watch the live television broadcast [30].

Twitter expected that its traffic would certainly rise during the game; the fiercer the game, the higher the traffic. What Twitter could not foresee was how much the traffic would rise, especially at the peak.

According to the statistics in [31], during the Super Bowl the per-minute traffic averaged 40% above the day's average, and at the most intense moments of the game it exceeded 150%. Compared with the same period a week earlier, the quiet Sunday of January 27, 2008, the average fluctuation rose from 10% to 40%, and the highest fluctuation rose from 35% to more than 150%.

Figure 8. Twitter traffic during the Super Bowl, Sunday, February 3, 2008 [31]. The blue line represents the percentage of updates per minute during the Super Bowl, normalized to the average number of updates per minute during the rest of the day, with spikes annotated to show what was happening in the game. The green line represents the traffic of a "regular" Sunday, January 27, 2008.
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/d4fa_4085122087_970072e518_o.png

This shows how volatile Twitter's traffic is. If Twitter purchased enough equipment to withstand these fluctuations, especially the traffic peaks caused by major events, most of that equipment would sit idle most of the time, which is not economical. But without enough equipment, the Twitter system could collapse in the face of a major event, and users would be lost.

What to do? The answer is to replace buying with renting. Twitter itself purchases only enough equipment to handle ordinary, non-critical traffic pressure, and leases equipment from a cloud computing platform company to cope with the temporary traffic peaks brought by major events. The advantage of leased cloud computing is that computing resources are allocated in real time: when demand rises, more computing resources are allocated automatically.

Until early 2008, Twitter had been leasing Joyent's cloud computing platform. On the eve of the Super Bowl of February 3, 2008, Joyent promised Twitter that additional computing resources would be provided free of charge during the game to cope with the peak [32]. Oddly, with less than four days to go before the big game, Twitter suddenly stopped using Joyent's cloud platform at 10 PM on January 30 and turned to Netcraft [33,34].

Why Twitter abandoned Joyent for Netcraft, whether because of business entanglements or out of worry that Joyent's service was unreliable, remains a mystery.

Replacing buying with renting to deal with flood peaks is a good idea, but how to use the rented computing resources is a big question. Looking at [35], it is not hard to see that Twitter used most of the leased computing resources to add Apache web servers, and Apache is the front-most link in the entire Twitter system.

Why does Twitter rarely allocate leased computing resources to the Mongrel Rails servers, the memcached servers, the Varnish HTTP accelerators, and the other links in the chain? Before answering this question, let us review the six steps from writing to reading described in the previous chapter, "Data flow and control flow".

The first two of those six steps say that every browser visiting the Twitter site maintains a long-lived connection to it. The aim is that if someone publishes a new message, it can be pushed to his readers within 500 ms. The problem is that when there is no update, each long connection still occupies an Apache process, and that process just sits in an idle loop. So most Apache processes, for the vast majority of the time, are idling, which ties up a great deal of resources.

In fact, although the traffic handled by the Apache web servers accounts for only 10%-20% of Twitter's total traffic, they take up 50% of the resources of Twitter's entire server cluster [16]. From an outsider's point of view, Twitter is bound to oust Apache sooner or later. But for now, when Twitter allocates computing resources, it has to give priority to meeting Apache's needs.

Being forced into it is only one reason; on the other hand, it also shows that Twitter's engineers are quite confident about the other parts of their system.

In the fourth chapter, "Floods require isolation", we used an analogy: during the dinner rush, restaurants are often full, and new customers are not shut out but asked to wait in the lounge. In the Twitter system, Apache plays the role of the lounge. As long as the lounge is big enough, users can be held for a while; in other words, they do not receive HTTP 503 errors.

Having placated the users, the next job is to serve them efficiently. Efficient service is reflected in the remaining four of the six steps of Twitter's business process. Why is Twitter so confident about those four steps?

References:

[16] Updating Twitter without service disruptions. (http://gojko.net/2009/03/16/qcon-london-2009-upgrading-twitter-without-service-disruptions/)
[30] Giants and Patriots draw a 97.5 million US audience to the Super Bowl. (http://www.reuters.com/article/topNews/idUSN0420266320080204)
[31] Twitter traffic during Super Bowl 2008. (http://blog.twitter.com/2008/02/highlights-from-superbowl-sunday.html)
[32] Joyent provides Twitter free extra capacity during the Super Bowl 2008. (http://blog.twitter.com/2008/01/happy-happy-joyent.html)
[33] Twitter stopped using Joyent's cloud at 10 PM, January 30, 2008. (http://www.joyent.com/joyeurblog/2008/01/31/twitter-and-joyent-update/)
[34] The hasty divorce for Twitter and Joyent. (http://www.datacenterknowledge.com/archives/2008/01/31/hasty-divorce-for-twitter-joyent/)
[35] The usage of Netcraft by Twitter. (http://toolbar.netcraft.com/site_report?url=http://twitter.com)

"7" as a progressive not thorough

A "not thorough" way of working is, for architecture design, a step forward.

When a user's request from a browser reaches the Twitter back end, the first component to greet it is the Apache web server; the second to come on stage is the Mongrel Rails server. Mongrel handles both upload requests and download requests. Mongrel's business logic for uploading and downloading is very concise, but beneath that surface simplicity lies an unconventional design. This unconventional design is, of course, not the result of negligence; in fact, it is the most noteworthy highlight of the Twitter architecture.

Figure 9. Twitter Internal flows
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/fe8f_4095392354_66bd4bcc30_o.png
"Upload" refers to a user writing a new message and passing it to Twitter for publication; "download" refers to Twitter updating readers' home pages with the latest messages. Twitter's downloads are not driven by readers' own requests; rather, the Twitter server proactively pushes new content to readers. Look first at upload. Mongrel's upload logic is very concise, in two steps.

1. When Mongrel receives a new message, it assigns the message a new ID. The new message ID, together with the author ID, is then cached into the vector memcached server. Next, the message ID and the body are cached into the row memcached server. At the appropriate time, these two cached contents are automatically written into the MySQL database from vector memcached and row memcached.

2. Mongrel looks on the Kestrel message queue server for each reader's and the author's message queue, creating a new queue if one does not exist. Next, Mongrel pushes the ID of the new message into the queues of all of the author's online readers, and into the author's own queue.

Savor these two steps and you get the feeling that Mongrel's work is not thorough. First, it merely caches the message and its related IDs in vector memcached and row memcached; it takes no direct responsibility for the content in the MySQL database. Second, once the message ID has been thrown into the Kestrel message queue, the upload task is declared finished. Mongrel makes no attempt to inform the author that his message has been published, nor to confirm whether readers can already read the new message.

Why does Twitter adopt this unconventional, not-so-thorough way of working? Before answering, it helps to look at Mongrel's logic for processing downloads; connecting and comparing the upload and download logic aids understanding. Mongrel's download logic is also very simple, again in two steps.

1. Obtain the IDs of new messages from the Kestrel message queues of the author and of the readers, respectively.

2. Fetch the message bodies from the row memcached cache, and fetch the readers' and the author's home pages from page memcached. Update these home pages by adding the bodies of the new messages, and then push them, through Apache, to the readers and the author.

Comparing Mongrel's two pieces of logic for upload and download, it is not hard to see that each on its own is "not thorough"; only together do they form a complete process. This not-thorough working style reflects two ideas of separation in the Twitter architecture. First, a complete business process is split into several relatively independent pieces of work, each handled by different processes on the same machine, or even by different machines. Second, the collaboration between machines is reduced to the transfer of data and control commands, with an emphasis on separating the data flow from the control flow.

Splitting the business process is not Twitter's invention. In fact, the purpose of the three-tier architecture is precisely to split the process: the web server is responsible for HTTP parsing, the Mongrel server for the business logic, and the database for data storage. Following the same principle, the business logic handled by the Mongrel server can be split further.

In 1996, John Ousterhout, the inventor of the Tcl language and a former Berkeley professor, gave a keynote speech at the USENIX conference titled "Why threads are a bad idea (for most purposes)" [36]. In 2003, Eric Brewer, a professor at Berkeley, and his students published a paper titled "Why events are a bad idea (for high-concurrency servers)" [37]. Two Berkeley colleagues crossing swords under the same roof: what were they arguing about?

Multithreading, simply put, means that one thread is responsible for a complete business process from beginning to end, like a garage mechanic who repairs a whole car by himself. Event-driven design means that a complete business process is split into several independent pieces of work, each handled by one or more threads, like an assembly line in a car factory with multiple workstations, each staffed by one or more workers.

Clearly, Twitter's approach belongs to the event-driven camp. The benefit of event-driven design is the dynamic allocation of resources: when the workload of one particular stage becomes the bottleneck of the process, the architecture can easily mobilize more resources to relieve the pressure. On a single machine, the performance difference between multithreaded and event-driven designs is not obvious; but in a distributed system, the advantage of event-driven design shows itself vividly.

Twitter splits its business process in two ways. First, Mongrel is separated from the MySQL database: Mongrel does not operate on MySQL directly, but entrusts that work entirely to memcached. Second, the upload and download logic are separated from each other, and control instructions are passed between the two through the Kestrel queues.

In the debate between professors Ousterhout and Brewer, the question of separating the data flow from the control flow is not raised explicitly; an "event" there includes both the control signal and the data itself. But data is large and costly to transmit, whereas control signals are small and simple to transmit. Separating the data flow from the control flow can therefore further improve system efficiency.

In the Twitter system, the Kestrel message queues are dedicated to transmitting control signals, and these so-called control signals are in fact IDs. The data is the message bodies, which are stored in row memcached; who should handle a given message is announced by Kestrel.

Twitter completes the whole business process in an average of 500 ms, and can even push that down to 200-300 ms, which shows that the event-driven design of the Twitter distributed system has been a success.

The Kestrel message queue was developed by Twitter itself. There are many open-source message queue implementations; why did Twitter go to the trouble of developing its own instead of using an off-the-shelf free tool?

References:

[36] Why threads are a bad idea (for most purposes), 1996. (http://www.stanford.edu/class/cs240/readings/threads-bad-usenix96.pdf)
[37] Why events are a bad idea (for high-concurrency servers), 2003. (http://www.cs.berkeley.edu/~brewer/papers/threads-hotos-2003.pdf)

"8."

The design of Beijing's Xizhimen overpass is often criticized. Objectively speaking, an overpass that lets traffic extend in all directions has basically done its job; the main complaint is that its routes are too convoluted.

Of course, from the designers' point of view, they had to weigh constraints from all sides. Still, there are overpasses everywhere in the world, each with its own difficulties, yet one as confusing as Xizhimen is rare. So for the Xizhimen designers, the difficulties were real, but room for improvement was always there.

Figure 10. Beijing Xizhimen Overpass Road
Courtesy Http://alibuybuy-img1011.stor.sinaapp.com/2010/11/ef82_4113112287_86cfb1cffd_o.png

The architectural design of large web sites is the same. Following the traditional design saves worry and effort, but the price is the site's performance, and poor site performance means a poor user experience. Big web sites like Twitter are able to soar not only because their features meet the needs of the times; technical excellence is also a necessary guarantee of success.

For example, a data transfer channel is needed from Mongrel to memcached, or strictly speaking, a client library for communicating with the memcached server. Twitter engineers first implemented the channel in Ruby, later implemented a faster one in C, and then kept refining the details to steadily improve the efficiency of data transfer. This series of improvements took Twitter from handling 3.23 requests per second to handling 139.03 requests per second; see Figure 11. This data channel, now named libmemcached, is an open-source project [38].

Figure 11. Evolving from a Ruby memcached client to a C client with optimised hashing. These changes increased Twitter's throughput from 3.23 requests per second to 139.03 requests per second.
