Many people may still not know that Zhihu is the largest UGC (user-generated content) community on the Chinese Internet, second in scale only to Baidu Tieba and Douban. In the three years since its founding, Zhihu has grown from zero to more than 100 servers. It currently has more than 11 million registered users, more than 80 million monthly visitors, more than 220 million page views per month, and handles more than 2,500 dynamic requests almost every second.
At the ArchSummit Beijing 2014 conference, Zhihu co-founder and CTO Li Shenshen gave the team's first comprehensive technology talk, covering more than three years of work (slides available for download). This article is organized from the content of that lecture.
Initial Architecture Selection
When product development truly began in October 2010, the team initially had only two engineers, including Li Shenshen; by the launch in December 2010, there were four.
Zhihu's main development language is Python, chosen because it is simple and powerful, quick to pick up, efficient for development, and backed by an active community, and because the team members preferred it.
The web framework is Tornado. It supports asynchronous IO, which makes it well suited to real-time Comet applications, and it is simple, lightweight, and easy to learn, with the FriendFeed production case and Facebook's community support behind it. One characteristic of Zhihu as a product is that it wants to keep long-lived connections to the browser, to make real-time pushing of feeds and notifications easy, so Tornado was a good fit.
At first the whole team focused on developing product features, while everything else was handled in whatever way saved the most time and effort. This, of course, caused some problems later on.
The initial idea was to use cloud hosting to save costs. Zhihu's first server was a Linode host with 512MB of memory. After the site went online, however, the popularity of the closed beta exceeded expectations, and many users complained that the site was slow. Cross-border network latency was larger than imagined, and given the imbalance of domestic networks, users in different parts of the country had quite different experiences. This problem, combined with the domain name filing requirements of the time, pushed the team back onto the old road of buying its own machines.
Having bought machines and found a data center, a new problem appeared: the service went down frequently. The provider's machines had constant memory problems and kept restarting. Finally, after one more outage, the team built high availability for both the web tier and the database. Such is entrepreneurship: you never know what problems you will face when you wake up in the morning.
This is the architecture diagram at that stage: both the web tier and the database ran master-slave, and the image service was hosted on the cloud. On top of master-slave replication, read-write separation was added for better performance. To solve synchronization problems, a dedicated server was added to run offline scripts, so they would not delay responses to online traffic. In addition, the network equipment was replaced to improve throughput and latency, which increased overall network throughput about 20-fold.
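The read-write separation mentioned above boils down to routing queries by type: writes go to the master, reads are spread over replicas. A minimal sketch, with made-up hostnames and a deliberately naive SQL check:

```python
import random

class ReadWriteRouter:
    """Sketch of read/write separation: writes hit the master, reads are
    load-balanced across replicas. Hostnames here are invented."""

    def __init__(self, master, replicas):
        self.master = master
        self.replicas = replicas

    def route(self, sql: str) -> str:
        # A real router must also pin reads-after-writes to the master
        # to hide replication lag; this sketch ignores that.
        if sql.lstrip().upper().startswith("SELECT"):
            return random.choice(self.replicas)
        return self.master

router = ReadWriteRouter("db-master", ["db-slave-1", "db-slave-2"])
print(router.route("INSERT INTO answers VALUES (1, 'hi')"))  # → db-master
```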
In the first half of 2011, Zhihu became heavily dependent on Redis. Beyond the initial uses for queues and search, it was later used for caching as well. Single-machine storage became a bottleneck, so sharding with consistent hashing was introduced.
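Consistent hashing is what lets a Redis cluster grow without remapping most keys. A minimal ring implementation using only the standard library (node names and the replica count are illustrative):

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Sketch of consistent hashing for sharding keys across Redis nodes.
    Each node is placed on the ring many times ("virtual nodes") so keys
    spread evenly; adding a node only remaps a small fraction of keys."""

    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self._keys = []  # sorted hash positions
        self._ring = []  # (hash, node) pairs, parallel to _keys
        for node in nodes:
            self.add_node(node)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            h = self._hash(f"{node}:{i}")
            idx = bisect.bisect(self._keys, h)
            self._keys.insert(idx, h)
            self._ring.insert(idx, (h, node))

    def get_node(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash.
        h = self._hash(key)
        idx = bisect.bisect(self._keys, h) % len(self._keys)
        return self._ring[idx][1]

ring = ConsistentHashRing(["redis-1", "redis-2", "redis-3"])
print(ring.get_node("user:42:feed"))  # same key always maps to the same node
```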
The team believes in tools, and believes that tools improve efficiency. A tool is really a process; there is no best tool, only the most suitable one, and what is suitable keeps changing as the overall state and environment change. The tools Zhihu developed or adopted include Profiling (function-level request tracing, for analysis and tuning), Werkzeug (debugging), Puppet (configuration management), and Shipit (one-click deploy and rollback).
Log System
The first trigger was the invitation system. In the second half of 2011, Zhihu opened registration by application: users without an invitation code could fill in some information and apply for an account. User volume went up a step, and with it came spam accounts whose advertisements needed to be swept out. The requirements for a logging system were put on the agenda.
This logging system had to support distributed collection, centralized storage, real-time delivery, subscription, and simplicity. The team investigated some open-source systems: Scribe did not support subscription; Kafka is developed in Scala, and the team had little Scala experience; Flume was similar, and heavier. So the team chose to develop its own logging system, Kids (Kids Is Data Stream). As the name implies, Kids is used to assemble all kinds of data streams.
Kids borrows ideas from Scribe. On each server, Kids can be configured as either an agent or a server. An agent accepts messages directly from applications and, after aggregating them, pushes them on to the next agent or directly to the central server. When subscribing to a log, you can fetch it from the server or from an agent on an intermediate node.
The specifics are as follows:
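The agent's aggregate-and-forward behavior can be sketched roughly like this. The real Kids speaks the Redis protocol over the network; here the upstream is just a callable and the batching rule is invented, purely to show the shape of the pattern:

```python
from collections import deque

class LogAgent:
    """Hedged sketch of the Kids agent idea: accept messages locally,
    aggregate them, and flush batches upstream (to another agent or to
    the central server). This is a simplification, not Kids' real API."""

    def __init__(self, upstream, batch_size=3):
        self.upstream = upstream      # next agent or central server
        self.batch_size = batch_size
        self.buffer = deque()

    def log(self, topic, message):
        self.buffer.append((topic, message))
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        # Ship the aggregated batch upstream and start a fresh buffer.
        batch, self.buffer = list(self.buffer), deque()
        self.upstream(batch)

received = []
agent = LogAgent(upstream=received.extend, batch_size=2)
agent.log("web", "GET /question/1")
agent.log("web", "GET /feed")  # second message triggers a flush
print(len(received))           # → 2
```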
Zhihu also built a small web tool on top of Kids, Kids Explorer, which supports viewing online logs in real time and is now the most important tool for debugging production problems.
Kids has been open-sourced and published on GitHub.
Event-Driven architecture
Zhihu as a product has a notable characteristic: in the earliest days, adding an answer triggered only two follow-up operations, updating notifications and updating the feed. As features accumulated, however, the follow-ups multiplied: updating search indexes, updating counters, content review, and so on. Handled in the traditional way, this logic would keep growing and become very hard to maintain. The scenario is a natural fit for an event-driven design, so the team adjusted the whole system into an event-driven architecture.
The first thing needed is a message queue that can receive all kinds of events and that meets a high bar for consistency. To meet this demand, the team developed a small tool called Sink. When Sink receives a message, it saves and persists it locally before distributing it; if the machine goes down, it can fully recover on restart, guaranteeing that no message is lost. Messages are then put into task queues through the in-house Miller development framework. Sink is more like a serial message subscription service, but tasks need to run in parallel, which is where beanstalkd comes in handy, with its full-lifecycle management of tasks. The architecture looks like this:
For example, when a user answers a question, the system first writes the answer to MySQL, pushes a message into Sink, and then returns the response to the user. Sink sends the task on to beanstalkd, where a Miller worker picks it up and handles it.
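That flow can be sketched end to end with standard-library stand-ins: sqlite3 in place of MySQL, and a queue.Queue playing the combined role of Sink and beanstalkd. All names below are illustrative, not Zhihu's actual code:

```python
import json
import queue
import sqlite3

# In-memory stand-ins for the real components.
db = sqlite3.connect(":memory:")  # plays MySQL
db.execute("CREATE TABLE answers (question_id INTEGER, body TEXT)")
task_queue = queue.Queue()        # plays Sink + beanstalkd

def post_answer(question_id, body):
    # 1. Write the answer to the database first ...
    db.execute("INSERT INTO answers VALUES (?, ?)", (question_id, body))
    db.commit()
    # 2. ... then emit an event; every follow-up happens asynchronously.
    event = {"type": "answer_created", "question_id": question_id}
    task_queue.put(json.dumps(event))  # Sink would persist before distributing
    return "ok"  # the user gets a response immediately

def worker():
    # Workers pull tasks and run the side effects: notifications, feed
    # updates, search indexing, counters, content review, ...
    event = json.loads(task_queue.get())
    return f"update feed/notifications for question {event['question_id']}"

post_answer(42, "Tornado fits Comet-style apps.")
print(worker())  # → update feed/notifications for question 42
```

The key property is that the write path only does two things, persist and emit, so adding a new follow-up operation means adding a new consumer, not touching the write path.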
At launch there were about 10 messages per second, generating 70 tasks. Now there are about 100 events per second, generating 1,500 tasks, all carried by the current event-driven architecture.
Page Rendering Optimization
By 2013 Zhihu was serving millions of page views per day. Page rendering is compute-intensive, while fetching the data behind it is IO-intensive. The team therefore componentized its pages and upgraded the data-fetching mechanism: following the structure of the page's component tree, data is fetched top-down, level by level, and data already obtained at an upper level is never fetched again further down, so the number of data-fetching rounds is roughly the number of levels in the tree.
Based on this idea, Zhihu built its own template rendering and development framework, ZhihuNode.
After this series of improvements, page performance improved greatly: the question page dropped from 500ms to 150ms, and the feed page from 1s to 600ms.
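The level-by-level fetching idea can be sketched as follows. `Component`, `fetch_by_level`, and `load_batch` are illustrative names, not ZhihuNode's actual API:

```python
class Component:
    """A node in the page's component tree, declaring the data it needs."""

    def __init__(self, name, needs=(), children=()):
        self.name = name
        self.needs = set(needs)        # data keys this component requires
        self.children = list(children)

def fetch_by_level(root, load_batch):
    """Fetch data one tree level at a time. A key already fetched at an
    upper level is never requested again, so the number of batched load
    calls is at most the depth of the tree."""
    data, level = {}, [root]
    while level:
        missing = set().union(*(c.needs for c in level)) - data.keys()
        if missing:
            data.update(load_batch(missing))  # one batched IO per level
        level = [child for c in level for child in c.children]
    return data

calls = []
def load_batch(keys):
    calls.append(sorted(keys))
    return {k: f"<{k}>" for k in keys}

page = Component("page", {"user"}, [
    Component("feed", {"user", "feed"}),  # "user" is reused from level 1
    Component("sidebar", {"topics"}),
])
fetch_by_level(page, load_batch)
print(len(calls))  # → 2 batched fetches for a 2-level tree
```

Because fetches are batched per level rather than issued per component, the IO cost scales with tree depth instead of with the number of components on the page.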
Service-Oriented Architecture (SOA)
As Zhihu's features grew more and more complex, the whole system kept getting bigger. How did Zhihu move to a service-oriented architecture?
First, a basic RPC framework is required, and Zhihu's RPC framework has evolved through several versions.
The first version was Wish, which used a strictly defined serialization model based on Protocol Buffers. Its transport layer was STP, a very simple self-written transmission protocol running on TCP. Wish worked well at first, when only one or two services existed, but problems emerged as services multiplied: the code generated from the Protocol Buffers descriptions was verbose and ugly to keep in the codebase, and the strict definitions were inconvenient to work with. An engineer then developed a new RPC framework, Snow, which uses plain JSON for data serialization. But loose data definitions bring their own problem: when a service is upgraded and its data structures rewritten, it is hard to know which callers are affected and hard to notify them, so errors occur frequently. Hence a third RPC framework, whose author wanted to combine the strengths of the previous two: keep Snow's simplicity while adding a relatively strict serialization protocol. This version introduced Apache Avro. It also added a special mechanism making both the transport layer and the serialization layer pluggable: serialization can be JSON or Avro, and the transport can be STP or a binary protocol.
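The pluggable-layers idea separates how a message is encoded from how its bytes are framed on the wire. A minimal sketch using JSON and length-prefixed framing (class and function names are invented; the real framework's codecs include Avro and its transports include STP):

```python
import json
import struct

class JsonCodec:
    """One swappable serialization layer; an AvroCodec with the same two
    methods could be dropped in without touching the transport."""

    def dumps(self, obj):
        return json.dumps(obj).encode()

    def loads(self, data):
        return json.loads(data.decode())

def frame(payload: bytes) -> bytes:
    # Length-prefixed framing, the kind of thing a minimal STP-style
    # transport might do on top of TCP.
    return struct.pack(">I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length]

codec = JsonCodec()
wire = frame(codec.dumps({"method": "answer.get", "args": [42]}))
print(codec.loads(unframe(wire)))  # → {'method': 'answer.get', 'args': [42]}
```

Keeping the codec and the transport behind such narrow interfaces is what lets one RPC framework serve services that disagree about encoding.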
On top of the RPC framework came service registration and discovery: a caller only needs a service's name to find which machine it runs on. There are also corresponding tuning tools, including a tracing system Zhihu developed based on Zipkin.
By invocation relationship, services are divided into three tiers: the aggregation layer, the content layer, and the base layer. By attribute, they fall into three categories: data services, logic services, and channel services. Data services mainly provide storage for special data types, such as sharding services. Logic services cover CPU-intensive, compute-intensive operations, such as defining and parsing the answer format. Channel services are characterized by having no storage; they mostly do forwarding, Sink being one example.
This is the overall architecture after service orientation was introduced.
The presentation also introduced newer practices, such as the Zhihu columns product developed with AngularJS. The video of the talk will be posted on the website; please look forward to it.
From 0 to 100: Zhihu's Architecture Evolution