2010 "Architect Solitaire" quiz--Yang Weihua vs Allen


Added by Zhj: Although this is from a few years ago, it remains a valuable reference.

Original: http://blog.zhaojie.me/2010/05/programmer-magazine-2010-5-architect.html

Last month, Programmer magazine approached me, hoping I could take part in the May "Architect Relay" column, and after a little hesitation I agreed. "Architect Relay" is a question-and-answer format: in each issue one person asks the questions and another answers them, and the person who answers becomes the questioner in the next issue. The architect asking the questions this time is Yang Weihua, technical manager of Sina Weibo. His questions cover language selection and architecture design, the choice of NoSQL storage solutions, the architecture of microblogging systems, and so on. Yang Weihua is a first-class expert in the domestic technical community, which made me all the more cautious in answering his questions. If any of these topics interest you, you are welcome to discuss them together.

Language selection and architecture design

Question: Many architects say that programming languages are not important and that architectural design ideas are what matter, yet most teams are heavily dependent on one language, and many project leaders even have a preference for a particular language and a distaste for others. How do you view the problem of programming language selection? At the same time, there is another phenomenon in the industry: many leading-edge technology researchers show great enthusiasm for emerging languages such as Erlang and Go. What is your view on introducing these new languages into a team or a project?

Answer: In my opinion, the choice of programming language is also crucial. Admittedly, the design ideas behind an architecture directly determine the quality of the system itself, and theoretically speaking, any Turing-complete language has the same fundamental "ability": any work can be accomplished in it. However, one thing cannot be overlooked: the language we use often affects, and even determines, the way we think.

To cite an extreme example: if people were still using assembly language for development, programmers' minds would probably never jump beyond the "subroutine" level of abstraction, and object-oriented design or functional programming would be almost out of the question. In the course of production and learning, people generate new needs and therefore build tools to support that learning and production, and "language" is one such tool. Only by using high-level languages can people effectively abstract the real world into something computers can work with.

Today there are a number of mainstream languages that can be used to build a project, and sometimes it does seem that different languages, such as Ruby and Python, are not all that distinct from each other. This is actually quite normal, because some (or even most) language features have no real impact on our "way of thinking".

For example, some friends who like Ruby feel that the language offers a very pleasant programming experience; its arrays, for instance, can be added and subtracted directly with the + and - operators (so that [1, 2, 3] - [2] yields [1, 3]).

Another example: swapping the values of two variables in Python takes only a single line of code, whereas in most languages an intermediate variable is needed.
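
The idiom is simply tuple assignment:

```python
# The well-known Python idiom: tuple assignment swaps two values without a temporary.
a, b = 1, 2
a, b = b, a
print(a, b)  # prints: 2 1
```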

But as far as I am concerned, these language features, although they do make programming somewhat more pleasant, for example by letting us write a little less code, do not change or demonstrate another way of thinking about programming. Syntax features of this kind can generally be approximated by writing a few simple helper functions (as with the Ruby array example above; a sketch follows below), and for programs that are not of the "write once and throw away" kind, such features are not an obvious advantage.
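
As a sketch of that point, a small hypothetical helper in Python recovers most of the convenience of Ruby-style array subtraction (the function name is invented purely for illustration):

```python
def without(items, excluded):
    """Hypothetical helper: return items with every element of excluded removed."""
    excluded = set(excluded)
    return [x for x in items if x not in excluded]

print(without([1, 2, 3, 2], [2]))  # [1, 3]
```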

In contrast, Ruby's mixin mechanism and Python's decorators are not just "syntactic sugar" but genuinely important language features, because they enable, or greatly simplify, some very useful programming patterns.
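
As an illustration of why a decorator is more than sugar, here is a minimal memoization sketch: the caching pattern is written once and then applied to any function with a single line.

```python
import functools

def memoize(fn):
    """Illustrative decorator: cache results keyed by the positional arguments."""
    cache = {}

    @functools.wraps(fn)
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]

    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # each fib(k) is computed only once thanks to the cache
```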

But sometimes the choice of language needs to be viewed from a higher level. Some new, emerging, or newly popular languages have more profound implications for system development. For example, when it comes to concurrent and parallel programming, the Erlang language cannot be overlooked. It provides a way to build lightweight computing units (called "processes" in Erlang) that communicate with each other by sending messages (message passing). This avoids the various problems that shared state is prone to, and its unique virtual machine implementation delivers very strong concurrency. However, Erlang's task scheduling has a particular characteristic: it allocates the same computing power to every "process". Thus, if there are 1,000 processes in the system, each process gets only about one thousandth of the computing power. This scheduling approach may not be appropriate for some types of applications, because as concurrency pressure rises it may cause throughput to drop, or even bring the service to a complete standstill (because every task times out). This characteristic of Erlang tends to have a direct impact on how a system is architected.

In some scenarios, however, we can also choose other languages. Scala, for example, also provides a message-passing concurrency mechanism based on the actor model, but it is not scheduled the same way as Erlang (in fact, the limitations of the platform make Erlang-style scheduling impossible to implement). Because Scala's actor model is built on top of the JVM, it can only prepare a thread pool in which threads continually handle message delivery and processing tasks, while additional tasks wait in a queue. As a result, Scala does not use a completely fair scheduling method like Erlang's, but it can prioritize the tasks that arrive first and ensure stable throughput.
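
A rough Python sketch of the arrangement just described, in which a fixed pool of worker threads drains a shared first-in-first-out queue of messages (this is only an analogy, not Scala's actual actor API):

```python
import queue
import threading

mailbox = queue.Queue()  # messages wait here in arrival (FIFO) order

def worker():
    while True:
        handler, message = mailbox.get()  # tasks that arrive first are served first
        handler(message)
        mailbox.task_done()

# a small, fixed thread pool stands in for the JVM thread pool
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

def print_handler(msg):
    print("processed:", msg)

for i in range(10):
    mailbox.put((print_handler, i))

mailbox.join()  # wait until every queued message has been processed
```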

So the two different scheduling mechanisms of Erlang and Scala determine that they fit different scenarios and system architectures. I believe that when Facebook used Erlang to build its chat platform, and Twitter used Scala to build its messaging middleware, they had this in mind.

Of course, the scheduling method is more a platform decision than a language decision. But on this particular issue, I think the two are in fact unified: Erlang is both a language and a platform, while Scala is one of the many languages on the JVM platform, yet it can implement the actor model's message-passing mechanism gracefully. I have always believed that a language feature will only be widely accepted if it is genuinely "useful". For example, can the actor model be implemented in the Java language? It can, but Java lacks Scala's flexible functional syntax and pattern matching, so building a usable, pleasant actor framework with it is out of the question. This is in fact a typical case of "language influencing the way we think".

Asynchrony and parallelism are indispensable factors in building systems today, and it is largely in this area that today's new languages are making their mark. Besides Scala and Erlang, F#, a new language on Microsoft's .NET platform, introduces the innovative feature of "computation expressions", which use a monad-like mechanism to greatly simplify the development of asynchronous programs. Clojure, on the JVM, introduces software transactional memory (STM). We can almost say that every emerging language today has its own unique "killer" feature that matters for system development; using such language support can significantly reduce the difficulty of building a system and increase its maintainability and robustness, in ways that are not easily achieved through architectural improvements alone.

Nowadays, "multi-language" development is becoming a trend, for example, in the various subsystems of Facebook, such as C, C++,erlang,java and other languages/platforms, and then use PHP as a binder to connect together. Twitter also uses Ruby,c,scala and Java without exception. Today's systems are becoming more complex, and virtually no tool can be fully adapted to the full development of the system, choosing the right language for the different components of the system, as well as the challenges that architects now have to face.

Unlike in the past, even after the platform used to build the system has been decided, such as the JVM, you will find there are still many language choices, and different languages do have different features that can bring particular advantages. For example, Ruby's dynamic nature makes it easy to write unit tests, while for the production part of the system you might choose a statically compiled language such as Scala, in order to use more complete static checking tools and ensure more stable product quality.

When it comes to selecting new languages, architects of different styles will adopt different strategies. A conservative architect, for example, might consider whether the language's community is active, whether language-related resources are rich, and whether programmers are easy to recruit before choosing a language or platform. This is a perfectly normal practice. But everything is a balance, and in some cases there is only one step between "conservative" and "outdated" or "complacent".

My personal style is relatively "radical": I am happy to absorb and try new things. My suggestion is that each technical team should pick out a few highly skilled and experienced members to absorb new developments widely and, at the right time, make proposals to the team and the production environment to improve the efficiency or quality of system development. Guided by these senior engineers, the impact of a new technology on the product can usually be anticipated better, and even if some problems do appear, they can be handled.

As far as I know, some of the more active technical teams, especially those behind some Web 2.0 products, already have good practices in this regard.

Selection of NoSQL storage solutions

Question: Many companies have been trending towards NoSQL lately, and many architects are weighing whether to move from relational databases to NoSQL. What advice can you give to architects who are making this choice?

Answer: My personal view is that NoSQL itself is a good thing, but the atmosphere around its use has become slightly distorted. Perhaps people have been "pent up" by relational storage for too long, and now the NoSQL movement is making everyone's blood boil with excitement.

NoSQL did not appear in order to completely replace relational databases; rather, it proposes storage methods that address the shortcomings of relational databases in performance and scalability. NoSQL should not be read as "no SQL"; the more appropriate reading is "not only SQL".

Looking at today's more successful NoSQL applications, it seems that, leaving aside Google with its data scale and accumulated resources, most other systems use NoSQL as a means of optimization rather than as the system's primary storage; their main storage is still a relational database such as MySQL. In practice, it is usually during the evolution of the architecture that the relational database is found to be the bottleneck, and NoSQL storage is then introduced to some extent to improve performance.

For example, not long ago SourceForge announced that it would introduce MongoDB into its system, and Twitter intends to start using Cassandra, which was created by Facebook. But the scale that SourceForge and Twitter currently support on top of relational databases is already a target that countless systems cannot match. What is more, Stack Overflow, which claims to be the world's largest programmers' website, uses only a single relational database as its storage backend.

After all, the performance of a relational database is not so bad as to be unacceptable, and the benefits of NoSQL only show once a certain scale is reached. Moreover, there are many places in a system that can be optimized besides the storage layer. The most traditional example is caching: a good caching mechanism can cut database accesses by more than 95%, which has a significant impact on system performance.
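
A minimal cache-aside sketch of that point (the FakeDb class and its query_user method are invented stand-ins for a real database layer):

```python
class FakeDb:
    """Invented stand-in for a real database layer."""
    def query_user(self, user_id):
        print("database hit for", user_id)
        return {"id": user_id, "name": "user%d" % user_id}

cache = {}

def load_user(user_id, db):
    """Cache-aside read: try memory first, fall back to the database."""
    if user_id in cache:
        return cache[user_id]          # served without touching the database
    user = db.query_user(user_id)      # the expensive call we want to avoid
    cache[user_id] = user              # later reads of this user hit the cache
    return user

db = FakeDb()
load_user(1, db)   # goes to the database
load_user(1, db)   # served from the cache
```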

Another inconvenience of using NoSQL storage today is the lack of tooling. I also use MongoDB in a project, and one very obvious impression is that working with MongoDB is considerably more troublesome than working with a relational database. For example, when accessing a relational database you can leverage off-the-shelf object mapping tools, which over the years have become very flexible and efficient and can handle most usage scenarios. With MongoDB, I seem to have gone back to the feeling of writing raw JDBC; on some platforms even a mature driver (with connection-pool support, for instance) has to be developed by hand. For an experienced developer it is not difficult to write "good enough" code of this kind, but it does affect the input-output ratio.
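
To illustrate that "back to JDBC" feeling, here is a sketch of the kind of hand-written mapping code this implies, assuming the Python pymongo driver and an invented Article class (a relational ORM would generate this mapping for you; the modern pymongo API is used here, the 2010-era driver looked different):

```python
from pymongo import MongoClient

class Article:
    """Invented domain class used only for illustration."""
    def __init__(self, title, body):
        self.title = title
        self.body = body

client = MongoClient("mongodb://localhost:27017")
articles = client.blog.articles  # database "blog", collection "articles"

def save(article):
    # the object-to-document mapping is written by hand, field by field
    articles.insert_one({"title": article.title, "body": article.body})

def find_by_title(title):
    doc = articles.find_one({"title": title})
    return Article(doc["title"], doc["body"]) if doc else None
```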

In addition, although NoSQL performance is high, it comes to some extent at the expense of guarantees about data integrity and consistency. Traditional relational databases put a lot of effort into this area; transaction mechanisms, for example, reduce performance but ensure data consistency. Today's NoSQL stores generally do not provide a similar mechanism (after all, the CAP theorem cannot be avoided), so when a group of related operations is interrupted (by an exception, say), it is easy to end up with data that is only half updated. Moreover, for performance reasons, today's NoSQL products almost invariably include a layer of caching and do not write new or updated data to disk immediately. Therefore, without a clustered environment, you are likely to lose data when something unexpected happens. MongoDB states this clearly: it does not put much emphasis on single-machine durability; its designers traded that for what they consider a more important property: performance. This means that once NoSQL is used as the primary means of storage, some peripheral measures usually have to follow, such as devoting extra effort to ensuring the eventual consistency of the data.

When we decide to adopt NoSQL storage, we must also choose a particular NoSQL product according to our business characteristics. NoSQL products currently fall into roughly four categories: Bigtable-style, key-value, document, and graph databases. Each has its own performance advantages and scope of application. For example, key-value stores support only a very limited set of queries, but thanks to their simple structure their performance and scalability are unmatched. Document databases such as MongoDB support a very flexible query approach and have a built-in map-reduce mechanism that lets you supply JavaScript scripts for special data processing and aggregation. A graph database such as Neo4j directly supports concepts like "nodes" and "(directed) relationships", and therefore offers very direct, natural, and efficient support for queries and traversals (such as shortest-path calculations) that are hard to handle or model in relational or document databases.

In short, what the architect chooses is not SQL or NoSQL itself, but whatever is "most appropriate".

Choosing directions and areas of concern

Question: Many architects like to study large-scale systems such as Google's and Facebook's, but many others think that most websites will never grow into "big sites" and that most engineers will never have the chance to build and maintain a GFS-like system. For most websites, spending time on the architecture of so-called "big sites" does not seem worthwhile. How do you think architects should choose the directions and areas they pay attention to?

Answer: My view on this is that while the sheer scale of systems like Google's and Facebook's may never be within reach of the vast majority of people, their experience and their measures can still teach us other things.

For example, map-reduce is a common concept and technique in functional programming, but Google combined it with other infrastructure such as GFS to create an extraordinarily powerful distributed computing technology. MapReduce itself, however, is a very simple thing (the complexity of Google's implementation lies mainly in GFS); it is not proprietary to Google, and we can take inspiration from it and use it elsewhere. For example, map-reduce support has been built into both MongoDB and CouchDB, and at last year's QCon Beijing conference, FreeWheel also shared the map-reduce computing mechanism it implemented in its advertising platform.
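
To make the point that the programming model itself is simple, here is the canonical word-count expressed as a plain map phase and reduce phase in Python, with no distribution at all:

```python
from collections import defaultdict

def map_phase(document):
    # emit a (word, 1) pair for every word, as in the classic word-count example
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # sum the counts for each word
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the map step emits pairs", "the reduce step sums the pairs"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # {'the': 3, 'map': 1, 'step': 2, ...}
```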

So even if you will never become a real giant, it is worth looking at the lessons the giants learned as they grew and drawing some inspiration from them, even if only as interesting stories that broaden your horizons. Sometimes what we need is just a casual hint.

The architectural difficulties of microblogging products

Question: Many Internet portals in China are currently building microblogging products. What do you think are the main difficulties in the technical architecture of a microblogging product?

Answer: In terms of complexity, the business of a microblogging product is relatively simple. I think its technical architecture has two key elements: message delivery and caching.

By its nature, a microblogging product is almost entirely a message distribution platform, so a good messaging mechanism is essential. When a user posts a message, it can be seen by many people; for a celebrity, being followed by hundreds of thousands of users is very common. It is almost impossible to expect every follower to see the message instantly, so in practice we usually build a message queue: the message is quickly dispatched to the queue for processing and eventually appears, one after another, on each follower's timeline. There is bound to be a delay, but that is acceptable for this kind of business. Obviously the delay cannot be too long, though; on Twitter the average delay is about 500 milliseconds, which is not especially short in absolute terms but is good enough. Twitter's approach is to use Scala's actor model and Apache MINA to write a distributed message transfer framework called Kestrel, which is fast, lightweight (less than 2,000 lines of code including comments), durable, and stable, but is not transactional and does not guarantee message ordering. So, in a sense, Kestrel is a message transfer mechanism that Twitter "customized" for its own needs.
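
A toy sketch of the fan-out idea described above: a new post is pushed onto a queue, and a worker later copies its ID onto every follower's timeline (the data structures are invented for illustration and have nothing to do with Kestrel's actual interface):

```python
import queue

post_queue = queue.Queue()                           # posts waiting to be fanned out
followers = {"celebrity": ["fan1", "fan2", "fan3"]}  # author -> follower IDs
timelines = {}                                       # follower -> list of message IDs

def publish(author, message_id):
    post_queue.put((author, message_id))  # the web request can return immediately

def fan_out_worker():
    while not post_queue.empty():
        author, message_id = post_queue.get()
        for follower in followers.get(author, []):
            timelines.setdefault(follower, []).append(message_id)

publish("celebrity", 42)
fan_out_worker()
print(timelines)  # {'fan1': [42], 'fan2': [42], 'fan3': [42]}
```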

The other important point is the caching mechanism that every large system needs. Some people say a cache is like a cure-all plaster: wherever it hurts, rub some on and it works. There is some truth to this. Twitter, for example, has designed a relatively complex multi-level caching mechanism and caches almost every IO-intensive spot: a vector cache for sequences of record IDs, and a row cache for the concrete content of records such as individual messages. In addition, because its API traffic is huge, even turning message content into the API's output form (perhaps just some string concatenation) carries a noticeable cost, so Twitter also designed a fragment cache for the API output form of messages. Finally, there is a page cache for some popular pages. Apart from the page cache, the other caches all have hit rates above 95%, which shows how important the caching mechanism is to the Twitter system. It is worth noting that both the vector cache and the row cache are write-through, which means that essentially all new data has a copy in the cache. As Twitter's Evan Weaver said at the QCon London 2009 conference: in Web 2.0, everything runs from memory.
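
A minimal write-through sketch of the row-cache idea, showing why essentially every new datum also has a copy in the cache (the DictStore class is an invented stand-in for durable storage):

```python
class DictStore:
    """Invented stand-in for durable storage."""
    def __init__(self):
        self.rows = {}
    def save(self, key, row):
        self.rows[key] = row
    def load(self, key):
        return self.rows[key]

row_cache = {}
store = DictStore()

def write_message(message_id, row):
    store.save(message_id, row)   # write to durable storage...
    row_cache[message_id] = row   # ...and to the cache at the same time (write-through)

def read_message(message_id):
    if message_id in row_cache:   # after write-through, reads almost always hit memory
        return row_cache[message_id]
    row = store.load(message_id)
    row_cache[message_id] = row
    return row

write_message(1, {"text": "hello"})
print(read_message(1))  # served from the row cache
```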

Finally, for a microblogging application, a sudden event can cause a surge in traffic, and how to withstand such a message bombardment is also an important issue. Twitter, for example, uses cloud computing to deal with this kind of problem, leasing additional computing resources when needed. But adding servers is only a hardware investment; whether the architecture can smoothly and fully utilize the new equipment also deserves attention. From this point of view, an efficient distributed messaging mechanism plays an important role: with an appropriate messaging mechanism, the message load can easily be balanced across many servers, so that even as pressure rises, response time grows only linearly while the system's throughput stays at a normal level.

Advice for aspiring architects

Question: Many software engineers who have worked for two or three years take "architect" as the goal when they talk about career planning. What advice can you give these growing engineers? How does one become an excellent architect?

Answer: To be honest, I do not know how to give effective, specific advice. I do not think "architect" is a job title or a set of responsibilities so much as a way of thinking. As long as you keep your horizons open and continually absorb and follow developments in both technology and business, then when your accumulation reaches the right moment you will be able to put forward your own ideas and suggestions for the system architecture, and at that point you are an architect.

In fact, every programmer can be an architect.

2010 "Architect Solitaire" quiz--Yang Weihua vs Allen (EXT)

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.