Two Server Architectures


Today I spent a long time on the phone discussing technical issues with some people from the Wolong studio in Chengdu. Their technology is solid, but we disagreed on several points.

First claim: a single thread cannot handle tens of thousands of concurrent connections.

This question is badly framed to begin with. Let's restate it: can nginx achieve tens of thousands of concurrent connections?

Both questions make the same mistake.

All concurrency exists to serve business processing. When the business logic is simple, such as serving static web pages or other non-I/O transactions, both a single thread and nginx can reach tens of thousands of concurrent connections.

Even so, this colleague could not imagine a single thread reaching tens of thousands of concurrent connections.

My earlier blog post on single-threaded servers demonstrated 20,000 concurrent connections. Moreover, nginx's single-process mode is effectively a single-threaded mode, and for static web pages it easily reaches tens of thousands of concurrent connections.
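The mechanism behind this is a single thread multiplexing many sockets through one readiness notifier (epoll on Linux, which nginx uses). Below is a minimal sketch of that idea using Python's `selectors` module; the function name `run_echo_loop` and the uppercase-echo "business logic" are my own illustration, not code from the post:

```python
import selectors
import socket

def run_echo_loop(server_socks, rounds):
    """One thread, one selector, many sockets: read a message, echo it back.

    No thread is ever blocked on a single connection; the selector reports
    whichever sockets are readable, and the loop services them in turn.
    """
    sel = selectors.DefaultSelector()
    for s in server_socks:
        s.setblocking(False)           # never block the event loop on one socket
        sel.register(s, selectors.EVENT_READ)
    handled = 0
    while handled < rounds:
        for key, _ in sel.select(timeout=1):
            data = key.fileobj.recv(4096)
            if data:
                # stand-in for simple, non-I/O business processing
                key.fileobj.sendall(data.upper())
                handled += 1
    sel.close()
    return handled
```

With simple per-event work like this, the number of connections one thread can service is bounded by CPU and file-descriptor limits, not by a thread count.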

The other disagreement was about server architecture.

Server architectures essentially fall into two types:

The single-threaded event model, which handles events and business transactions in the same thread and uses multiple processes for load balancing, as nginx does.

The multi-threaded model, which separates event handling from business transactions: one thread handles events, while worker threads process the transactions.
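The second model can be sketched as a queue between the event side and the workers. This is my own minimal illustration (the name `run_worker_pool` and the doubling "transaction" are assumptions, not the author's code):

```python
import queue
import threading

def run_worker_pool(tasks, num_workers):
    """Model 2 sketch: an event side enqueues work; worker threads process it."""
    work_q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = work_q.get()
            if item is None:          # sentinel: shut this worker down
                break
            res = item * 2            # stand-in for business processing
            with lock:                # results list is shared; guard it
                results.append(res)
            work_q.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for task in tasks:                # the "event thread" hands off work
        work_q.put(task)
    work_q.join()                     # wait for all transactions to finish
    for _ in threads:
        work_q.put(None)
    for t in threads:
        t.join()
    return sorted(results)
```

Note how even this toy version already needs a lock and a shutdown protocol; that synchronization overhead is exactly the cost discussed later in the post.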

Can you say which of these models is better or worse?

People who have worked on servers for a long time often slip into self-satisfaction. For example, a friend recently insisted that nginx is better than Apache, and he could talk at length about nginx's event model, cache, and architecture. So I asked: have you read the Apache code? How much do you actually know about Apache?

If some friends insist that nginx is better than Apache simply because nginx's event model is reusable, have you also looked into load handling, scalability, and dynamic module processing? Friends who disagree are welcome to discuss it in the comments.

Some people who routinely use the multi-threaded model feel it must be superior to a single thread. They consider a single-threaded server primitive and sneer at it, but their arguments are not convincing. I don't think people should stay at an established level of understanding; they should explore deeply rather than just go through the motions.

So naturally, Wolong's game server had to be a multi-threaded model.

Back to the topic. In an earlier architecture post I drew a diagram in which the earth-yellow pipes represent high-performance modules and the red pipes represent low-performance modules. That figure makes the bottleneck clear; now let's work through a numerical example:

Suppose the high-performance module can process 10,000 requests per second, and the low-performance module 1,000 per second.

To bring the server's overall throughput to 10,000 per second with the second model, the business layer needs at least 10 threads.

With the first model, load balancing spreads the work evenly across processes, and 10 processes will do.
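The arithmetic above is the same in both cases: the slow stage sets the worker count. A one-line sketch (the function name is my own):

```python
import math

def workers_needed(target_rps, per_worker_rps):
    """How many threads (model 2) or processes (model 1) the slow stage needs
    so that the pipeline sustains target_rps requests per second."""
    return math.ceil(target_rps / per_worker_rps)

# 10,000 req/s target, 1,000 req/s per low-performance worker -> 10 workers
```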

In that case the two models accomplish the same thing, so can we really rank one above the other? We can only ask which model the team is more familiar with. The second model also involves thread synchronization and mutual exclusion, which makes memory allocation and management more complex. Architecturally, we should try to avoid that, because it stretches both the development cycle and the test cycle, and it easily runs into complicated failure scenarios.

In fact, I have used both models. For the past few years I preferred the second one and designed several multi-threading solutions, such as a memory pool. I used to think both models were fine, but since adopting the first one in a recent project, I have found that a single thread has many advantages: the division of labor during development is much simpler, and the architecture logic, process control, capacity management, and disaster recovery are all much simpler too.

Seen this way, it is not unreasonable that nginx chose such a purely single-threaded, multi-process design.
