Analysis and comparison of several classic network server architecture models

Source: Internet
Author: User
Tags: epoll

Objective

For most programmers, "event-driven" is something they know from graphical interface programming. In fact, event-driven techniques are also widely used in network programming and are deployed at scale in high-throughput server programs such as HTTP servers and FTP servers. Compared with traditional network programming, an event-driven design can greatly reduce resource usage, increase service capacity, and improve network transmission efficiency.

For the server models discussed in this article, plenty of implementation code can be found online, so this article does not dwell on source-code listings and instead focuses on introducing and comparing the models. Implementation code is given for the server model that uses the libev event-driven library.

The thread/time diagrams in this article are intended only to show that threads do incur blocking delays on each IO operation; they do not guarantee that the proportions of the delays, or the exact ordering of the IO operations, are accurate. In addition, the interfaces mentioned in this article are the familiar Unix/Linux interfaces; Windows interfaces are not covered, and readers can look up the corresponding Windows interfaces themselves.

Blocking network programming interfaces

Almost every programmer's first contact with network programming starts with interfaces such as listen(), send(), and recv(). Using these interfaces, it is easy to build a server/client model.

Suppose we want to build a simple server program that provides a "one question, one answer" content service to a single client.

Figure 1. A simple "one question, one answer" server/client model

Notice that most socket interfaces are blocking. A blocking interface means that a system call (usually an IO call) does not return until it obtains a result or hits a timeout error; until then, the current thread remains blocked.

In fact, unless otherwise specified, almost all IO interfaces (including socket interfaces) are blocking. This poses a big problem for network programming: while a call such as send() is in progress, the thread is blocked and can neither perform any other work nor respond to any other network request. This is a real challenge when one program must serve multiple clients and multiple pieces of business logic. At this point, many programmers turn to multithreading to solve the problem.
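To make the blocking behavior concrete, here is a minimal sketch (added for illustration, not taken from the original article) of such a "one question, one answer" server. The port number and buffer size are arbitrary choices, and error handling is omitted.

#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8000);        /* illustrative port */

    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    /* accept() blocks until a client connects ... */
    int conn_fd = accept(listen_fd, NULL, NULL);

    char buf[1024];
    /* ... recv() blocks until the "question" arrives ... */
    ssize_t n = recv(conn_fd, buf, sizeof(buf), 0);
    if (n > 0) {
        /* ... and send() may block until the "answer" is written out. */
        send(conn_fd, buf, (size_t)n, 0);
    }

    close(conn_fd);
    close(listen_fd);
    return 0;
}

Every call in this sketch (accept(), recv(), send()) can block the program's only thread, which is exactly the limitation discussed next.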

Multithreaded server programs

For a network application that must serve multiple clients, the simplest solution is to use multiple threads (or multiple processes) on the server side. The purpose of multithreading (or multiprocessing) is to give each connection its own thread (or process), so that a block on any one connection does not affect the others.

Whether to use multiple processes or multiple threads does not follow a fixed pattern. Traditionally, a process is much more expensive than a thread, so if many clients must be served at the same time, multiple processes are not recommended; if a single service handler needs to consume a lot of CPU resources, for example for large-scale or long-running computation or file access, a process is safer. Typically, pthread_create() is used to create a new thread, and fork() to create a new process.

Suppose the requirements on the server/client model above are raised: the server must now provide the question-and-answer service to multiple clients at the same time. The following model results.

Figure 2. Multithreaded server model

In the thread/time diagram above, the main thread keeps waiting for client connection requests; whenever a connection arrives, it creates a new thread, and that new thread provides the same question-and-answer service for the connection.

Many beginners may not understand why the same socket can be accept()ed more than once. In fact, the designers of the socket API probably left this as a deliberate hint for multi-client scenarios: accept() returns a new socket. The prototype of the accept() interface is:

int accept(int s, struct sockaddr *addr, socklen_t *addrlen);

The input parameter s is the socket handle that has already been through socket(), bind(), and listen(). After bind() and listen() are executed, the operating system starts listening for connection requests on the specified port and adds each incoming request to the request queue of socket s. Calling accept() extracts the first connection from that queue and creates a new socket handle, similar to s, which is returned to the caller. This new socket handle is the input parameter for the subsequent read() and recv() calls. If the request queue is currently empty, accept() blocks until a request enters the queue.
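The thread-per-connection idea can be sketched as follows (an illustration added here, not code from the original article; it assumes listen_fd has already been prepared with socket(), bind(), and listen(), and handle_client() implements the question-and-answer logic):

#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/types.h>

static void *handle_client(void *arg)
{
    int conn_fd = *(int *)arg;
    free(arg);
    char buf[1024];
    ssize_t n;
    /* Blocking recv()/send() only stalls this thread, not the others. */
    while ((n = recv(conn_fd, buf, sizeof(buf), 0)) > 0)
        send(conn_fd, buf, (size_t)n, 0);
    close(conn_fd);
    return NULL;
}

void serve_forever(int listen_fd)
{
    for (;;) {
        int *conn_fd = malloc(sizeof(int));
        *conn_fd = accept(listen_fd, NULL, NULL);   /* new socket per client */
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, conn_fd);
        pthread_detach(tid);                        /* let it clean up itself */
    }
}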

The multithreaded server model seems to solve the requirement of providing question-and-answer services to multiple clients perfectly, but not quite. If the server must respond to hundreds or thousands of connection requests at the same time, then no matter whether threads or processes are used, they will occupy large amounts of system resources, reduce the system's responsiveness to the outside world, and the threads or processes themselves become more likely to hang.

Many programmers might consider using a "thread pool" or "connection pool". A thread pool reduces the frequency of creating and destroying threads by maintaining a reasonable number of threads and letting idle threads take on new tasks. A connection pool maintains a cache of connections, reusing existing connections as much as possible and reducing the frequency with which connections are created and closed. Both techniques reduce system overhead and are widely used in many large systems, such as WebSphere, Tomcat, and various databases.

However, thread pools and connection pools only mitigate, to some extent, the resource consumption caused by frequent IO interface calls. Moreover, any "pool" has an upper limit; when requests greatly exceed that limit, the pooled system's response to the outside world is not much better than having no pool at all. So a pool must be sized with the expected scale of requests in mind and adjusted accordingly.

The "thread pool" or "Connection pool" may alleviate some of the stress, but not all of them, in response to the thousands or even thousands of client requests that may appear in the previous example.

In short, the multithreaded model can conveniently and efficiently handle small-scale service requests, but in the face of large-scale requests it is not the best choice. The next section discusses how non-blocking interfaces can be used to try to solve this problem.

An event-driven server model using the select() interface

Most Unix/Linux systems support the select() function, which is used to detect state changes on multiple file handles. The prototype of the select() interface and its helper macros is given below:

FD_ZERO(fd_set *fds)
FD_SET(int fd, fd_set *fds)
FD_ISSET(int fd, fd_set *fds)
FD_CLR(int fd, fd_set *fds)
int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds,
           struct timeval *timeout);

Here the fd_set type can be understood simply as a bit mask that marks handles by bit: for example, to mark the handle whose value is 16 in an fd_set, the 16th bit of the fd_set is set to 1. Setting and testing specific bits is done with macros such as FD_SET and FD_ISSET. In select(), readfds, writefds, and exceptfds are both input and output parameters. If handle 16 is marked in the input readfds, select() detects whether handle 16 is readable. After select() returns, checking whether handle 16 is still marked in readfds tells you whether a "readable event" occurred. In addition, the user can set a timeout.
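A small sketch of how these macros fit together (added for illustration; the 5-second timeout is an arbitrary choice):

#include <sys/select.h>

/* Returns nonzero if handle fd (e.g. 16) becomes readable within 5 seconds. */
int probe_readable(int fd)
{
    fd_set readfds;
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);                       /* mark the handle to be probed */

    int ready = select(fd + 1, &readfds, NULL, NULL, &tv);
    return ready > 0 && FD_ISSET(fd, &readfds); /* check the output mark */
}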

The model from the earlier example that receives data from multiple clients can now be rebuilt as follows.

Figure 4. A model for receiving data using select()

The model above only describes the process of receiving data from multiple clients at the same time using the select() interface. Because select() can simultaneously detect readable, writable, and error states on multiple handles, it is easy to build a server that provides an independent question-and-answer service to multiple clients.

Figure 5. An event-driven server model using the select() interface

Note that a connect() operation on the client fires a "readable event" on the server side, so select() can also detect connect() behavior from clients.

In the model above, the most critical point is how to dynamically maintain the three parameters of select(): readfds, writefds, and exceptfds. As an input parameter, readfds should mark all handles on which "readable events" need to be probed, which always includes the "parent" handle that detects connect(); writefds and exceptfds should mark all handles on which "writable events" and "error events" need to be probed (marked with the FD_SET() macro).

As output parameters, readfds, writefds, and exceptfds hold the handles of all events that select() captured. The programmer needs to check all the marked bits (using the FD_ISSET() macro) to determine exactly which handles had events.

The model above mainly simulates a "one question, one answer" service flow. So if select() finds that a handle has caught a "readable event", the server program should perform recv() in time, prepare the data to send based on what was received, add the handle to writefds, and let the next select() probe for its "writable event". Likewise, if select() finds that a handle has caught a "writable event", the program should perform send() in time and prepare for the next "readable event" probe. Figure 6 depicts one execution cycle of this model.

Figure 6. One execution cycle

This model is characterized by the fact that each execution cycle detects one event or one set of events, and a specific event triggers a specific response. We can classify this model as an "event-driven model".
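The following condensed sketch shows what such an execution cycle can look like in code. It is an illustration added here, not the article's reference implementation: for brevity the reply is sent immediately after recv() instead of being deferred to a separate "writable event" probe, and error handling is omitted. It assumes listen_fd is already listening.

#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

void select_loop(int listen_fd)
{
    int client_fd[FD_SETSIZE];
    int nclients = 0;

    for (;;) {                                   /* one execution cycle per pass */
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);             /* the "parent" handle: probes connect() */
        int maxfd = listen_fd;

        for (int i = 0; i < nclients; i++) {
            FD_SET(client_fd[i], &readfds);
            if (client_fd[i] > maxfd)
                maxfd = client_fd[i];
        }

        select(maxfd + 1, &readfds, NULL, NULL, NULL);

        if (FD_ISSET(listen_fd, &readfds))       /* a client called connect() */
            client_fd[nclients++] = accept(listen_fd, NULL, NULL);

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(client_fd[i], &readfds))
                continue;
            char buf[1024];
            ssize_t n = recv(client_fd[i], buf, sizeof(buf), 0);
            if (n > 0) {
                send(client_fd[i], buf, (size_t)n, 0);   /* answer the question */
            } else {                             /* client closed the connection */
                close(client_fd[i]);
                client_fd[i--] = client_fd[--nclients];
            }
        }
    }
}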

Compared with the other models, the event-driven model using select() runs in a single thread (process), consumes few resources, does not consume too much CPU, and can still provide services to multiple clients. If you are trying to build a simple event-driven server program, this model has some reference value.

But the model still has a lot of problems.

First, the select() interface is not the best choice for implementing "event-driven": when the number of handles to be probed is large, the select() interface itself consumes a lot of time polling each handle. Many operating systems provide more efficient interfaces: Linux provides epoll, BSD provides kqueue, Solaris provides /dev/poll, and so on. If you need to implement a more efficient server program, an interface such as epoll is recommended. Unfortunately, the epoll-style interfaces offered by different operating systems differ greatly, so using such an interface to build a server with good cross-platform support can be difficult.
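For reference, a minimal sketch of the Linux epoll interface mentioned above (an illustration added here, not part of the original article):

#include <sys/epoll.h>

void epoll_example(int listen_fd)
{
    int epfd = epoll_create1(0);

    struct epoll_event ev = {0};
    ev.events  = EPOLLIN;                        /* "readable event" */
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event events[64];
    for (;;) {
        /* Unlike select(), epoll_wait() returns only the ready handles,
         * so there is no need to scan every registered descriptor. */
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            /* dispatch on fd: accept() on listen_fd, recv()/send() otherwise */
            (void)fd;
        }
    }
}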

Secondly, this model mixes event detection and event response together; once an event handler becomes large, it is catastrophic for the whole model. In the example below, the oversized handler for event 1 directly causes the handler for event 2 to be delayed or even left unexecuted, and greatly reduces the timeliness of event detection.

Figure 7. Impact of a large event handler on the event-driven model using select()

Fortunately, there are many efficient event-driven libraries that hide these difficulties. Common event-driven libraries include libevent and, as a libevent replacement, libev. These libraries choose the most appropriate event detection interface for the operating system they run on and incorporate techniques such as signals to support asynchronous responses, which makes them an ideal choice for building event-driven models. The next section describes how to use the libev library to replace the select() or epoll interface and achieve an efficient and stable server model.

A server model using the event-driven library libev

Libev is a high-performance event loop / event-driven library. Positioned as an alternative to libevent, its first version was released in November 2007. Libev's designers claim that libev is faster, smaller, and richer in features, and these advantages have been demonstrated in many benchmarks. Because of its good performance, many systems have started to use the libev library. This section describes how to use libev to implement a server that provides the question-and-answer service.

(In fact, there are many existing event loop / event-driven libraries, and the author does not intend to recommend the libev library in particular, but only to illustrate the convenience and benefits that the event-driven model brings to network server programming. Most event-driven libraries have interfaces similar to libev's; once the general principle is understood, the appropriate library can be chosen flexibly.)

Like the model in the previous section, libev also needs to cyclically detect whether events have occurred. The libev loop body is represented by an ev_loop structure and started with ev_loop():

void ev_loop(struct ev_loop *loop, int flags);

Libev supports eight types of events, including IO events. An IO event is described by an ev_io structure and initialized with the ev_io_init() function:

void ev_io_init(ev_io *io, callback, int fd, int events);

The initialization includes the callback function callback, the handle fd to be probed, and the events to be probed: EV_READ denotes a "readable event" and EV_WRITE a "writable event".

Now all the user needs to do is add or remove ev_io watchers from the ev_loop at the right time. Once an ev_io is added, subsequent iterations of the loop check whether the event it specifies has occurred; if the event is detected, the ev_loop automatically executes the ev_io's callback function callback(); if the ev_io is removed, the corresponding event is no longer detected.

Regardless of whether the ev_loop has been started, one or more ev_io watchers can be added to or removed from it; the interfaces for adding and removing them are ev_io_start() and ev_io_stop():

void ev_io_start(struct ev_loop *loop, ev_io *io);
void ev_io_stop(struct ev_loop *loop, ev_io *io);

From here we can easily arrive at the following "one question, one answer" server model. Because a mechanism for the server to actively terminate connections is not considered here, each connection can be maintained indefinitely, and the client is free to choose when to exit.

Figure 8. Server model using the Libev library

The model above can accept any number of connections and provide a completely independent question-and-answer service for each connection. With the event loop / event-driven interfaces provided by libev, this model can achieve a combination of high efficiency, low resource usage, good stability, and simple coding that the other models cannot offer.
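A sketch of this libev-based "one question, one answer" server is given below. It is a minimal illustration under a few assumptions: a reasonably recent libev (in libev 4.x the loop-starting call is named ev_run(), while older releases use ev_loop() as in the prototype above, which 4.x still accepts in its default compatibility mode); the port number is an arbitrary choice; and error handling is omitted.

#include <stdlib.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <ev.h>

/* Invoked by the loop whenever a client handle catches a "readable event". */
static void client_cb(struct ev_loop *loop, ev_io *w, int revents)
{
    char buf[1024];
    ssize_t n = recv(w->fd, buf, sizeof(buf), 0);
    if (n > 0) {
        send(w->fd, buf, (size_t)n, 0);       /* answer the question */
    } else {                                  /* client closed the connection */
        ev_io_stop(loop, w);                  /* stop probing this handle */
        close(w->fd);
        free(w);
    }
}

/* Invoked whenever the listening handle catches a "readable event",
 * i.e. whenever a client issues connect(). */
static void accept_cb(struct ev_loop *loop, ev_io *w, int revents)
{
    int conn_fd = accept(w->fd, NULL, NULL);
    ev_io *client = malloc(sizeof(ev_io));
    ev_io_init(client, client_cb, conn_fd, EV_READ);
    ev_io_start(loop, client);                /* watch the new connection */
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8000);       /* illustrative port */
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 16);

    struct ev_loop *loop = ev_default_loop(0);
    ev_io accept_watcher;
    ev_io_init(&accept_watcher, accept_cb, listen_fd, EV_READ);
    ev_io_start(loop, &accept_watcher);

    ev_loop(loop, 0);   /* start the event loop (ev_run() in libev 4.x) */
    return 0;
}

Each connection gets its own ev_io watcher, so all connections are served independently within a single thread, matching the model in Figure 8.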

Because traditional web servers, FTP servers, and other network applications share this "one question, one answer" communication logic, the libev-based "one question, one answer" model above is a useful reference for building similar server programs; likewise, for applications that require remote monitoring or remote control, the model above also offers a feasible implementation approach.

Summary

This article has focused on how to build a server program that provides a "one question, one answer" service. It discussed the model implemented with blocking socket interfaces, the multithreaded model, and the event-driven server model using the select() interface, up to the server model using the libev event-driven library. The article compared the advantages and disadvantages of the various models and concluded that the "event-driven model" makes it possible to implement more efficient and stable server programs. The models described here should provide a useful reference for readers' own network programming.

