Three implementation models of concurrent servers in the Linux environment


There are many server design techniques. Divided by protocol, servers fall into TCP servers and UDP servers; divided by processing mode, they fall into loop (iterative) servers and concurrent servers.

1 Loop servers and concurrent servers

In network programs, many clients generally correspond to one server. Handling those client requests places special demands on the server-side program.

Currently the most commonly used server models are:

• Loop server: the server responds to only one client's request at a time

• Concurrent server: the server can respond to requests from multiple clients at the same time

1.1 Implementation of the UDP loop server

A UDP loop server reads one client request at a time from its socket and then returns the result to that client.

Because UDP is connectionless, no single client can monopolize the server. As long as request processing does not fall into an infinite loop, the server can satisfy every client's request.

The UDP loop server model is:

socket(...);
bind(...);

while (1)
{
    recvfrom(...);
    process(...);
    sendto(...);
}

1.2 Implementation of the TCP loop server

A TCP loop server accepts one client connection, processes all of that client's requests, and then disconnects. It can serve only one client at a time, and it can move on to the next connection only after every request from the current client has been satisfied. If one client occupies the server and does not release it, no other client can be served, so the loop model is rarely used for TCP servers.

The TCP Loop server model is:

socket(...);
bind(...);
listen(...);

while (1)
{
    accept(...);
    process(...);
    close(...);
}

2 Methods of concurrent server implementation

A good server is usually a concurrent server. Common concurrent-server design techniques include multi-process servers, multi-threaded servers, and I/O-multiplexing servers.

2.1 Multi-process concurrent servers

Client/server applications are among the most important in the Linux environment. In a multi-process server, whenever a client makes a request, the server uses a child process to handle it while the parent process continues waiting for requests from other clients. The advantage of this approach is that the server can serve each client promptly, which matters especially in interactive client/server systems. For a TCP server, a client's connection may not be closed immediately; it may stay open until the client has submitted further data. While the server-side child process blocks waiting for that data, the operating system can schedule the processes serving other clients, so performance improves greatly compared to the loop server.

TCP multi-process concurrent server

The idea of a TCP concurrent server is that each client's request is not handled directly by the main server process; instead, the server creates a child process to handle it.

socket(...);
bind(...);
listen(...);

while (1)
{
    accept(...);

    if (fork(...) == 0)
    {
        process(...);
        close(...);
        exit(...);
    }
    close(...);
}

2.2 Multi-threaded server

The multi-threaded server is an improvement on the multi-process server: because creating a process consumes substantial system resources, replacing processes with threads lets service handlers be created faster. By some accounts, creating a thread is 10 to 100 times faster than creating a process, which is why threads are called "lightweight" processes. Threads differ from processes in that all threads within a process share the same global memory, global variables, and so on; this sharing also introduces synchronization problems. The following is a multi-threaded server template:

socket(...);
bind(...);
listen(...);

while (1)
{
    accept(...);

    /* pthread_create() returns 0 on success, an error number otherwise */
    if (pthread_create(..., handler, ...) != 0)
        /* handle the error */;
}

/* in the handler thread: */
process(...);
close(...);
pthread_exit(...);

2.3 I/O multiplexing server

I/O multiplexing was devised to solve the problem of a process or thread blocking on a single I/O system call: with it, the process no longer blocks waiting for one particular I/O operation. It can also be used for concurrent server design, commonly implemented with the select() or poll() functions.

socket(...);
bind(...);
listen(...);

while (1)
{
    if (select(...) > 0)
    {
        if (FD_ISSET(...))
        {
            accept(...);
            process(...);
            close(...);
        }
    }
}

The skeletons above are all TCP server-side programs; TCP clients can share a common skeleton:

socket(...);
connect(...);
process(...);
close(...);

Transferred from: http://www.cnblogs.com/lchb/articles/2749354.html
