Linux Network Programming--three implementation models of concurrent servers


There are many server design techniques. By the protocol used, servers can be divided into TCP servers and UDP servers; by the way requests are processed, into loop (iterative) servers and concurrent servers.


Loop server and concurrent server models

In a network program, many clients usually correspond to one server (many to one). Handling these client requests places special demands on the server-side program.


Currently the most commonly used server models are:

• Loop server: the server can respond to a request from only one client at a time

• Concurrent server: the server can respond to requests from multiple clients at the same time


Implementation of the UDP loop server

A UDP loop server reads one client's request from its socket at a time, processes it, and returns the result to that client.


Because UDP is connectionless, no single client can monopolize the server. As long as the processing of a request is not an endless loop and does not take too long, each client's requests are satisfied to a reasonable degree.


The UDP loop server model is:

    socket(...);                 /* create socket            */
    bind(...);                   /* bind                     */
    while (1) {
        recvfrom(...);           /* receive client request   */
        process(...);            /* process the request      */
        sendto(...);             /* send back the result     */
    }
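A minimal runnable sketch of this model in C is shown below; the port number 8000 and the echo-style "processing" are illustrative choices, not part of the original model:

    /* Minimal UDP loop (iterative) server: receive a datagram,
     * "process" it (here: echo it back), then wait for the next one. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int sockfd = socket(AF_INET, SOCK_DGRAM, 0);          /* create socket */
        if (sockfd < 0) { perror("socket"); exit(1); }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);                           /* example port */

        if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {  /* bind */
            perror("bind"); exit(1);
        }

        char buf[1024];
        while (1) {
            struct sockaddr_in cli;
            socklen_t clilen = sizeof(cli);
            ssize_t n = recvfrom(sockfd, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&cli, &clilen);  /* receive request */
            if (n < 0) { perror("recvfrom"); continue; }
            /* "process" the request: simply echo it back to the sender */
            sendto(sockfd, buf, n, 0, (struct sockaddr *)&cli, clilen);
        }
    }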


Implementation of the TCP loop server

A TCP loop server accepts one client connection, processes all of that client's requests, and then disconnects. It can serve only one client at a time and moves on to the next connection only after the current client's requests have all been handled. If one client occupies the server indefinitely, no other client can be served, so the loop server model is rarely used for TCP servers.


The TCP loop server model is:

    socket(...);                 /* create socket                          */
    bind(...);                   /* bind                                   */
    listen(...);                 /* listen                                 */
    while (1) {
        accept(...);             /* take the next client connection        */
        process(...);            /* process requests, return results       */
        close(...);              /* close the socket returned by accept()  */
    }
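A corresponding sketch in C, again with an illustrative port and a simple echo standing in for request processing:

    /* Minimal TCP loop (iterative) server: one client at a time. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);        /* create socket */
        if (listenfd < 0) { perror("socket"); exit(1); }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);

        if (bind(listenfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); exit(1); }
        if (listen(listenfd, 5) < 0) { perror("listen"); exit(1); }

        char buf[1024];
        while (1) {
            int connfd = accept(listenfd, NULL, NULL);         /* take the next connection */
            if (connfd < 0) { perror("accept"); continue; }
            ssize_t n;
            while ((n = read(connfd, buf, sizeof(buf))) > 0)   /* "process": echo until client closes */
                write(connfd, buf, n);
            close(connfd);                                     /* close the accept()ed socket */
        }
    }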


Three methods of implementing a concurrent server

A good server is typically a concurrent server, i.e. one that can respond to requests from multiple clients at the same time. Concurrent server design techniques generally include multi-process servers, multi-threaded servers, and I/O multiplexing servers.


Multi-process Concurrent server

Multi-process programming is widely used on Linux, and the network client/server is one of its most important applications. In a multi-process server, whenever a client makes a request, the server creates a child process to handle it while the parent process continues to wait for requests from other clients. The advantage of this approach is that the server can respond to each client promptly, which matters especially in interactive client/server systems. For a TCP server, a client connection may not be closed immediately; it may stay open until the client has submitted more data. During that time the child process serving it blocks, and the operating system can schedule the processes serving other clients, which greatly improves performance compared with the loop server.


TCP multi-process concurrent server
The idea of a TCP concurrent server is that the main server process does not handle each client's requests directly; instead, it creates a child process to handle them, as sketched below.
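A minimal sketch of this fork()-based approach; SIGCHLD is ignored so that finished children do not linger as zombies, and the echo loop again stands in for real per-client work:

    /* Multi-process TCP concurrent server: fork() one child per client. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <signal.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        signal(SIGCHLD, SIG_IGN);                /* let the kernel reap finished children */

        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);             /* example port */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 5);

        while (1) {
            int connfd = accept(listenfd, NULL, NULL);
            if (connfd < 0) continue;

            pid_t pid = fork();
            if (pid == 0) {                      /* child: serve this client only */
                close(listenfd);                 /* child does not need the listening socket */
                char buf[1024];
                ssize_t n;
                while ((n = read(connfd, buf, sizeof(buf))) > 0)
                    write(connfd, buf, n);       /* illustrative "processing": echo */
                close(connfd);
                exit(0);
            }
            close(connfd);                       /* parent: keep waiting for new clients */
        }
    }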


Multi-threaded server

The multi-threaded server is an improvement on the multi-process server: because creating a process consumes significant system resources, replacing processes with threads lets service handlers be created faster. By some measurements, creating a thread is 10 to 100 times faster than creating a process, which is why a thread is called a "lightweight" process. A thread differs from a process in that all threads within one process share the same global memory and global variables, and this sharing introduces synchronization problems.


The following is a multithreaded server template:
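A minimal sketch using POSIX threads: each accepted connection is handed to a detached worker thread, with an echo loop standing in for real request processing (compile with -pthread):

    /* Multi-threaded TCP concurrent server: one detached pthread per client. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <pthread.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    static void *handle_client(void *arg)
    {
        int connfd = *(int *)arg;
        free(arg);
        char buf[1024];
        ssize_t n;
        while ((n = read(connfd, buf, sizeof(buf))) > 0)
            write(connfd, buf, n);               /* illustrative "processing": echo */
        close(connfd);
        return NULL;
    }

    int main(void)
    {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);             /* example port */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 5);

        while (1) {
            int connfd = accept(listenfd, NULL, NULL);
            if (connfd < 0) continue;

            int *argfd = malloc(sizeof(int));    /* heap copy so each thread owns its fd */
            *argfd = connfd;
            pthread_t tid;
            pthread_create(&tid, NULL, handle_client, argfd);
            pthread_detach(tid);                 /* no join needed; resources freed on thread exit */
        }
    }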


I/O multiplexing server

The I/O multiplexing technique was designed to deal with a process or thread blocking in an I/O system call: it lets the process avoid blocking on any single I/O call. It can also be used for concurrent server design, commonly implemented with select() or poll().

    socket(...);                     /* create socket  */
    bind(...);                       /* bind           */
    listen(...);                     /* listen         */
    while (1) {
        if (select(...) > 0)         /* did a watched socket become readable?                  */
        {
            if (FD_ISSET(...))       /* listening socket readable: a new client is connecting  */
            {
                accept(...);         /* take the completed connection                          */
                process(...);        /* process the request, return the result                 */
            }
        }
        close(...);                  /* close the socket returned by accept()                  */
    }
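A minimal runnable sketch using select(): the listening socket and all connected sockets are placed in one fd_set, and whichever becomes readable is serviced (the echo is again only illustrative):

    /* I/O multiplexing TCP server: a single process watches the listening
     * socket and all connected sockets with select(). */
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    int main(void)
    {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8000);             /* example port */
        bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
        listen(listenfd, 5);

        int clients[FD_SETSIZE];
        int nclients = 0;

        while (1) {
            fd_set rset;
            FD_ZERO(&rset);
            FD_SET(listenfd, &rset);
            int maxfd = listenfd;
            for (int i = 0; i < nclients; i++) {
                FD_SET(clients[i], &rset);
                if (clients[i] > maxfd) maxfd = clients[i];
            }

            if (select(maxfd + 1, &rset, NULL, NULL, NULL) <= 0)
                continue;

            if (FD_ISSET(listenfd, &rset) && nclients < FD_SETSIZE) {
                int connfd = accept(listenfd, NULL, NULL);   /* new client connection */
                if (connfd >= 0)
                    clients[nclients++] = connfd;
            }

            for (int i = 0; i < nclients; i++) {
                if (!FD_ISSET(clients[i], &rset)) continue;
                char buf[1024];
                ssize_t n = read(clients[i], buf, sizeof(buf));
                if (n <= 0) {                                /* client closed or error */
                    close(clients[i]);
                    clients[i] = clients[--nclients];        /* remove from the set */
                } else {
                    write(clients[i], buf, n);               /* illustrative "processing": echo */
                }
            }
        }
    }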


Reference: http://blog.chinaunix.net

