Client-Server Programming Methods

Tags: unix domain socket

Client-Server programming methods

Client-server programming methods are described thoroughly in the first volume of Unix Network Programming. This article omits the coding details and presents each method through pseudocode, focusing on the ideas behind the various approaches.

The running example is the classic TCP echo program:
The client initiates a connection request and sends a line of data once connected; after receiving the server's reply, it writes the data to the terminal.
The server reads the data from the client and returns it unchanged.

Client pseudocode:

sockfd = socket(AF_INET, SOCK_STREAM, 0);
connect(sockfd);                       // connect to the server
// After the connection is established, read lines from the terminal and send
// them to the server; write each line received from the server to the terminal
while (fgets(sendline, MAXLINE, fileHandler) != NULL) {
    writen(sockfd, sendline, strlen(sendline));
    if (readline(sockfd, recvline, MAXLINE) == 0) {
        cout << "receive over!";
    }
    fputs(recvline, stdout);
}
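As a concrete sketch, here is a minimal, self-contained version of the echo exchange in C. To keep it runnable without a network peer, it uses a socketpair() in place of a real TCP connection (an assumption; the helper names echo_once and run_echo_demo are illustrative, not from the book):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* One round of the echo protocol over an already-connected socket:
 * send a line, then read the peer's reply back into `reply`.
 * Returns the number of bytes echoed, or -1 on error. */
static ssize_t echo_once(int fd, const char *line, char *reply, size_t cap)
{
    size_t len = strlen(line);
    if (write(fd, line, len) != (ssize_t)len)
        return -1;
    return read(fd, reply, cap - 1);
}

int run_echo_demo(void)
{
    int sv[2];
    /* a connected pair of stream sockets stands in for client<->server */
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    if (fork() == 0) {                 /* "server": echo bytes back verbatim */
        char buf[128];
        ssize_t n = read(sv[1], buf, sizeof(buf));
        if (n > 0)
            write(sv[1], buf, n);
        _exit(0);
    }

    char reply[128];
    ssize_t n = echo_once(sv[0], "hello\n", reply, sizeof(reply));
    wait(NULL);                        /* reap the child */
    if (n <= 0)
        return -1;
    reply[n] = '\0';
    return strcmp(reply, "hello\n") == 0 ? 0 : -1;
}
```

The real client in the pseudocode differs only in obtaining its descriptor from socket() plus connect() against a TCP server.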

The following sections describe server designs for handling multiple client requests.

Multi-process Processing

The server forks a new child process to handle each client request.

Process:

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
while (true) {
    // the server blocks here, waiting for a new client connection
    connfd = accept(listenfd);
    if (fork() == 0) {                 // child process
        close(listenfd);
        while ((n = read(connfd, buf, MAXLINE)) > 0)
            writen(connfd, buf);
        exit(0);
    }
    close(connfd);                     // parent closes its copy of the socket
}

This method is easy to develop, but to the operating system a process is an expensive resource, and dedicating one to every new client request carries high overhead.
It is suitable for applications with a small number of client requests.

Pre-forked process pool, no locking around accept

In the previous method, a process is created for each client and destroyed when the request completes;
this continual creation and termination of processes wastes system resources.
A process pool pre-allocates processes and reuses them, reducing the cost and latency of repeated process creation.

Advantage: eliminates the overhead of creating a process when a new client request arrives.
Disadvantage: the number of client requests must be estimated in advance in order to size the process pool.

Systems derived from the Berkeley kernel behave as follows:
all forked child processes call accept() on the same listening socket and sleep while no client request is pending;
when a new client request arrives, all of them are awakened, the kernel hands the connection to one process, and the rest go back to sleep (back into the pool).

This behavior lets the operating system control process allocation;
the kernel's scheduling distributes connection requests fairly evenly across the processes.

Process:

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
for (int i = 0; i < children; i++) {
    if (fork() == 0) {                 // child process
        while (true) {
            // every child blocks in accept() on the same listening socket,
            // waiting for a client request
            int connfd = accept(listenfd);
            while ((n = read(connfd, buf, MAXLINE)) > 0)
                writen(connfd, buf);
            close(connfd);
        }
    }
}

How is a process taken from the pool?
All processes block in accept(); when a connection request arrives, the kernel picks one of the waiting processes to handle it.

How is a process returned to the pool?
After serving the client, the child loops back and blocks in accept() again, waiting for a new connection request.

Note: having many processes block in accept() on the same socket can cause the "thundering herd" problem: although only one process gets the connection, all of them are awakened. Waking so many processes every time a connection becomes ready hurts performance.

Pre-forked process pool, locking around accept (file lock or thread lock)

The lock-free implementation above has a portability problem (it works only on kernels derived from Berkeley Unix) and suffers from the thundering-herd problem.
A more general approach is to serialize the calls to accept with a lock: instead of many processes blocking in accept(), they all block in the function that acquires the lock.

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
for (int i = 0; i < children; i++) {
    if (fork() == 0) {                 // child process
        while (true) {
            my_lock_wait();            // acquire the lock
            int connfd = accept(listenfd);
            my_lock_release();         // release the lock
            while ((n = read(connfd, buf, MAXLINE)) > 0)
                writen(connfd, buf);
            close(connfd);
        }
    }
}

The lock can be implemented with either file locking or a mutex (thread lock).

  • File locking is portable to all operating systems, but it involves file system operations and can be slow;
  • A mutex lock applies not only between threads but also between processes (when placed in shared memory).
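The second point can be made concrete: a pthread mutex initialized with the PTHREAD_PROCESS_SHARED attribute and placed in shared memory synchronizes separate processes. A minimal sketch (the anonymous-mmap layout and the demo function name are assumptions for illustration; MAP_ANONYMOUS is the Linux/BSD spelling):

```c
#define _DEFAULT_SOURCE
#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* shared between parent and child via an anonymous mapping */
struct shared { pthread_mutex_t lock; int counter; };

int run_shared_mutex_demo(void)
{
    struct shared *sh = mmap(NULL, sizeof(*sh), PROT_READ | PROT_WRITE,
                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (sh == MAP_FAILED)
        return -1;

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* this attribute is what makes the mutex usable across processes */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->lock, &attr);
    sh->counter = 0;

    if (fork() == 0) {                 /* child increments under the lock */
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&sh->lock);
            sh->counter++;
            pthread_mutex_unlock(&sh->lock);
        }
        _exit(0);
    }
    for (int i = 0; i < 1000; i++) {   /* parent does the same */
        pthread_mutex_lock(&sh->lock);
        sh->counter++;
        pthread_mutex_unlock(&sh->lock);
    }
    wait(NULL);
    return sh->counter;                /* 2000 if the lock held */
}
```

A pre-forked server would put my_lock_wait / my_lock_release on such a mutex instead of the counter shown here.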

For the locking code in detail, see Chapter 30 of Unix Network Programming.
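In the meantime, here is a minimal sketch of the file-lock variant of my_lock_wait / my_lock_release using POSIX fcntl record locks (the lock-file path and the my_lock_init helper are illustrative assumptions, not the book's exact code):

```c
#include <fcntl.h>
#include <unistd.h>

static int lock_fd = -1;

/* create the lock file once, before forking the children */
int my_lock_init(const char *pathname)
{
    lock_fd = open(pathname, O_CREAT | O_WRONLY, 0644);
    return lock_fd < 0 ? -1 : 0;
}

int my_lock_wait(void)          /* block until we own the write lock */
{
    struct flock fl = {0};
    fl.l_type = F_WRLCK;
    fl.l_whence = SEEK_SET;     /* l_start = l_len = 0: lock the whole file */
    return fcntl(lock_fd, F_SETLKW, &fl);
}

int my_lock_release(void)
{
    struct flock fl = {0};
    fl.l_type = F_UNLCK;
    fl.l_whence = SEEK_SET;
    return fcntl(lock_fd, F_SETLK, &fl);
}
```

Because fcntl locks are held per process, only one child at a time can pass my_lock_wait() and enter accept().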

Pre-forked process pool, descriptor passing

Unlike the previous designs, where each child calls accept itself, here the parent process alone calls accept() to receive client requests; once a connection is established, the connected descriptor is passed to a child process.

Process:

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
// pre-create the pool of child processes
for (i = 0; i < children; i++) {
    // create a byte-stream pipe (a Unix domain socket pair) for passing descriptors
    socketpair(AF_LOCAL, SOCK_STREAM, 0, sockfd);
    if (fork() == 0) {                 // child process
        // the child talks to the parent over the socket pair
        dup2(sockfd[1], STDERR_FILENO);
        close(listenfd);
        while (true) {
            // receive a connected descriptor from the parent
            if (read_fd(STDERR_FILENO, &connfd) == 0)
                continue;
            while ((n = read(connfd, buf, MAXLINE)) > 0)   // serve the client
                writen(connfd, buf);
            close(connfd);
            // tell the parent this child is done and can return to the pool
            write(STDERR_FILENO, "", 1);
        }
    }
}
while (true) {
    // watch the listening descriptor and all child descriptors
    select(maxfd + 1, &rset, NULL, NULL, NULL);
    if (FD_ISSET(listenfd, &rset)) {
        connfd = accept(listenfd);     // accept the client connection
        // find an idle child in the pool
        for (i = 0; i < children; i++)
            if (child_status[i] == 0) break;
        child_status[i] = 1;           // mark the child as busy
        write_fd(childfd[i], connfd);  // pass the descriptor to the child
        close(connfd);
    }
    // check the child descriptors; if one is readable, that child has
    // finished its request and goes back into the pool
    for (i = 0; i < children; i++) {
        if (FD_ISSET(childfd[i], &rset)) {
            if (read(childfd[i], &c, 1) > 0)
                child_status[i] = 0;
        }
    }
}

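The read_fd / write_fd helpers in the pseudocode hide the actual mechanics: a descriptor crosses a Unix domain socket as SCM_RIGHTS ancillary data via sendmsg/recvmsg. A minimal, self-contained sketch (helper names follow the pseudocode; the run_passing_demo wrapper and trimmed error handling are assumptions):

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

/* send descriptor `fd` across the Unix domain socket `chan` */
int write_fd(int chan, int fd)
{
    char byte = 0;
    struct iovec iov = { &byte, 1 };   /* must send at least one data byte */
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};
    msg.msg_iov = &iov;  msg.msg_iovlen = 1;
    msg.msg_control = cbuf;  msg.msg_controllen = sizeof(cbuf);

    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_SOCKET;
    cm->cmsg_type = SCM_RIGHTS;        /* "pass these descriptors" */
    cm->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cm), &fd, sizeof(int));
    return sendmsg(chan, &msg, 0) == 1 ? 0 : -1;
}

/* receive a descriptor from `chan`; returns it, or -1 on error */
int read_fd(int chan)
{
    char byte;
    struct iovec iov = { &byte, 1 };
    char cbuf[CMSG_SPACE(sizeof(int))];
    struct msghdr msg = {0};
    msg.msg_iov = &iov;  msg.msg_iovlen = 1;
    msg.msg_control = cbuf;  msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(chan, &msg, 0) <= 0)
        return -1;
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    int fd;
    memcpy(&fd, CMSG_DATA(cm), sizeof(int));
    return fd;                         /* a fresh descriptor in the receiver */
}

int run_passing_demo(void)
{
    int chan[2], p[2];
    socketpair(AF_UNIX, SOCK_STREAM, 0, chan);
    pipe(p);                           /* the descriptor we will pass */
    write_fd(chan[0], p[1]);           /* hand the pipe's write end across */
    int sent = read_fd(chan[1]);       /* ...and receive it */
    write(sent, "x", 1);               /* writing to it reaches the pipe */
    char c;
    return (read(p[0], &c, 1) == 1 && c == 'x') ? 0 : -1;
}
```

The received descriptor is a new entry in the receiver's descriptor table that refers to the same open file, which is why the reference count rises (see the summary below).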
Multithreading

Creating a thread for each client is much faster than creating a process for each client.

Process:

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
while (true) {
    connfd = accept(listenfd);
    // after the connection is established, create a new thread
    // to handle this client's requests
    pthread_create(&tid, NULL, &do_function, (void *)connfd);
}
--------------------
// per-client request handler (thread body)
void *do_function(void *connfd)
{
    pthread_detach(pthread_self());
    while ((n = read((int)connfd, buf, MAXLINE)) > 0)
        writen((int)connfd, buf);
    close((int)connfd);   // the thread, not the accept loop, closes the socket
}
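A runnable miniature of the same pattern, with a socketpair() standing in for an accepted TCP connection (an assumption; a real server would hand the thread the descriptor returned by accept):

```c
#include <pthread.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* thread body: echo until the peer closes, then close our end */
static void *do_function(void *arg)
{
    int connfd = (int)(long)arg;       /* fd smuggled through the void* */
    pthread_detach(pthread_self());
    char buf[128];
    ssize_t n;
    while ((n = read(connfd, buf, sizeof(buf))) > 0)
        write(connfd, buf, n);
    close(connfd);
    return NULL;
}

int run_thread_echo_demo(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return -1;

    pthread_t tid;
    /* hand one end to the "per-client" thread; keep the other as the client */
    pthread_create(&tid, NULL, do_function, (void *)(long)sv[1]);

    char reply[128];
    write(sv[0], "ping", 4);
    ssize_t n = read(sv[0], reply, sizeof(reply));
    close(sv[0]);
    return (n == 4 && memcmp(reply, "ping", 4) == 0) ? 0 : -1;
}
```

Because the thread is detached, nobody joins it; it cleans up its own descriptor when the client closes.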
Pre-created thread pool, each thread calls its own accept

Process:

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
// pre-create the thread pool; every thread uses the same listening descriptor
for (int i = 0; i < threadnum; i++) {
    pthread_create(&tid[i], NULL, &thread_function, NULL);
}
------------------
// per-client request handling
// the lock guarantees that at most one thread blocks in accept()
// waiting for a new client; all the other threads wait for the lock
void *thread_function(void *arg)
{
    while (true) {
        pthread_mutex_lock(&mlock);    // lock
        connfd = accept(listenfd);
        pthread_mutex_unlock(&mlock);  // unlock
        while ((n = read(connfd, buf, MAXLINE)) > 0)
            writen(connfd, buf);
        close(connfd);
    }
}

On a Unix system derived from the Berkeley kernel, we do not have to lock the call to accept.
Removing the locking step reduces user CPU time, because the locking is done by thread-library functions executing in user space. System CPU time, however, increases considerably: every arriving connection wakes up all the threads blocked in accept (the thundering-herd problem again, this time inside the kernel).
Since only one thread at a time can take the connection anyway, it is faster to perform that mutual exclusion ourselves with a mutex than to let the kernel wake and reschedule every thread.

There is no need for file locking here: threads within a single process can always achieve the same effect with a thread mutex, and file locking is slower.

Pre-created thread pool, main thread calls accept and passes the descriptor

Process:

There are two ways to wake a thread waiting on a condition: pthread_cond_signal() wakes one waiting thread (when several are waiting, one is chosen, typically in queue order), while pthread_cond_broadcast() wakes all of the waiting threads.

Note: in typical use, a condition variable must be paired with a mutex.
Before calling pthread_cond_wait(), the current thread must hold the lock (pthread_mutex_lock()). The mutex stays locked while the wait queue is updated and is released just before the thread is suspended; before pthread_cond_wait() returns with the condition satisfied, the mutex is re-locked, matching the lock taken before the call.

Server pseudocode:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
for (int i = 0; i < threadnum; i++) {
    pthread_create(&tid[i], NULL, &thread_function, NULL);
}
while (true) {
    connfd = accept(listenfd);
    pthread_mutex_lock(&mlock);        // lock
    childfd[iput] = connfd;            // put the descriptor into the array,
                                       // to be taken by whichever thread gets the lock
    if (++iput == MAX_THREAD_NUM) iput = 0;
    if (iput == iget) err_quit("thread num not enough!");
    pthread_cond_signal(&clifd_cond);  // wake one sleeping thread
    pthread_mutex_unlock(&mlock);      // unlock
}
------------------
void *thread_function(void *arg)
{
    while (true) {
        pthread_mutex_lock(&mlock);
        // while no descriptor is queued, sleep on the condition variable,
        // releasing mlock; when the condition is satisfied, mlock is
        // re-acquired before pthread_cond_wait returns
        while (iget == iput)
            pthread_cond_wait(&clifd_cond, &mlock);
        connfd = childfd[iget];
        if (++iget == MAX_THREAD_NUM) iget = 0;
        pthread_mutex_unlock(&mlock);  // unlock
        // serve the client
        while ((n = read(connfd, buf, MAXLINE)) > 0)
            writen(connfd, buf);
        close(connfd);
    }
}
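The iget/iput ring plus the condition variable is an ordinary bounded producer-consumer queue. Here is a self-contained sketch of just that mechanism, with integers standing in for connected descriptors (the put_fd/get_fd helpers and the -1 shutdown marker are assumptions for illustration; the names mirror the pseudocode):

```c
#include <pthread.h>

#define MAX_THREAD_NUM 8

static int childfd[MAX_THREAD_NUM];
static int iput, iget;
static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t clifd_cond = PTHREAD_COND_INITIALIZER;

/* main-thread side: enqueue a "descriptor" and wake one worker */
static void put_fd(int fd)
{
    pthread_mutex_lock(&mlock);
    childfd[iput] = fd;
    if (++iput == MAX_THREAD_NUM) iput = 0;
    pthread_cond_signal(&clifd_cond);
    pthread_mutex_unlock(&mlock);
}

/* worker side: sleep on the condition until a descriptor is queued */
static int get_fd(void)
{
    pthread_mutex_lock(&mlock);
    while (iget == iput)               /* loop guards against spurious wakeups */
        pthread_cond_wait(&clifd_cond, &mlock);
    int fd = childfd[iget];
    if (++iget == MAX_THREAD_NUM) iget = 0;
    pthread_mutex_unlock(&mlock);
    return fd;
}

static long total;                     /* summed by the worker thread */

static void *worker(void *arg)
{
    (void)arg;
    int fd;
    while ((fd = get_fd()) != -1)      /* -1 is our shutdown marker */
        total += fd;
    return NULL;
}

int run_queue_demo(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    for (int i = 1; i <= 5; i++)       /* pretend accept() returned fds 1..5 */
        put_fd(i);
    put_fd(-1);                        /* tell the worker to stop */
    pthread_join(tid, NULL);
    return (int)total;                 /* 1+2+3+4+5 = 15 */
}
```

The server replaces the summing with "echo on fd, then close(fd)"; everything around the queue is identical.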

Tests show that this version of the server is slower than the version in which each thread calls accept itself, because it requires both a mutex and a condition variable, while the previous version needs only a mutex.

How does descriptor passing between threads differ from descriptor passing between processes?
A descriptor opened in a process is visible to every thread in that process, with a reference count of one;
all threads refer to it by the same integer descriptor value.
Passing a descriptor between processes passes a reference to it: for example, a file open in two processes has a descriptor reference count of two.

Summary
  • When the system load is light, the traditional concurrent-server model that forks one child per client request is perfectly adequate;
  • Compared with the traditional fork-per-client approach, pre-creating a pool of child processes or threads can cut process-control CPU time by a factor of ten or more. The program becomes more complex, however: it must monitor the number of children and grow or shrink the pool as the client count changes dynamically;
  • Letting all children or threads call accept themselves is usually simpler and faster than having the parent process or main thread call accept and pass the connected descriptor to a child or thread;
  • Threads are usually faster than processes.
References

Unix Network Programming, Volume 1: The Sockets Networking API

Posted by: Large CC | 05APR, 2015
Blog: blog.me115.com
