Client-Server Programming method

The first volume of "UNIX Network Programming" covers client-server programming methods in depth. This article omits the coding details and uses pseudo-code to introduce the main ideas behind each method.

The example is the classic TCP echo program:
The client initiates the connection request; once the connection is established it sends a line of data, and when the server's reply arrives it writes the data to the terminal.
The server receives the client's data and writes it back to the client.

Client Pseudo-code:

sockfd = socket(AF_INET, SOCK_STREAM, 0);
// establish a connection with the server
connect(sockfd);
// after the connection is established, read data from the terminal and send it to the server;
// write data received from the server back to the terminal
while (fgets(sendline, MAXLINE, fileHandler) != NULL) {
    writen(sockfd, sendline, strlen(sendline));
    if (readline(sockfd, recvline, MAXLINE) == 0) {
        cout << "receive over!";
    }
    fputs(recvline, stdout);
}

The following sections describe the server-side development paradigms for handling multiple client requests.

Multi-process processing

For each client request, the server uses fork() to create a new process to handle it.

Processing Flow:

    1. After the main process binds the IP address and port, it uses accept() to wait for new client requests;
    2. Each time a new client request arrives, a new child process is created to handle that specific request;
    3. The child process exits once it has finished handling the client request;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
while (true) {
    // the server blocks here waiting for a new client connection
    connfd = accept(listenfd);
    if (fork() == 0) { // child process
        close(listenfd);
        while ((n = read(connfd, buf, MAXLINE)) > 0) {
            writen(connfd, buf);
        }
        close(connfd);
        exit(0);
    }
    close(connfd);
}

This method is simple to develop, but to the operating system a process is an expensive resource, and creating one for every new client request is costly.
It is suitable for applications where the number of client requests is small.

Pre-allocated process pool, accept without lock protection

In the previous method a process is created for each client request and released afterwards; continuously creating and terminating processes wastes system resources.
Using a process pool to pre-allocate processes and reuse them reduces the overhead and latency of repeatedly creating processes.

Pros: eliminates the overhead of creating a process when a new client request arrives;
Cons: the number of client requests must be estimated in advance (to determine the size of the process pool);

Systems derived from the Berkeley kernel have the following feature:
all of the forked child processes call accept() on the same listening socket and go to sleep while there is no client request;
when a new client request arrives, all of the children are awakened; the kernel chooses one of them to handle the request, and the remaining processes go back to sleep (back into the process pool);

This feature lets the operating system control how connections are assigned to processes;
the kernel's scheduling algorithm distributes the connection requests roughly evenly across the processes;

Processing Flow:

    1. The main process pre-allocates the process pool, and all child processes block on the accept() call;
    2. When a new client request arrives, the operating system wakes up all of the processes blocked on accept() and selects one of them to establish the connection;
    3. The selected child process handles the client request, while the other child processes go back to sleep;
    4. Once the child process has finished, it blocks on accept() again;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
for (int i = 0; i < children; i++) {
    if (fork() == 0) { // child process
        while (true) {
            // all child processes wait for client requests on the same listening socket
            int connfd = accept(listenfd);
            // once the connection is established, handle the client request,
            // then close the connection
            while ((n = read(connfd, buf, MAXLINE)) > 0) {
                writen(connfd, buf);
            }
            close(connfd);
        }
    }
}

How is a process taken out of the process pool?
All processes block in accept(); when a connection request arrives, the kernel chooses one process from among all the waiting processes;

How is a process put back into the pool?
After handling the client request, the child process loops back and blocks in accept() again, waiting for a new connection request;

Note: having multiple processes blocked in accept() produces the "thundering herd" problem: although only one process will get the connection, all of the processes are awakened; waking so many processes for a single ready connection hurts performance;

Pre-allocated process pool, accept with lock protection (file lock or thread lock)

The lock-free implementation above has portability problems (it works only on kernels derived from the Berkeley code) and the thundering-herd problem.
The more general approach is to put a lock around accept(): instead of all processes blocking in the accept() call, they all block in the function that acquires the lock;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
for (int i = 0; i < children; i++) {
    if (fork() == 0) { // child process
        while (true) {
            my_lock_wait();    // acquire the lock
            int connfd = accept(listenfd);
            my_lock_release(); // release the lock
            while ((n = read(connfd, buf, MAXLINE)) > 0) {
                writen(connfd, buf);
            }
            close(connfd);
        }
    }
}

The lock can be a file lock or a thread lock (mutex);

    • File locking can be ported to all operating systems, but it goes through the file system and may be slower;
    • A thread lock (mutex) can be used not only between the threads of one process but also between different processes, by placing the mutex in shared memory with the process-shared attribute set;

For the details of the locking code, see chapter 30 of "UNIX Network Programming";
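
As an illustration, here is a minimal sketch of how my_lock_wait() and my_lock_release() could be implemented with POSIX record locking via fcntl(). It follows the general file-locking approach described in the book, but the function bodies below are an assumption-laden sketch rather than the book's code; the lock-file path is whatever the caller supplies.

/* Minimal sketch (not the book's code): serialize accept() across the
 * preforked children with a POSIX record lock on a scratch file. */
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

static int lock_fd = -1;

void my_lock_init(const char *pathname)   /* called once by the parent before fork() */
{
    lock_fd = open(pathname, O_CREAT | O_WRONLY, 0644);
    if (lock_fd < 0)
        exit(1);
    unlink(pathname);   /* the descriptor (and its locks) stay valid after unlink */
}

void my_lock_wait(void)                   /* block until this process owns the lock */
{
    struct flock lock;
    lock.l_type = F_WRLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = 0;
    lock.l_len = 0;                       /* write-lock the whole file */
    while (fcntl(lock_fd, F_SETLKW, &lock) < 0) {
        if (errno != EINTR)               /* retry only when interrupted by a signal */
            exit(1);
    }
}

void my_lock_release(void)
{
    struct flock lock;
    lock.l_type = F_UNLCK;
    lock.l_whence = SEEK_SET;
    lock.l_start = 0;
    lock.l_len = 0;
    if (fcntl(lock_fd, F_SETLK, &lock) < 0)
        exit(1);
}

The parent calls my_lock_init() once before forking; every child inherits lock_fd, and the write lock on it serializes the accept() calls across processes. A process-shared pthread mutex placed in shared memory is the usual faster alternative.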

Pre-allocated process pool, passing descriptors

Unlike the methods above, where every child process calls accept() itself, in this method the parent process alone accepts client requests and, once the connection is established, passes the connected descriptor to a child process.

Processing Flow:

    1. The main process blocks in accept() waiting for client requests, while every child process waits for a connected descriptor to arrive on its pipe;
    2. When a new client request arrives, the main process accepts the connection, takes a child out of the process pool, and passes the connected descriptor to that child through a byte-stream pipe;
    3. The child process receives the connected descriptor and handles the client request; after it finishes, it sends a single byte (whose content is meaningless) to the parent process to signal that its task is complete;
    4. The parent process receives the child's single byte and puts the child back into the process pool;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
// pre-create the child process pool
for (int i = 0; i < children; i++) {
    // use a UNIX domain socket pair as a byte-stream pipe for passing descriptors
    socketpair(AF_LOCAL, SOCK_STREAM, 0, sockfd);
    if (fork() == 0) { // pre-created child process
        // the child talks to the parent through its end of the pipe
        dup2(sockfd[1], STDERR_FILENO);
        close(listenfd);
        while (true) {
            // receive a connected descriptor from the parent
            if (read_fd(STDERR_FILENO, &connfd) == 0) {
                continue;
            }
            // handle the client request
            while ((n = read(connfd, buf, MAXLINE)) > 0) {
                writen(connfd, buf);
            }
            close(connfd);
            // notify the parent that this child has finished and can go back to the pool
            write(STDERR_FILENO, "", 1);
        }
    }
    childfd[i] = sockfd[0]; // the parent keeps its end of each child's pipe
}
while (true) {
    // wait for activity on the listening socket and on all child pipe descriptors
    // (rset is rebuilt with listenfd and every childfd[i] before each call)
    select(maxfd + 1, &rset, NULL, NULL, NULL);
    if (FD_ISSET(listenfd, &rset)) { // a client connection request has arrived
        connfd = accept(listenfd);
        // find an idle child process in the pool
        int i;
        for (i = 0; i < children; i++) {
            if (child_status[i] == 0) break;
        }
        child_status[i] = 1;          // the child is taken out of the pool
        write_fd(childfd[i], connfd); // pass the connected descriptor to the child
        close(connfd);
    }
    // check the child pipes: readable data means that child has finished its
    // request and can be returned to the pool
    for (int i = 0; i < children; i++) {
        if (FD_ISSET(childfd[i], &rset)) {
            if (read(childfd[i], &c, 1) > 0) {
                child_status[i] = 0;
            }
        }
    }
}

Multithreaded processing

Creating a thread for each client is much faster than creating a process for each client.

Processing Flow:

    1. The main thread blocks in accept() waiting for client requests;
    2. When a new client request arrives, the main thread establishes the connection and then creates a new thread, passing it the connected descriptor;
    3. The child thread handles the client request and then terminates;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
while (true) {
    connfd = accept(listenfd);
    // after the connection is established, create a new thread to handle this client request
    pthread_create(&tid, NULL, &do_function, (void *)connfd);
}
--------------------
// the client-request handler (body of the child thread)
void *do_function(void *arg)
{
    int connfd = (int)arg;
    pthread_detach(pthread_self());
    while ((n = read(connfd, buf, MAXLINE)) > 0) {
        writen(connfd, buf);
    }
    close(connfd);
    return NULL;
}

Pre-created thread pool, each thread calling accept itself

Processing Flow:

    1. The main thread pre-creates the thread pool; the first thread created acquires the lock and blocks in accept(), while the other child threads block waiting for the thread lock;
    2. When a client request arrives, the first child thread establishes the connection, releases the lock, and then handles the request; when it finishes it returns to the thread pool and waits for the lock again;
    3. After the first child thread releases the lock, one of the threads waiting in the pool acquires it and blocks in accept() waiting for client requests;

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
// pre-create the thread pool; every thread uses the shared listening descriptor
for (int i = 0; i < threadnum; i++) {
    pthread_create(&tid[i], NULL, &thread_function, NULL);
}
--------------------
// client-request handling:
// the lock guarantees that at any moment only one thread is blocked in accept()
// waiting for a new client; all the other threads are waiting for the lock
void *thread_function(void *arg)
{
    while (true) {
        pthread_mutex_lock(&mlock);    // acquire the lock
        connfd = accept(listenfd);
        pthread_mutex_unlock(&mlock);  // release the lock
        while ((n = read(connfd, buf, MAXLINE)) > 0) {
            writen(connfd, buf);
        }
        close(connfd);
    }
}

When using a Unix system derived from the Berkeley kernel, we do not have to lock the call to accept().
If we remove the two locking steps, we find that the user CPU time drops (because the locking is done by the thread functions in user space), while the system CPU time rises considerably (every arriving connection wakes up all the threads blocked in accept(), i.e. the thundering-herd problem occurs inside the kernel).
Since the threads need some form of mutual exclusion anyway, it is faster to do it ourselves than to let the kernel arbitrate among all the awakened threads;

There is no need to use file locking here, because multiple threads within a single process can always achieve the same effect with a thread mutex, which is faster than a file lock.

Pre-created thread pool, main thread accepts and passes the descriptor

Processing Flow:

    1. The main thread pre-creates the thread pool, and all threads in the pool put themselves to sleep by calling pthread_cond_wait() (because of the protecting mutex they go to sleep one at a time, so no race occurs around the call to pthread_cond_wait());
    2. The main thread blocks in accept() waiting for client requests;
    3. When a client request arrives, the main thread accepts the connection, puts the connected descriptor in an agreed-upon location, and calls pthread_cond_signal() to wake one thread waiting on that condition;
    4. The awakened thread takes the connected descriptor from the agreed-upon location, handles the client request, and then goes back to sleep (back into the thread pool);

There are two ways to wake up threads waiting on a condition: pthread_cond_signal() wakes a single waiting thread (when several threads are waiting, one of them is chosen, typically in queued order), while pthread_cond_broadcast() wakes all waiting threads.

Note: in typical use a condition variable must be combined with a mutex.
Before calling pthread_cond_wait(), the mutex must be locked (pthread_mutex_lock()); the mutex stays locked while the thread is added to the condition's wait queue and is released only once the thread is queued to sleep. Before pthread_cond_wait() returns because the condition has been satisfied, the mutex is re-locked, matching the lock taken before entering pthread_cond_wait().
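
The following is a minimal sketch of this lock/wait/signal pattern in isolation; the names ready, waiter, and signaler are purely illustrative.

/* Canonical condition-variable usage: the waiter holds the mutex, tests the
 * predicate in a loop, and pthread_cond_wait() atomically releases the mutex
 * while it sleeps and re-acquires it before returning. */
#include <pthread.h>

static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond  = PTHREAD_COND_INITIALIZER;
static int ready = 0;                      /* the predicate being waited on */

void *waiter(void *arg)
{
    pthread_mutex_lock(&mlock);
    while (ready == 0)                     /* re-check after every wakeup */
        pthread_cond_wait(&cond, &mlock);  /* unlocks mlock while sleeping */
    /* ... consume the shared state while still holding mlock ... */
    pthread_mutex_unlock(&mlock);
    return NULL;
}

void signaler(void)
{
    pthread_mutex_lock(&mlock);
    ready = 1;                             /* update the predicate first */
    pthread_cond_signal(&cond);            /* wake one waiting thread */
    pthread_mutex_unlock(&mlock);
}

The while loop around pthread_cond_wait() re-checks the predicate after every wakeup, which also makes the code safe against spurious wakeups.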

Service-side pseudo-code:

listenfd = socket(AF_INET, SOCK_STREAM, 0);
bind(listenfd, addr);
listen(listenfd);
// pre-create the thread pool
for (int i = 0; i < threadnum; i++) {
    pthread_create(&tid[i], NULL, &thread_function, NULL);
}
while (true) {
    connfd = accept(listenfd);
    pthread_mutex_lock(&mlock);            // lock
    childfd[iput] = connfd;                // put the descriptor where the next thread can take it
    if (++iput == MAX_THREAD_NUM) iput = 0;
    if (iput == iget) err_quit("thread num not enough!");
    pthread_cond_signal(&clifd_cond);      // signal: wake up one sleeping thread
    pthread_mutex_unlock(&mlock);          // unlock
}
--------------------
void *thread_function(void *arg)
{
    while (true) {
        pthread_mutex_lock(&mlock);        // lock
        // while no connected descriptor is available, sleep on the condition
        // variable and release mlock; when the condition is satisfied and the
        // thread is awakened, mlock is re-acquired before the call returns
        while (iget == iput)
            pthread_cond_wait(&clifd_cond, &mlock);
        connfd = childfd[iget];
        if (++iget == MAX_THREAD_NUM) iget = 0;
        pthread_mutex_unlock(&mlock);      // unlock
        // handle the client request
        while ((n = read(connfd, buf, MAXLINE)) > 0) {
            writen(connfd, buf);
        }
        close(connfd);
    }
}

Tests show that this version of the server is slower than the version in which each thread calls accept() itself, because this version requires both a mutex and a condition variable, while the previous version requires only a mutex;

What is the difference between passing a descriptor to a thread and passing one to a process?
A descriptor opened within a process is visible to all of the process's threads, and its reference count is 1;
any thread can use the descriptor simply through its integer value;
passing a descriptor between processes, by contrast, passes a reference to the underlying open file, so its reference count is incremented (just as a file opened by two processes has a reference count of 2 on its open-file entry);
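
For reference, the write_fd()/read_fd() helpers used in the process-pool pseudo-code above rely on sendmsg() with an SCM_RIGHTS control message over a UNIX domain socket. Below is a minimal sketch of the sending side; the function name send_fd and its exact shape are illustrative assumptions, not the book's code.

/* Minimal sketch of passing an open descriptor over a UNIX domain socket.
 * This illustrates the SCM_RIGHTS mechanism, not UNP's exact implementation. */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

ssize_t send_fd(int pipefd, int fd_to_send)
{
    struct msghdr msg;
    struct iovec iov;
    char data = 0;                          /* at least one byte of real data */
    union {                                 /* ensures proper cmsg alignment */
        struct cmsghdr cm;
        char control[CMSG_SPACE(sizeof(int))];
    } control_un;
    struct cmsghdr *cmptr;

    memset(&msg, 0, sizeof(msg));
    iov.iov_base = &data;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control_un.control;
    msg.msg_controllen = sizeof(control_un.control);

    cmptr = CMSG_FIRSTHDR(&msg);
    cmptr->cmsg_level = SOL_SOCKET;
    cmptr->cmsg_type = SCM_RIGHTS;          /* "pass a descriptor" control message */
    cmptr->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmptr), &fd_to_send, sizeof(int));

    return sendmsg(pipefd, &msg, 0);
}

On the receiving side, recvmsg() fills in a matching control message and the kernel installs a brand-new descriptor in the receiving process that refers to the same open file, which is why the parent can close() its copy immediately after passing it.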

Summary

    • When the system load is light, the traditional concurrent-server model that forks one child process per client request is perfectly adequate;
    • Compared with the traditional one-fork-per-client model, pre-creating a pool of child processes or threads can reduce the process-control CPU time by a factor of 10 or more; the program becomes more complex, however, since it must monitor the number of children and grow or shrink the pool as the number of clients changes;
    • It is usually simpler and faster to let all the child processes or threads call accept() themselves than to have the parent process or main thread call accept() and pass the connected descriptor to a child process or thread;
    • Using threads is usually faster than using processes;
Resources

"UNIX Network Programming, Volume 1: The Sockets Networking API"

Posted by: Big CC | 05 Apr 2015
Blog: blog.me115.com
