Linux network programming: select/epoll says the socket is readable — how do I know when all the data has been read?

Source: Internet
Author: User
Note: only with epoll in ET (edge-triggered) mode do you need to drain the socket yourself and check that all data has been read. With select, or with epoll in level-triggered mode, you do not have to worry: as long as unread data remains, select/epoll will simply report the descriptor readable again.

 

There are two methods:

 

1. For TCP, call recv and look at its return value: if the number of bytes returned is smaller than the buffer size we passed to recv, all queued data has supposedly been received. The Linux epoll(7) manual contains a similar description:

 

For stream-oriented files (e.g., pipe, FIFO, stream socket), the condition that the read/write I/O space is exhausted can also be detected by checking the amount of data read from/written to the target file descriptor. For example, if you call read(2) by asking to read a certain amount of data and read(2) returns a lower number of bytes, you can be sure of having exhausted the read I/O space for the file descriptor. The same is true when writing using write(2). (Avoid this latter technique if you cannot guarantee that the monitored file descriptor always refers to a stream-oriented file.)

 

2. Applicable to both TCP and UDP. Set the socket to nonblocking (with fcntl), wait for readability with select, then call read/recv in a loop. When the call returns -1 with errno set to EAGAIN or EWOULDBLOCK, all queued data has been read.

 

Experiment conclusion:

 

The first method is incorrect. Simply put: if the peer sends 4 KB and we recv into a 2 KB buffer on a blocking socket, then after two recv calls no data is left and the third recv blocks; the return value is never less than 2 KB. (Note: if recv/read returns 0, the peer has closed the socket.)

 

Therefore the second method is recommended: it is correct and works for both TCP and UDP. In fact, on any platform, I think network programs should use select plus nonblocking sockets. That guarantees the program never blocks in recv/send/accept/connect and stalls the whole service. The downside is that it is less convenient to debug; with a blocking socket, GDB can show exactly where the program is stuck.

 

In fact, "read completion" simply means that the socket's input queue in the kernel has been emptied, at which point the kernel marks the socket not readable. So if the sender keeps transmitting on a LAN, the receiver can keep reading continuously after a single select wakeup. A receiver that tries to read everything in one pass and keep all of it in memory must take this into account, or it can end up consuming an unbounded amount of memory.

 

The test code follows. The server sends 4 KB at a time with a one-second pause between sends, so after select reports readable, the client reads exactly 4 KB, hits EAGAIN, and exits.

Client.c:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>      /* inet_addr */
#include <sys/socket.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>

#define SERVPORT 3333
#define RECV_BUF_SIZE 1024

void setnonblocking(int sock)
{
    int opts;
    opts = fcntl(sock, F_GETFL);
    if (opts < 0)
    {
        perror("fcntl(sock, F_GETFL)");
        exit(1);
    }
    opts = opts | O_NONBLOCK;
    if (fcntl(sock, F_SETFL, opts) < 0)
    {
        perror("fcntl(sock, F_SETFL, opts)");
        exit(1);
    }
}

int main(int argc, char *argv[])
{
    int sockfd, iresult;
    char buf[RECV_BUF_SIZE];
    struct sockaddr_in serv_addr;
    fd_set readset, testset;

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    setnonblocking(sockfd);

    memset(&serv_addr, 0, sizeof(serv_addr));
    serv_addr.sin_family = AF_INET;
    serv_addr.sin_port = htons(SERVPORT);
    serv_addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    /* nonblocking connect normally returns -1 with errno == EINPROGRESS;
       the select() below also waits for the connection to complete */
    connect(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));

    FD_ZERO(&readset);
    FD_SET(sockfd, &readset);

    testset = readset;
    iresult = select(sockfd + 1, &testset, NULL, NULL, NULL);

    while (1) {
        iresult = recv(sockfd, buf, RECV_BUF_SIZE, 0);
        if (iresult == -1) {
            if (errno == EAGAIN || errno == EWOULDBLOCK) {
                printf("recv finish detected, quit...\n");
                break;
            }
        }
        if (iresult == 0)   /* peer closed the connection */
            break;
        printf("received %d bytes\n", iresult);
    }

    printf("final iresult: %d\n", iresult);
    return 0;
}

 

 

Server.c:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <unistd.h>         /* sleep, close */
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>      /* inet_addr */
#include <sys/socket.h>

#define SERVPORT 3333
#define BACKLOG 10
#define SEND_BUF_SIZE 4096

int main(int argc, char *argv[])
{
    int sockfd, client_fd, i;
    struct sockaddr_in my_addr;
    char *buffer = NULL;

    sockfd = socket(AF_INET, SOCK_STREAM, 0);
    memset(&my_addr, 0, sizeof(my_addr));
    my_addr.sin_family = AF_INET;
    my_addr.sin_port = htons(SERVPORT);
    my_addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    bind(sockfd, (struct sockaddr *)&my_addr, sizeof(struct sockaddr));
    listen(sockfd, BACKLOG);

    client_fd = accept(sockfd, NULL, NULL);

    buffer = malloc(SEND_BUF_SIZE);

    /* send 4 KB once per second */
    for (i = 0; i < 100; i++) {
        send(client_fd, buffer, SEND_BUF_SIZE, 0);
        sleep(1);
    }

    sleep(10);
    close(client_fd);
    close(sockfd);
    free(buffer);
    return 0;
}

 

 
