setsockopt() usage (detailed description of socket options)

Source: Internet
Author: User

int setsockopt(
    SOCKET s,
    int level,
    int optname,
    const char *optval,
    int optlen
);

s (socket): descriptor identifying an open socket.
level: specifies the option level:
SOL_SOCKET: socket-level options
IPPROTO_IP: IPv4 options
IPPROTO_IPV6: IPv6 options
IPPROTO_TCP: TCP options
optname (option name): the name of the option.
optval (option value): a pointer to the option value: an integer, or a structure type such as struct linger or struct timeval.
optlen (option length): the size of optval.

Return value: 0 on success, SOCKET_ERROR (-1) on failure. For boolean options, a nonzero optval enables the feature and zero disables it.

================================================================================
SOL_SOCKET
------------------------------------------------------------------------
SO_BROADCAST allow sending of broadcast data (int)
Applies to UDP sockets: it permits the socket to "broadcast" datagrams on the network.

SO_DEBUG allow debugging (int)

SO_DONTROUTE bypass routing and send directly on the interface (int)

SO_ERROR get and clear the pending socket error (int)

SO_KEEPALIVE probe the connection to detect a crashed peer (int)
Checks whether the peer host has crashed, preventing a TCP connection from blocking forever. Once this option is set, if no data is exchanged in either direction on the socket for two hours, TCP automatically sends a keepalive probe segment to the peer. This is a TCP segment the peer must respond to, and one of three things happens: (1) The peer responds normally with the expected ACK; the application is not notified, and after another two hours of inactivity TCP sends the next probe. (2) The peer has crashed and rebooted: it responds with RST; the socket's pending error is set to ECONNRESET and the socket is closed. (3) The peer does not respond at all: Berkeley-derived TCP sends 8 additional probes, 75 seconds apart, trying to elicit a response; if no response arrives within 11 minutes and 15 seconds of the first probe, it gives up, the socket's pending error is set to ETIMEDOUT, and the socket is closed. If instead an ICMP error such as "host unreachable" is received, the peer host has not crashed but is merely unreachable, and the pending error is set to EHOSTUNREACH.

If SO_DONTLINGER is true, the SO_LINGER option is disabled.
SO_LINGER linger on close (struct linger)
The two options above affect the behavior of close:
Option          Interval   Close mode   Waits for close to complete?
SO_DONTLINGER   (ignored)  Graceful     No
SO_LINGER       Zero       Hard         No
SO_LINGER       Nonzero    Graceful     Yes
If SO_LINGER is set (that is, the l_onoff field of the linger structure is nonzero; see sections 2.4, 4.1.7, and 4.1.21) with a zero timeout interval, closesocket() returns immediately without blocking, whether or not there is queued data that has not yet been sent or acknowledged. This is called a "hard" or "abortive" close, because the socket's virtual circuit is reset immediately and any unsent data is lost. A recv() call on the remote side will fail with WSAECONNRESET.
If SO_LINGER is set with a nonzero timeout interval, the closesocket() call blocks until all data has been sent or the timeout expires. Such a close is called a "graceful" close. Note that if the socket is non-blocking and SO_LINGER is set to a nonzero timeout, closesocket() returns with the error WSAEWOULDBLOCK.
If SO_DONTLINGER is set on a stream socket (that is, the l_onoff field of the linger structure is zero; see sections 2.4, 4.1.7, 4.1.21), closesocket() returns immediately, but any queued data is sent before the socket closes where possible. Note that in this case the Windows Sockets implementation may retain the socket and other resources for an indeterminate period, which can matter to applications that expect to use all available sockets.

SO_OOBINLINE receive out-of-band data in the normal data stream (int)

SO_RCVBUF receive buffer size (int)
Sets the size reserved for the receive buffer.
It is independent of SO_MAX_MSG_SIZE and of the TCP sliding window. Use this option if the packets you send are usually large and frequent.

SO_SNDBUF send buffer size (int)
Sets the size of the send buffer.
It is independent of SO_MAX_MSG_SIZE and of the TCP sliding window. Use this option if the packets you send are usually large and frequent.
Every socket has a send buffer and a receive buffer. The receive buffer is used by TCP and UDP to hold received data until it is read by the application process. TCP: the size of the receive buffer is the window that TCP advertises to the other end. The TCP receive buffer cannot overflow, because the peer is not permitted to send more data than the advertised window allows; this is TCP's flow control. If the sender ignores the window size and sends data beyond the window, the receiving TCP discards it. UDP: when a received datagram does not fit in the socket receive buffer, the datagram is discarded. UDP has no flow control: a fast sender can easily overwhelm a slow receiver, causing the receiver to discard UDP datagrams.

SO_RCVLOWAT receive buffer low-water mark (int)
SO_SNDLOWAT send buffer low-water mark (int)
Every socket has a receive low-water mark and a send low-water mark; both are used by select(). The receive low-water mark is the amount of data that must be present in the socket receive buffer for select() to report the socket as "readable"; for TCP and UDP sockets the default is 1. The send low-water mark is the amount of free space that must be available in the socket send buffer for select() to report the socket as "writable"; for TCP sockets the default is 2048. For UDP, the number of bytes of available space in the send buffer never changes, so as long as the UDP socket's send buffer size is greater than its low-water mark, the socket is always writable. UDP has no real send buffer, only a send buffer size.

SO_RCVTIMEO receive timeout (struct timeval)
SO_SNDTIMEO send timeout (struct timeval)
SO_REUSEADDR allow reuse of a local address and port (int)
Allows bind() to succeed on an address (or port number) that is already in use; see the bind() documentation.

SO_EXCLUSIVEADDRUSE
Use the port in exclusive mode; the port cannot be shared with other programs the way SO_REUSEADDR allows.
When several sockets are bound to the same port, the rule for deciding which one receives a packet pays no attention to privileges; that is, a user with low privileges can rebind a port already in use by, say, a service started with high privileges. This is a serious security risk.
If you do not want your program's port to be taken over and eavesdropped on, use this option.

SO_TYPE get the socket type (int)
SO_BSDCOMPAT compatibility with the BSD system (int)

================================================================================
IPPROTO_IP
--------------------------------------------------------------------------
IP_HDRINCL the application includes the IP header in the packet (int)
This option is often used by hackers to hide their IP address.

IP_OPTIONS IP header options (int)
IP_TOS type of service
IP_TTL time to live (int)

The following IPv4 options are used for multicast:
IPv4 option            Data type         Description
IP_ADD_MEMBERSHIP      struct ip_mreq    join a multicast group
IP_DROP_MEMBERSHIP     struct ip_mreq    leave a multicast group
IP_MULTICAST_IF        struct ip_mreq    specify the interface for outgoing multicast packets
IP_MULTICAST_TTL       u_char            specify the TTL of outgoing multicast packets
IP_MULTICAST_LOOP      u_char            enable or disable multicast loopback
The ip_mreq structure is defined in the system header file:
struct ip_mreq {
    struct in_addr imr_multiaddr;  /* IP multicast address of group */
    struct in_addr imr_interface;  /* local IP address of interface */
};
IP_ADD_MEMBERSHIP
If a process needs to join a multicast group, it calls setsockopt() on the socket with this option. The option value is an ip_mreq structure: its first field, imr_multiaddr, specifies the address of the multicast group, and its second field, imr_interface, specifies the IPv4 address of the interface.
IP_DROP_MEMBERSHIP
This option is used to leave a multicast group. The ip_mreq structure is used in the same way as above.
IP_MULTICAST_IF
This option changes the network interface used for outgoing multicast packets; the new interface is specified in an ip_mreq structure.
IP_MULTICAST_TTL
Sets the TTL (time to live) of outgoing multicast packets. The default is 1, meaning the packets can travel only on the local subnet.
IP_MULTICAST_LOOP
A member of a multicast group also receives the packets it sends to the group; this option selects whether that loopback is active.


IPPROTO_TCP
--------------------------------------------------------------------------
TCP_MAXSEG maximum TCP segment size (int)
Gets or sets the maximum segment size (MSS) for the TCP connection. The value returned is the maximum amount of data our TCP will send to the other end; it is usually the MSS announced by the peer in its SYN, unless our TCP chooses a value smaller than the peer's announced MSS. If the value is fetched before the socket is connected, it is the default used when no MSS option is received from the peer. A value smaller than the returned one can also be in effect on a connection; for example, if the timestamp option is in use, it occupies 12 bytes of TCP option space in every segment. The maximum amount of data sent per segment can also change during the lifetime of the connection, provided TCP supports path MTU discovery: if the route to the peer changes, this value can be adjusted up or down.
TCP_NODELAY disable the Nagle algorithm (int)

TCP_KEEPALIVE idle time before keep-alive probes (int)
Specifies the idle time, in seconds, before TCP starts sending keep-alive probes on the connection. The default must be at least 7200 seconds, that is, 2 hours. This option takes effect only when the SO_KEEPALIVE socket option is enabled.

TCP_NODELAY and TCP_CORK
Both options play a key role in the network behavior of a connection. Many UNIX systems implement the TCP_NODELAY option, but TCP_CORK is unique to Linux and relatively new; it first appeared in kernel version 2.4. Other UNIX variants have similar features; notably, the TCP_NOPUSH option on BSD-derived systems is actually part of a specific implementation of the TCP_CORK idea.
TCP_NODELAY and TCP_CORK basically control the "Nagleization" of packets, that is, the use of the Nagle algorithm to coalesce smaller packets into larger frames. John Nagle is the inventor of the algorithm, which is named after him; he first applied this approach in 1984 to solve network congestion problems at Ford Motor Company (for details, see IETF RFC 896). The problem he solved is known as the silly window syndrome: every keystroke in a typical terminal application generates a packet, which in the typical case carries one byte of payload plus a 40-byte header, a 4000% overhead that easily leads to network congestion. Nagle's algorithm became a standard and was immediately implemented across the Internet. It is now the default behavior, but in our view there are cases where it is desirable to turn it off.
Now suppose an application issues requests to send small chunks of data. We can either send the data immediately or wait until more data has accumulated and send it all at once. Interactive and client/server applications benefit greatly from sending immediately. For example, when we send a short request and wait for a large response, the associated overhead is low compared with the total amount of data transferred, and the response time is much better if the request is sent right away. This is achieved by setting the TCP_NODELAY option on the socket, which disables the Nagle algorithm.
In the second case, we want to wait until the amount of data reaches a maximum and then send it all over the network at once. This mode of transmission benefits bulk data transfer; a typical application is a file server. Applying the Nagle algorithm here can cause problems. If you are sending bulk data, you can instead set the TCP_CORK option, which disables Nagle in a different way than TCP_NODELAY does (TCP_CORK and TCP_NODELAY are mutually exclusive). Let us look at how it works in detail.
Suppose the application uses the sendfile() function to transfer bulk data. The application protocol usually requires some information to be sent first to describe the data; in practice this is a header. Typically the header is small, and with TCP_NODELAY set on the socket, the packet carrying the header is transmitted immediately. In some cases (depending on internal packet counters) this packet must be acknowledged by the peer after it is received; in that case the bulk transfer is postponed and unnecessary network traffic is exchanged.
If instead we set the TCP_CORK option on the socket (imagine inserting a "cork" into the pipe), the packet carrying the header is filled up with bulk data, and all the data is transmitted in packets sized automatically. When the data transfer is complete, it is best to clear the TCP_CORK option, "pulling the cork" from the connection, so that any partial frames can be sent out. This is equally important for "corked" network connections.
To summarize: if you know for certain that you will send several data sets together (such as an HTTP response header and body), we recommend setting the TCP_CORK option so that no delay occurs between them. It can markedly improve the performance of WWW, FTP, and file servers, and it simplifies your work as well. Sample code follows:

int fd, on = 1;
...
/* create the socket and so on; omitted for brevity */
...
setsockopt(fd, SOL_TCP, TCP_CORK, &on, sizeof(on)); /* cork */
write(fd, ...);
write(fd, ...);
sendfile(fd, ...);
write(fd, ...);
sendfile(fd, ...);
...
on = 0;
setsockopt(fd, SOL_TCP, TCP_CORK, &on, sizeof(on)); /* pull the cork */

Unfortunately, many widely used programs do not take these issues into account. For example, sendmail, written by Eric Allman, sets no options at all on its sockets.

Apache httpd, the most popular web server on the Internet, sets TCP_NODELAY on all of its sockets, and its performance satisfies most users. Why? The answer lies in implementation differences. BSD-derived TCP/IP stacks (notably FreeBSD) behave differently in this situation. When a large number of small data chunks are submitted for transmission in TCP_NODELAY mode, a large amount of information is sent in a single write() call. But because the counter responsible for delivery acknowledgement is byte-oriented rather than packet-oriented (as it is on Linux), the probability of introducing a delay is much lower. The result depends only on the total size of the data. Linux requires an acknowledgement after the first packet arrives, while FreeBSD waits for several hundred packets before doing so.

On Linux, therefore, the effect of TCP_NODELAY is very different from what developers used to BSD TCP/IP stacks expect, and Apache's performance on Linux is worse. Other applications that make heavy use of TCP_NODELAY on Linux have the same problem.

TCP_DEFER_ACCEPT

The first option we consider is TCP_DEFER_ACCEPT (this is its Linux name; some other operating systems have the same option under a different name). To understand the idea behind TCP_DEFER_ACCEPT, it helps to sketch a typical HTTP client/server interaction. Recall how TCP establishes a connection with its data-transfer target. On the network, information travels in discrete units called IP packets (or IP datagrams). A packet always has a header carrying service information used for internal protocol handling, and it may also carry payload data. A typical example of service information is the set of flags that mark packets as having a special meaning to the TCP/IP stack, such as acknowledgement of successful packet receipt. It is usually quite possible to carry payload in a flagged packet, but sometimes internal logic forces the TCP/IP stack to emit an IP packet with only a header. These packets often introduce annoying network delays and add load, degrading overall network performance.
The server has now created a socket and is waiting for connections. The TCP connection procedure is the "three-way handshake": first, the client sends a TCP packet with the SYN flag set and no payload (a SYN packet). The server replies with a packet carrying the SYN/ACK flags (a SYN/ACK packet) to acknowledge receipt of the initial packet. The client then sends an ACK packet to acknowledge the second packet, completing the connection setup. On receiving the client's ACK packet, the server wakes up a process waiting for data to arrive. After the three-way handshake, the client starts sending "useful" data to the server. Usually an HTTP request is quite small and fits entirely in a single packet. But in the scheme above, at least four packets are transmitted in both directions, adding considerable latency. Note also that the receiver started waiting for information before the "useful" data was ever sent.
To mitigate these problems, Linux (and some other operating systems) includes the TCP_DEFER_ACCEPT option in its TCP implementation. It is set on the listening socket on the server, and it instructs the kernel not to wake the listening process until the final ACK packet arrives and the first packet carrying real data has arrived. After sending the SYN/ACK, the server waits for the client to send an IP packet containing data. Now only three packets travel across the network, and connection-setup latency drops noticeably, which is significant especially for HTTP communication.
This option has a counterpart in many operating systems. For example, on FreeBSD, the same behavior can be implemented using the following code:

/* for clarity, irrelevant code is skipped here */
struct accept_filter_arg af = { "dataready", "" };
setsockopt(s, SOL_SOCKET, SO_ACCEPTFILTER, &af, sizeof(af));
On FreeBSD this feature is called an "accept filter" and has several uses, but in almost all cases the effect is the same as TCP_DEFER_ACCEPT: the server does not wait for the final ACK packet, only for a packet carrying a data payload. For more information about this option and its significance for high-performance web servers, see the Apache documentation.
For HTTP client/server interaction it may also be necessary to change the client's behavior. Why does the client send this "useless" ACK packet at all? Because the TCP stack has no way of knowing the status of the ACK. If FTP were used instead of HTTP, the client would not send data until it received a prompt packet from the FTP server; in that case, a delayed ACK would delay the client/server interaction. To decide whether the ACK is necessary, the client would have to know the application protocol and its current state; so modifying the client's behavior becomes necessary.
For Linux client programs there is another option to use, also called TCP_DEFER_ACCEPT. We know that sockets come in two kinds, listening sockets and connected sockets, and each kind has its own set of TCP options; it is therefore entirely possible for the two kinds to have options with the same name. When this option is set on a connected socket, the client no longer sends an ACK after receiving the SYN/ACK packet; instead it waits until the application issues a data request. Accordingly, the number of packets sent to the server is reduced as well.

TCP_QUICKACK

Another way to prevent delays caused by sending useless packets is the TCP_QUICKACK option. Unlike TCP_DEFER_ACCEPT, it can be used not only to manage the connection-establishment process but also during normal data transfer, and it can be set on either side of a client/server connection. Delaying the ACK packet is useful if you know the data will be sent soon; then it is best to set the ACK flag on a data packet to minimize network load. When the sender knows that data will be sent immediately (in multiple packets), TCP_QUICKACK can be set to 0. For sockets in the "connected" state the default value of this option is 1, and after its first use the kernel immediately resets it to 1 (it is a one-shot option).
In some cases it is useful to send the ACK packet right away. The ACK confirms receipt of a block of data and introduces no delay while the next block is processed. This transfer mode is typical for interactive sessions, where user input cannot be predicted. On Linux this is the default socket behavior.
In the situation described above, the client sends an HTTP request to the server, and it knows in advance that the request packet is short and should be sent immediately after the connection is established; this is how HTTP typically works. Since there is no need to send a pure ACK packet, TCP_QUICKACK can be set to 0 to improve performance. On the server side, both options can be set only once, on the listening socket; all sockets created indirectly by the accept() call inherit the options of the original socket.
By combining the TCP_CORK, TCP_DEFER_ACCEPT, and TCP_QUICKACK options, the number of packets exchanged in each HTTP interaction can be reduced to the minimum acceptable level (given the requirements of the TCP protocol and security considerations). The result is not only faster data transfer and request processing but also minimal client/server round-trip latency.

II. Examples

1. closesocket() normally does not close a socket immediately; the connection goes through a TIME_WAIT period. To allow the socket's address to be reused right away:
BOOL bReuseAddr = TRUE;
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *)&bReuseAddr, sizeof(BOOL));
2. To force a socket that is already connected to close immediately when closesocket() is called, skipping the TIME_WAIT period:
BOOL bDontLinger = FALSE;
setsockopt(s, SOL_SOCKET, SO_DONTLINGER, (const char *)&bDontLinger, sizeof(BOOL));
3. In send() and recv(), network conditions and other factors sometimes prevent sending and receiving from completing as expected. You can set send and receive time limits:
int nNetTimeout = 1000; // 1 second
// send time limit
setsockopt(s, SOL_SOCKET, SO_SNDTIMEO, (char *)&nNetTimeout, sizeof(int));
// receive time limit
setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, (char *)&nNetTimeout, sizeof(int));
4. In send(), the value returned is the number of bytes actually sent (synchronous mode) or the number of bytes copied into the socket's send buffer (asynchronous mode). By default the system sends and receives 8688 bytes (about 8.5 KB) at a time. When the volume of data being sent or received is large, you can enlarge the socket buffers to avoid send() and recv() looping continuously:
// receive buffer
int nRecvBuf = 32 * 1024; // set to 32K
setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&nRecvBuf, sizeof(int));
// send buffer
int nSendBuf = 32 * 1024; // set to 32K
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&nSendBuf, sizeof(int));
5. If, when sending data, you do not want program performance to be affected by the copy from the system buffer to the socket buffer:
int nZero = 0;
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (char *)&nZero, sizeof(nZero));
6. The same can be done for recv() (by default the socket buffer contents are copied to the system buffer):
int nZero = 0;
setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&nZero, sizeof(int));
7. Typically, when sending UDP datagrams, you want the data sent by the socket to have the broadcast property:
BOOL bBroadcast = TRUE;
setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&bBroadcast, sizeof(BOOL));
8. When a client connects to a server, a socket in non-blocking mode can be used in the connect() process to delay completion of connect() until accept() is called (this option is meaningful only for non-blocking calls; it has little effect on blocking calls):
BOOL bConditionalAccept = TRUE;
setsockopt(s, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (const char *)&bConditionalAccept, sizeof(BOOL));
9. If closesocket() is called while data is still being sent (send() has not completed and data remains unsent), the usual remedy is a graceful shutdown with shutdown(s, SD_BOTH), but the data is definitely lost. How do you configure the program to meet the application's requirement, namely to close the socket only after the unsent data has been transmitted?
struct linger {
    u_short l_onoff;
    u_short l_linger;
};
linger m_sLinger;
m_sLinger.l_onoff = 1;  // linger in closesocket() while unsent data remains
// if m_sLinger.l_onoff = 0, the behavior is the same as in example 2
m_sLinger.l_linger = 5; // allow a linger of 5 seconds
setsockopt(s, SOL_SOCKET, SO_LINGER, (const char *)&m_sLinger, sizeof(linger));
