setsockopt() Usage and Parameter Description

Source: Internet
Author: User
Tags sendfile socket error

int setsockopt(
    SOCKET s,
    int level,
    int optname,
    const char *optval,
    int optlen
);

s: the socket descriptor.
level: the level at which the option is defined:


SOL_SOCKET: basic socket options
IPPROTO_IP: IPv4 socket options
IPPROTO_IPV6: IPv6 socket options
IPPROTO_TCP: TCP socket options
optname: the name of the option.
optval: a pointer to the option value. Depending on the option this is an int, or a structure such as struct linger or struct timeval.
optlen: the size of the buffer that optval points to.

Return value: 0 on success; on failure, SOCKET_ERROR (-1), with the error code available from WSAGetLastError() (or errno on POSIX systems).
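As a concrete illustration of the optval/optlen convention, here is a minimal POSIX-flavored sketch (the helper name query_socket_type is ours) that reads an option back with the companion call getsockopt(), which uses the same pointer-plus-length pattern:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Reads the SO_TYPE option back from a freshly created TCP socket.
 * Every option is passed through a pointer plus an explicit length;
 * for getsockopt() the length argument is in/out. Returns the socket
 * type (e.g. SOCK_STREAM) or -1 on error. */
int query_socket_type(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    int type = 0;
    socklen_t len = sizeof(type);          /* in/out length argument */
    int rc = getsockopt(s, SOL_SOCKET, SO_TYPE, &type, &len);
    close(s);
    return rc == 0 ? type : -1;
}
```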

========================================================================
SOL_SOCKET
------------------------------------------------------------------------
SO_BROADCAST   permit sending of broadcast datagrams   int
Applies to UDP sockets: allows the socket to send "broadcast" messages onto the network.

SO_DEBUG   enable debugging   int

SO_DONTROUTE   bypass routing table lookup   int

SO_ERROR   get and clear the pending socket error   int

SO_KEEPALIVE   keep connections alive   int
Detects whether the peer host has crashed, preventing a (server) process from blocking forever on input from a TCP connection. After this option is set, if no data is exchanged in either direction on the socket for 2 hours, TCP automatically sends the peer a keepalive probe.

The probe is a TCP segment that the peer must respond to, and it leads to one of three scenarios:

1. The peer responds normally, with the expected ACK. The application is not notified (everything is fine), and after another 2 hours of inactivity TCP sends a further probe.

2. The peer has crashed and rebooted: it responds with an RST. The socket's pending error is set to ECONNRESET and the socket is closed.

3. The peer does not respond at all: Berkeley-derived TCPs send 8 additional probes, 75 seconds apart, trying to elicit a response, and give up 11 minutes 15 seconds after the first probe if there is still no answer. The socket's pending error is set to ETIMEDOUT and the socket is closed. If an ICMP "host unreachable" error arrives instead, the peer host has not crashed but is unreachable, and the pending error is set to EHOSTUNREACH.
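On Linux the two-hour default and the probe timing above can be tuned per socket. A minimal sketch (the helper name enable_keepalive is ours; TCP_KEEPIDLE, TCP_KEEPINTVL and TCP_KEEPCNT are Linux-specific names, hence the #ifdef):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enables SO_KEEPALIVE and, where available, shortens the probe timing
 * so a dead peer is detected in minutes rather than hours:
 * probe after 60 s idle, then every 10 s, giving up after 5 probes.
 * Returns 0 on success, -1 on error. */
int enable_keepalive(int s)
{
    int on = 1, idle = 60, intvl = 10, cnt = 5;

    if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
#ifdef TCP_KEEPIDLE
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0 ||
        setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0 ||
        setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
        return -1;
#endif
    return 0;
}
```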

If SO_DONTLINGER is true, the SO_LINGER option is disabled.
SO_LINGER   linger on close   struct linger
These two options control the behavior of closing a connection:

Option               Interval     Close mode   Waits for close to complete?
SO_DONTLINGER        don't care   graceful     no
SO_LINGER            zero         hard         no
SO_LINGER            nonzero      graceful     yes

If SO_LINGER is set (that is, the l_onoff field of the linger structure is nonzero; see sections 2.4, 4.1.7 and 4.1.21) with a zero timeout interval, closesocket() does not block and returns immediately, regardless of whether queued data remains unsent or unacknowledged.

Such a close is called an "abortive" or "hard" close, because the socket's virtual circuit is reset immediately and any unsent data is lost. A recv() call at the remote end will fail with WSAECONNRESET.
If SO_LINGER is set with a nonzero timeout interval, closesocket() blocks until the remaining data has been sent or the timeout expires. This is called a "graceful" close. Note that if the socket has been set to non-blocking and SO_LINGER has a nonzero timeout, the closesocket() call fails with WSAEWOULDBLOCK.


If SO_DONTLINGER is set on a stream socket (that is, the l_onoff field of the linger structure is zero; see sections 2.4, 4.1.7 and 4.1.21), the closesocket() call returns immediately, but any queued data is still sent before the socket closes, if possible. Note that in this case the Windows Sockets implementation may retain the socket and its resources for an indeterminate period, which can affect applications that expect to reuse all of their sockets.
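The "graceful with timeout" behavior above can be requested explicitly through struct linger. A POSIX-flavored sketch (the helper name set_graceful_close is ours; the same pattern applies on Windows with closesocket()):

```c
#include <string.h>
#include <sys/socket.h>

/* Requests a graceful close: close() will block until queued data has
 * been sent and acknowledged, or until `seconds` have elapsed.
 * Setting l_linger to 0 instead would request a hard (abortive) close.
 * Returns 0 on success, -1 on error. */
int set_graceful_close(int s, int seconds)
{
    struct linger lg;
    memset(&lg, 0, sizeof(lg));
    lg.l_onoff = 1;        /* enable lingering */
    lg.l_linger = seconds; /* how long close() may wait */
    return setsockopt(s, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
}
```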

SO_OOBINLINE   receive out-of-band data in the normal data stream   int

SO_RCVBUF   receive buffer size   int
Sets the size of the receive buffer.

SO_SNDBUF   send buffer size   int
Sets the size of the send buffer.
Unrelated to SO_MAX_MSG_SIZE and independent of the TCP sliding window; use these options if the packets you send are typically large and frequent.
Every socket has a send buffer and a receive buffer.

The receive buffer is used by TCP and UDP to hold received data until it is read by the application process.

TCP: TCP advertises the window size at its end to the peer. The TCP receive buffer cannot overflow, because the peer is not allowed to send more data than the advertised window permits; this is TCP's flow control. If the peer ignores the window size and sends more data anyway, the receiving TCP discards it.

UDP: when a received datagram does not fit into the socket receive buffer, it is discarded. UDP has no flow control; a fast sender can easily overwhelm a slow receiver, causing the receiver's UDP to drop datagrams.
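Requesting a buffer size and checking what the kernel actually granted can look like this (a sketch; the helper name request_rcvbuf is ours, and note that Linux doubles the requested value for bookkeeping overhead, so the value read back is normally larger than the one requested):

```c
#include <sys/socket.h>

/* Asks for a receive buffer of `bytes` bytes, then reads back the size
 * the kernel actually granted. Returns that size, or -1 on error. */
int request_rcvbuf(int s, int bytes)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0)
        return -1;

    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, &actual, &len) < 0)
        return -1;
    return actual;
}
```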

SO_RCVLOWAT   receive buffer low-water mark   int
SO_SNDLOWAT   send buffer low-water mark   int
Every socket has a receive low-water mark and a send low-water mark. They are used by the select() function. The receive low-water mark is the amount of data that must be in the receive buffer before select() reports the socket as "readable"; for a TCP or UDP socket this value defaults to 1. The send low-water mark is the amount of free space that must be available in the send buffer before select() reports the socket as "writable"; for TCP sockets this value usually defaults to 2048. The send low-water mark is of little use for UDP: since the number of free bytes in a UDP send buffer never changes (UDP keeps no copy of sent data; there is only the send buffer size), a UDP socket is always writable as long as its send buffer size is larger than the low-water mark.
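Raising the receive low-water mark is occasionally useful when an application only wants to wake up once a full record is available. A sketch (the helper name set_rcv_lowat is ours; note that on Linux SO_SNDLOWAT is reported but not changeable, so only the receive side is shown):

```c
#include <sys/socket.h>

/* Raises the receive low-water mark from its default of 1: select()
 * will not report the socket readable until at least `bytes` bytes are
 * queued. Returns the value read back, or -1 on error. */
int set_rcv_lowat(int s, int bytes)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVLOWAT, &bytes, sizeof(bytes)) < 0)
        return -1;

    int actual = 0;
    socklen_t len = sizeof(actual);
    if (getsockopt(s, SOL_SOCKET, SO_RCVLOWAT, &actual, &len) < 0)
        return -1;
    return actual;
}
```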

SO_RCVTIMEO   receive timeout   struct timeval
SO_SNDTIMEO   send timeout   struct timeval
SO_REUSEADDR   allow reuse of a local address and port   int
Allows binding to an address (or port number) that is already in use; see the bind man page.
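The typical use is a restarted server rebinding its listening port while old connections are still in TIME_WAIT. A sketch (the helper name bind_with_reuse is ours; port 0 lets the kernel pick a free port so the example is self-contained):

```c
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Standard listener setup: enable SO_REUSEADDR *before* bind() so a
 * restarted server can rebind its address while previous connections
 * linger in TIME_WAIT. Returns the bound socket, or -1 on error. */
int bind_with_reuse(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    if (s < 0)
        return -1;

    int on = 1;
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0; /* any free port */

    if (setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on)) < 0 ||
        bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(s);
        return -1;
    }
    return s;
}
```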

SO_EXCLUSIVEADDRUSE
Uses the port exclusively, so other programs cannot share it via SO_REUSEADDR.
When the system decides which of several bindings receives a packet, the rule is that the most specific binding wins, with no regard for privilege level; that is, a low-privileged user can rebind to a port on which a service was started with high privileges, which is a significant security risk.
Use this option if you do not want your program's port to be hijacked or monitored.

SO_TYPE   get the socket type   int
SO_BSDCOMPAT   compatibility with BSD systems   int

==========================================================================
IPPROTO_IP
--------------------------------------------------------------------------
IP_HDRINCL   include the IP header in the packet   int
This option is often used in hacking techniques to hide the sender's IP address.

IP_OPTIONS   IP header options   int
IP_TOS   type of service   int
IP_TTL   time to live   int

The following IPv4 options are used for multicast:
IPv4 option          Data type        Description
IP_ADD_MEMBERSHIP    struct ip_mreq   join a multicast group
IP_DROP_MEMBERSHIP   struct ip_mreq   leave a multicast group
IP_MULTICAST_IF      struct ip_mreq   specify the interface for outgoing multicast datagrams
IP_MULTICAST_TTL     u_char           specify the TTL of outgoing multicast datagrams
IP_MULTICAST_LOOP    u_char           enable or disable loopback of outgoing multicast datagrams
The ip_mreq structure is defined in the header file <netinet/in.h>:
struct ip_mreq {
    struct in_addr imr_multiaddr;  /* IP multicast address of group */
    struct in_addr imr_interface;  /* local IP address of interface */
};
For a process to join a multicast group, it calls setsockopt() with this option. The option value is an ip_mreq structure: its first field, imr_multiaddr, specifies the address of the multicast group, and its second field, imr_interface, specifies the IPv4 address of the interface to use.
IP_DROP_MEMBERSHIP
This option is used to leave a multicast group.

It uses the same ip_mreq data structure as above.


IP_MULTICAST_IF
This option lets you change the network interface used for outgoing multicast, with the new interface defined in an ip_mreq structure.

IP_MULTICAST_TTL
Sets the TTL (time to live) of outgoing multicast packets. The default value is 1, meaning packets are delivered only on the local subnet.
IP_MULTICAST_LOOP
Members of a multicast group also receive the messages they themselves send to the group; this option selects whether that loopback behavior is enabled.

Matchless reply at: 2003-05-08 21:21:52

IPPROTO_TCP
--------------------------------------------------------------------------
TCP_MAXSEG   maximum TCP segment size   int
Gets or sets the maximum segment size (MSS) for a TCP connection.

The returned value is the maximum amount of data our TCP sends to the other end per segment; it is usually the MSS announced by the peer in its SYN segment, unless our TCP chooses a smaller value.

If this value is fetched before the socket is connected, the returned value is the default used when no MSS option is received from the peer. A value smaller than the returned one may actually be used on the connection, because, for example, the timestamp option consumes 12 bytes of TCP option space in every segment.

The maximum amount of data sent per segment can also change over the lifetime of the connection, if TCP supports path MTU discovery: if the route to the peer changes, this value can be adjusted up or down.
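Reading the MSS from an unconnected socket shows the pre-connection default described above. A sketch (the helper name read_mss is ours):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Reads the current MSS of a TCP socket. Before the connection is
 * established this returns the implementation's default (e.g. 536 on
 * many systems); after the handshake it reflects the negotiated MSS.
 * Returns the MSS, or -1 on error. */
int read_mss(int s)
{
    int mss = 0;
    socklen_t len = sizeof(mss);
    if (getsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &mss, &len) < 0)
        return -1;
    return mss;
}
```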


TCP_NODELAY   disable the Nagle algorithm   int

TCP_KEEPALIVE (TCP_KEEPIDLE on Linux)   keepalive idle time   int
Specifies the number of seconds a connection must be idle before TCP starts sending keepalive probes.

The default must be at least 7200 seconds, i.e. 2 hours. This option is effective only when the SO_KEEPALIVE socket option is enabled.

TCP_NODELAY and TCP_CORK
Both of these options critically affect the behavior of a TCP connection. Many UNIX systems implement the TCP_NODELAY option, but TCP_CORK is unique to Linux and relatively new; it was first implemented in kernel version 2.4. Other Unix variants have similar options; notably, the TCP_NOPUSH option on some BSD-derived systems is actually one particular implementation of the TCP_CORK idea.


TCP_NODELAY and TCP_CORK basically control the "Nagling" of packets, that is, the use of the Nagle algorithm to coalesce small packets into larger frames. The algorithm is named after its inventor, John Nagle, who first used it in 1984 to try to solve network congestion problems at Ford Motor Company (see IETF RFC 896 for details).

The problem it solves is related to the so-called silly window syndrome: because a typical terminal application generates a packet for every keystroke, a packet commonly carried one byte of data payload plus a 40-byte header, a 4000% overhead that could very easily congest the network. The Nagle algorithm became a standard and was quickly implemented across the Internet.

It is now the default behavior, but in our view there are situations where it is desirable to turn it off.
Now suppose an application makes a request to send a small block of data. We can choose between two strategies: send the data immediately, or wait until more data has accumulated and then send it all at once.

Interactive and client/server applications benefit greatly from sending immediately. For example, when we send a short request and await a large response, the relative overhead is low compared with the total amount of data transferred, and the response arrives much sooner if the request is sent right away. This is done by setting the socket's TCP_NODELAY option, which disables the Nagle algorithm.
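Setting the option and verifying that it took effect might look like this (a sketch; the helper name disable_nagle is ours):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disables the Nagle algorithm on a TCP socket, then reads the flag
 * back. Returns 1 if Nagle is now disabled, 0 if still enabled,
 * -1 on error. */
int disable_nagle(int s)
{
    int on = 1;
    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
        return -1;

    int flag = 0;
    socklen_t len = sizeof(flag);
    if (getsockopt(s, IPPROTO_TCP, TCP_NODELAY, &flag, &len) < 0)
        return -1;
    return flag ? 1 : 0;
}
```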
The second case requires us to wait until the amount of data reaches a maximum and then send everything over the network at once. This way of transmitting data benefits applications that communicate large amounts of data; the typical example is a file server.

Applying the Nagle algorithm creates a problem in this case. For bulk transfers you can instead set the TCP_CORK option, whose effect is in a sense the opposite of TCP_NODELAY (TCP_CORK and TCP_NODELAY are mutually exclusive). Let us analyze the working principle below.


Suppose an application uses the sendfile() function to transfer a large amount of data. The application protocol usually requires sending some information that describes the data first; this information is, in effect, a header. Typically the header is small, and with TCP_NODELAY set on the socket, the packet carrying the header is transmitted immediately. In some cases (depending on internal packet counters) that packet must be acknowledged by the peer after it is received, so the transfer of the bulk data is delayed and unnecessary network traffic is exchanged.


If, however, we set TCP_CORK on the socket (which can be likened to inserting a "cork" into a pipe), the packet carrying the header is padded with a large amount of the data, and all the data is automatically packed into full-sized packets. When the transfer is complete, it is best to clear the TCP_CORK option, "pulling the cork out" of the connection so that any partial frames can go out. This is just as important as "corking" the connection in the first place.


In a nutshell, if you can send several data sets together (such as the header and body of an HTTP response), we recommend setting the TCP_CORK option so that no delays are introduced between them. It can greatly benefit the performance of WWW, FTP and file servers, and it simplifies your work at the same time. Sample code follows:

int fd, on = 1;
...
/* creating the socket and so on, omitted for brevity */
...
setsockopt(fd, SOL_TCP, TCP_CORK, &on, sizeof(on));  /* cork */
write(fd, ...);
dprintf(fd, ...);
sendfile(fd, ...);
write(fd, ...);
sendfile(fd, ...);
...
on = 0;
setsockopt(fd, SOL_TCP, TCP_CORK, &on, sizeof(on));  /* unplug the cork */

Unfortunately, many widely used programs do not take these issues into account. For example, sendmail, written by Eric Allman, sets no options at all on its sockets.

Apache HTTPD, the most popular web server on the Internet, has TCP_NODELAY set on all of its sockets, and its performance is well regarded by most users. Why is this? The answer lies in implementation differences. BSD-derived TCP/IP stacks (notably FreeBSD) behave differently in this situation. When a large number of small data blocks are submitted in TCP_NODELAY mode, a large amount of information is sent, one write() call per piece of data. However, because the counters responsible for requesting delivery confirmation are byte-oriented rather than packet-oriented (as they are on Linux), the probability of introducing delays is much lower; the result depends only on the total size of the data.

Linux requires confirmation after the first packet arrives, whereas FreeBSD waits for hundreds of packets before doing so.

On Linux systems, the effect of TCP_NODELAY is therefore very different from what developers accustomed to the BSD TCP/IP stack expect, and Apache's performance on Linux is worse. Other applications that frequently use TCP_NODELAY on Linux have the same problem.

TCP_DEFER_ACCEPT

The first option we will consider is TCP_DEFER_ACCEPT (this is its name on Linux; some other operating systems have the same option under a different name). To understand the idea behind it, it helps to outline a typical HTTP client/server interaction.

Recall how a TCP connection is established for the purpose of transferring data.

On the network, information travels in discrete units called IP packets (or IP datagrams). A packet always has a header that carries service information, used for internal protocol handling, and it may carry a data payload as well.

A typical example of service information is the set of so-called flags, which mark packets as having special meaning to the TCP/IP stack, such as acknowledging the successful receipt of a packet.

Often it is perfectly possible to carry a payload in a "flagged" packet, but sometimes internal logic forces the TCP/IP stack to send a packet containing only a header.

Such packets often introduce unwanted network delays and add load to the system, reducing overall network performance.


Now the server creates a socket and waits for connections. The TCP/IP connection procedure is the so-called "three-way handshake". First, the client sends a TCP packet with the SYN flag set and no data payload (a SYN packet). The server then replies with a packet with the SYN/ACK flags set (a SYN/ACK packet) to acknowledge receipt of the initial packet.

The client then sends an ACK packet to acknowledge receipt of the second packet, which completes connection establishment. After receiving this ACK packet from the client, the server wakes up a receiving process to wait for data to arrive. When the three-way handshake is finished, the client starts sending "useful" data to the server. Usually an HTTP request is quite small and fits entirely into a single packet. But in the case above, at least 4 packets are transmitted in both directions, which adds considerable latency.

Note also that the receiver had already begun waiting for the information before the "useful" data was ever sent.


To mitigate the impact of these problems, Linux (along with some other operating systems) includes the TCP_DEFER_ACCEPT option in its TCP implementation. It is set on the listening socket on the server side. The option tells the kernel not to wait for the final ACK packet and not to wake the listening process until the first packet carrying real data has arrived.

After the SYN/ACK packet is sent, the server simply waits for the client to send an IP packet carrying data. Now only 3 packets travel across the network, and connection-establishment latency is significantly reduced, which is especially noticeable for HTTP traffic.


This option has counterparts on several operating systems.

For example, on FreeBSD the same behavior can be achieved with the following code:

/* irrelevant code omitted for clarity */
struct accept_filter_arg af = { "dataready", "" };
setsockopt(s, SOL_SOCKET, SO_ACCEPTFILTER, &af, sizeof(af));
This feature is called an "accept filter" on FreeBSD, and it can be used in several ways.

In almost all cases, though, the effect is the same as TCP_DEFER_ACCEPT: the server waits only for a packet carrying a data payload instead of waiting for the final ACK packet.

For more information about this option and its importance for high-performance web servers, refer to the Apache documentation.


As far as HTTP client/server interaction is concerned, it may also be necessary to change the behavior of the client program.

Why does the client send this "useless" ACK packet at all? Because the TCP stack has no way of knowing the status of the ACK. If FTP rather than HTTP were being used, the client would not send data until it received a prompt packet from the FTP server; in that case a delayed ACK would delay the client/server interaction. To decide whether an ACK is necessary, the client would have to know the application protocol and its current state. This makes changing the client's behavior necessary.
For Linux client programs there is another option we can use, also called TCP_DEFER_ACCEPT. Sockets come in two kinds, listening sockets and connected sockets, and each kind has its own corresponding set of TCP options, so it is entirely possible for two options used at the same time to share the same name. When this option is set on a connected socket, the client no longer sends an ACK packet after receiving the SYN/ACK packet, but instead waits for the next data request from the user program; the number of packets sent to the server is reduced accordingly.
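On the listening side, the server-side use sketched below sets the option before listen() (Linux-specific; the option value is the number of seconds the kernel may wait for data, and the helper name defer_accept is ours):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Linux-specific: tells the kernel not to wake accept() for a new
 * connection until the first data packet arrives, or until `seconds`
 * have elapsed. Returns 0 on success, -1 on error or where the option
 * is unavailable. */
int defer_accept(int s, int seconds)
{
#ifdef TCP_DEFER_ACCEPT
    return setsockopt(s, IPPROTO_TCP, TCP_DEFER_ACCEPT,
                      &seconds, sizeof(seconds));
#else
    (void)s;
    (void)seconds;
    return -1; /* not available on this platform */
#endif
}
```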

TCP_QUICKACK

Another way to prevent delays caused by sending useless packets is to use the TCP_QUICKACK option.

This option differs from TCP_DEFER_ACCEPT: it can be used not only to manage the connection-establishment process but also during normal data transfer, and it can be set on either side of the client/server connection. Delaying the ACK packet is useful if you know that data will be sent soon, and it is best to set the ACK flag on a packet that carries data, to minimize network load. The TCP_QUICKACK option can be set to 0 when the sender is certain that data will be sent immediately (in multiple packets). For sockets in the connected state the default value of this option is 1, and the kernel resets it to 1 immediately after first use (it is a one-shot option).
In some cases it is useful to send ACK packets immediately: an ACK acknowledges receipt of a data block, and the next block can then be processed without introducing delay. This transfer pattern is typical of interactive processes, since user input cannot be predicted in such cases. On Linux systems this is the default socket behavior.
In the scenario described above, the client sends an HTTP request to the server. The request packet is known in advance to be short, so it should be sent immediately after the connection is established; this is the typical way HTTP works. Since no pure ACK packet needs to be sent, TCP_QUICKACK can be set to 0 to improve performance.
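Toggling the option might look like this (Linux-specific; the helper name set_quickack is ours, and because the kernel may re-enable the flag after use, real code sets it again around each relevant transfer):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Linux-specific: enables (on = 1) or suppresses (on = 0) immediate
 * ACKs on a TCP socket. This is a one-shot hint; the kernel may reset
 * it to 1 after use. Returns 0 on success, -1 on error. */
int set_quickack(int s, int on)
{
#ifdef TCP_QUICKACK
    return setsockopt(s, IPPROTO_TCP, TCP_QUICKACK, &on, sizeof(on));
#else
    (void)s;
    (void)on;
    return -1; /* not available on this platform */
#endif
}
```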

On the server side, both of these options can only be set once, on the listening socket. All the other sockets, that is, the sockets created indirectly by the accept() call, inherit all the options of the original socket.
Combining the TCP_CORK, TCP_DEFER_ACCEPT and TCP_QUICKACK options reduces the number of packets involved in each HTTP interaction to the minimum acceptable level (given the requirements of the TCP protocol and security considerations).

The result is not only faster data transfer and request processing, but also minimized client/server round-trip latency.

II. Usage examples

1. closesocket() (which generally does not close immediately but goes through the TIME_WAIT state), when you want to continue reusing the socket:
BOOL bReuseAddr = TRUE;
setsockopt(s, SOL_SOCKET, SO_REUSEADDR, (const char *)&bReuseAddr, sizeof(BOOL));
2. To force a socket that is already connected to close after calling closesocket(), without going through
the TIME_WAIT state:
BOOL bDontLinger = FALSE;
setsockopt(s, SOL_SOCKET, SO_DONTLINGER, (const char *)&bDontLinger, sizeof(BOOL));
3. During send() and recv(), network conditions and other factors can sometimes prevent transmission from completing as expected; to set send and receive timeouts:
int nNetTimeout = 1000; // 1 second
// send timeout
setsockopt(socket, SOL_SOCKET, SO_SNDTIMEO, (char *)&nNetTimeout, sizeof(int));
// receive timeout
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, (char *)&nNetTimeout, sizeof(int));
4. send() returns the number of bytes actually sent (synchronous) or copied into the socket buffer
(asynchronous); by default the system sends and receives 8688 bytes (about 8.5K) at a time. When the amount of data
sent or received is large, you can enlarge the socket buffers to avoid looping over send()/recv() repeatedly:
// receive buffer
int nRecvBuf = 32 * 1024; // set to 32K
setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&nRecvBuf, sizeof(int));
// send buffer
int nSendBuf = 32 * 1024; // set to 32K
setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&nSendBuf, sizeof(int));
5. If, when sending data, you want to skip the copy from the system buffer to the socket buffer to improve
program performance:
int nZero = 0;
setsockopt(socket, SOL_SOCKET, SO_SNDBUF, (char *)&nZero, sizeof(nZero));
6. The same for recv() (by default the contents of the socket buffer are copied to the system buffer):
int nZero = 0;
setsockopt(socket, SOL_SOCKET, SO_RCVBUF, (char *)&nZero, sizeof(int));
7. Generally, when sending UDP datagrams, if you want the data sent by the socket to have broadcast characteristics:
BOOL bBroadcast = TRUE;
setsockopt(s, SOL_SOCKET, SO_BROADCAST, (const char *)&bBroadcast, sizeof(BOOL));
8. When a client connects to a server with a socket in non-blocking mode, the connect() can
be made to wait until accept() is called (this setting is only significant in the non-blocking case;
it is of little use with blocking calls):
BOOL bConditionalAccept = TRUE;
setsockopt(s, SOL_SOCKET, SO_CONDITIONAL_ACCEPT, (const char *)&bConditionalAccept, sizeof(BOOL));
9. If closesocket() is called while a send() is incomplete and data remains unsent, the measure
usually taken is a "graceful close", shutdown(s, SD_BOTH), but data is definitely lost. How do you configure the socket to meet the specific
application requirement that unsent data is delivered before the socket closes?
struct linger {
    u_short l_onoff;
    u_short l_linger;
};
struct linger m_sLinger;
m_sLinger.l_onoff = 1;  // allow lingering in closesocket() when unsent data remains
// (if m_sLinger.l_onoff = 0, the behavior is the same as in case 2)
m_sLinger.l_linger = 5; // allow a 5-second linger
setsockopt(s, SOL_SOCKET, SO_LINGER, (const char *)&m_sLinger, sizeof(linger));
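One portability note on item 3: on Windows, SO_SNDTIMEO/SO_RCVTIMEO take an integer millisecond count as above, while on POSIX systems they take a struct timeval. A POSIX sketch (the helper name set_timeouts is ours):

```c
#include <sys/socket.h>
#include <sys/time.h>

/* POSIX form of item 3: sets both the send and the receive timeout to
 * `seconds` seconds using struct timeval rather than the integer
 * millisecond count used on Windows. Returns 0 on success, -1 on
 * error. */
int set_timeouts(int s, int seconds)
{
    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;

    if (setsockopt(s, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof(tv)) < 0)
        return -1;
    return setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```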


