In the topic describing TCP's Maximum Segment Size (MSS) parameter, I explained the trade-off involved in determining the optimal size of TCP segments. If segments are too large, we risk having them become fragmented at the IP level. Too small, and we get greatly reduced performance, because we are sending a small amount of data in a segment with at least 40 bytes of header overhead. We also use up valuable processing time, which is required to handle each of these small segments.
The MSS parameter ensures that we don't send segments that are too large: TCP is not allowed to create a segment larger than the MSS. Unfortunately, the basic sliding window mechanism doesn't specify any minimum size for the segments that can be transmitted. In fact, not only is it possible for a device to send very small, inefficient segments, but the simplest implementation of flow control using unrestricted window size adjustments ensures that under conditions of heavy load, window size will become small, leading to significant performance reduction!
How Silly Window Syndrome Occurs
To see how this can happen, let's consider an example that is a variation on the one we've been using so far in this section. We'll assume an MSS of 360 and a client/server pair where, again, the server's initial receive window is set to this same value, 360. This means the client can send a "full-sized" segment to the server. As long as the server can keep removing the data from the buffer as fast as the client sends it, we should have no problem. (In reality, the buffer size would normally be larger than the MSS.)
Now, imagine that instead, the server is bogged down for whatever reason while the client needs to send it a great deal of data. For simplicity, let's say that the server is only able to remove 1 byte of data from the buffer for every 3 it receives. Let's say it also removes 40 additional bytes from the buffer during the time it takes for the next client segment to arrive. Here's what will happen:
- The client's send window is 360, and it has lots of data to send. It immediately sends a 360-byte segment to the server. This uses up its entire send window.
- When the server gets this segment, it acknowledges it. However, it can only remove 120 bytes, so the server reduces the window size from 360 to 120. It sends this in the Window field of the acknowledgment.
- The client receives an acknowledgment of 360 bytes and sees that the window size has been reduced to 120. It wants to send its data as soon as possible, so it sends off a 120-byte segment.
- The server has removed 40 more bytes from the buffer by the time the 120-byte segment arrives. The buffer thus contains 320 bytes (the 240 left from the first segment, plus the 120 new, less the 40 removed). The server is able to immediately process one-third of those 120 new bytes, or 40 bytes. This means 80 bytes are added to the 200 that already remained in the buffer, so 280 bytes are used up. The server must reduce the window size to 80 bytes (360 - 280).
- The client sees this reduced window size and sends an 80-byte segment.
- The server started with 280 bytes in the buffer and removed 40, leaving 240 bytes. It receives 80 bytes from the client and removes one-third (about 27), so 53 bytes are added to the buffer, which now holds 293 bytes. It reduces the window size to 67 bytes (360 - 293).
This process, which is illustrated in Figure 228, will continue for many rounds, with the window size getting smaller and smaller, especially if the server gets even more overloaded. Its rate of clearing the buffer may decrease even more, and the window may close entirely.
Figure 228: TCP "Silly Window Syndrome"
This diagram shows one example of how the phenomenon known as TCP silly window syndrome can arise. The client is trying to send data as fast as possible to the server, which is very busy and cannot clear its buffers promptly. Each time the client sends data, the server reduces its receive window. The size of the messages the client sends shrinks until it is only sending very small, inefficient segments.
(Note that in this diagram I have shown the server's buffer fixed in position, rather than sliding to the right as in the other diagrams in this section. This lets you see the receive window decreasing in size more easily.)
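The shrinking-window arithmetic in the example above can be sketched in a few lines of code. This is a minimal, hypothetical simulation (not real TCP): the client always fills the advertised window, the server immediately processes one-third of each arriving segment, and another 40 bytes drain from the buffer while each segment is in flight.

```python
# Hypothetical simulation of the silly window syndrome example above.
# Buffer size and MSS are both 360 bytes; numbers match the walk-through.

BUFFER = 360

def simulate(rounds=6):
    used = 0          # bytes currently held in the server's receive buffer
    window = BUFFER   # receive window advertised to the client
    sizes = []        # segment sizes the client ends up sending
    for _ in range(rounds):
        if window == 0:
            break
        segment = window                  # client fills the whole window
        sizes.append(segment)
        used = max(0, used - 40)          # 40 bytes drained while in flight
        removed = round(segment / 3)      # one-third processed on arrival
        used += segment - removed
        window = BUFFER - used            # next advertised window
    return sizes

print(simulate())   # -> [360, 120, 80, 67, 62, 61]: segments keep shrinking
```

Note how the first four values, 360, 120, 80, and 67, reproduce the rounds worked through above; the window never recovers because the advertisement shrinks on every exchange.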
Let's suppose this happens. Eventually, the server will remove some of the data from the buffer. Let's say it removes 40 bytes by the time the first closed-window "probe" from the client arrives. The server then reopens the window to a size of 40 bytes. The client is still desperate to send data as fast as possible, so it generates a 40-byte segment. And so it goes, with likely all the remaining data passing from the client to the server in tiny segments, until either the client runs out of data or the server clears the buffer more quickly.
Now imagine the worst-case scenario. This time, it's the application process on the server that is overloaded. It draws data from the buffer one byte at a time. Every time it removes a byte from the server's buffer, the server's TCP opens the window with a window size of exactly 1 and puts this in the Window field of an acknowledgment to the client. The client then sends a segment with exactly one byte, refilling the buffer, until the application draws off the next byte.
The Cause of Silly Window Syndrome: Inefficient Reductions of Window Size
None of what we have seen above represents a failure per se of the sliding window mechanism. It is working properly to keep the server's receive buffer filled and to manage the flow of data. The problem is that the sliding window mechanism is only concerned with managing the buffer; it doesn't take into account the inefficiency of the small segments that result when the window size is micromanaged in this way.
In essence, by sending small window size advertisements we are "winning the battles but losing the war". Early TCP/IP researchers who discovered this phenomenon called it silly window syndrome (SWS), a play on the phrase "sliding window system" that expresses their opinion of how it behaves when it gets into this state.
The examples above show how SWS can be caused by the advertisement of small window sizes by a receiving device. It's also possible for SWS to happen if the sending device isn't careful about how it generates segments for transmission, regardless of the state of the receiver's buffers.
For example, suppose the client TCP in the example above were receiving data from the sending application in blocks of 10 bytes at a time, and the sending TCP were so impatient to get the data to the client that it took each 10-byte block and immediately packaged it into a segment, even though the next 10-byte block was coming shortly thereafter. This would result in a needless swarm of inefficient 10-data-byte segments.
Silly Window Syndrome Avoidance Algorithms
Since SWS is caused by the basic sliding window system not paying attention to the consequences of decisions that create small segments, dealing with SWS is conceptually simple: change the system so that we avoid small window size advertisements and, at the same time, avoid sending small segments. Since both the sender and the recipient of data contribute to SWS, changes are made to the behavior of both. These changes are collectively termed SWS avoidance algorithms.
Receiver SWS Avoidance
Let's start with SWS avoidance by the receiver. As we saw in the initial example above, the receiver contributed to SWS by reducing the size of its receive window to smaller and smaller values because it was busy. This caused the right edge of the sender's send window to move by ever-smaller increments, leading to smaller and smaller segments. To avoid SWS, we simply make a rule that the receiver may not update its advertised receive window in such a way that it leaves too little usable window space for the sender. In other words, we restrict the receiver from moving the right edge of the window by too small an amount. The usual minimum by which the edge may be moved is either the value of the MSS parameter or one-half the buffer size, whichever is less.
Let's see how we might use this rule in the example above. When the server receives the initial 360-byte segment from the client and can only process 120 bytes, it does not reduce the window size to 120. It reduces it all the way to 0, closing the window. It sends this back to the client, which will then stop and not send a small segment. Once the server has removed 180 bytes from the buffer, it will have 180 bytes free, half the size of the buffer. It then opens the window up to 180 bytes and sends the new window size to the client.
It will continue to advertise either 0 bytes, or 180 or more, never smaller values in between. This may seem to slow down the operation of TCP, but it really doesn't. Because the server is overloaded, the limiting factor in the overall performance of the connection is the rate at which the server can clear the buffer. We are just exchanging many small segments for a few larger ones.
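The receiver-side rule can be expressed as a small sketch. The function name and parameters here are illustrative, not from any real TCP implementation; the rule itself, advertise the free space only when it is at least min(MSS, buffer/2) and otherwise advertise 0, is the one described above.

```python
# Sketch of receiver SWS avoidance, using the example's 360-byte buffer/MSS.

def advertised_window(free_space, buffer_size=360, mss=360):
    """Return the window to advertise given the buffer's free space."""
    threshold = min(mss, buffer_size // 2)   # MSS or half the buffer,
    if free_space >= threshold:              # whichever is less
        return free_space                    # enough room: advertise it all
    return 0                                 # too little room: close window

print(advertised_window(120))   # -> 0, window stays closed
print(advertised_window(180))   # -> 180, half the buffer is free, reopen
```

With a 360-byte buffer the threshold works out to 180, matching the walk-through: 120 free bytes produce a closed window, and the window only reopens once 180 bytes are available.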
Sender SWS Avoidance and Nagle's Algorithm
SWS avoidance by the sender is generally accomplished by imposing "restraint" on the part of the transmitting TCP. Instead of trying to send data as soon as we can, we wait to send until we have a segment of a reasonable size. The specific method for doing this is called Nagle's algorithm, named for its inventor, John Smith. (Just kidding, it was John Nagle.) Simplified, this algorithm works as follows:
- As long as there is no unacknowledged data outstanding on the connection, data can be sent as soon as the application wants. For example, in the case of an interactive application like Telnet, a single keystroke can be "pushed" in a segment.
- While there is unacknowledged data, all subsequent data to be sent is held in the transmit buffer and not transmitted until either all the unacknowledged data is acknowledged or we have accumulated enough data to send a full-sized (MSS-sized) segment. This applies even if a "push" is requested by the user.
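The two rules above boil down to a single send/hold decision, which can be sketched as follows. This is a simplified illustration in the spirit of the description above, not the exact logic of RFC 896 or of any real TCP stack, and the names are made up for the example.

```python
# Simplified sender-side decision in the spirit of Nagle's algorithm.

def should_send(buffered_bytes, unacked_bytes, mss=360):
    """Decide whether the sending TCP should transmit now."""
    if unacked_bytes == 0:
        return True                # nothing in flight: even 1 byte may go now
    return buffered_bytes >= mss   # else hold until a full-sized segment

print(should_send(1, 0))       # -> True: a lone keystroke is pushed at once
print(should_send(10, 350))    # -> False: data in flight, only 10 bytes queued
print(should_send(360, 350))   # -> True: a full MSS has accumulated
```

Note how this naturally self-tunes: an ACK arriving quickly (fast network, slow typist) resets `unacked_bytes` to 0 and lets the next small segment out, while a busy connection accumulates data into full-sized segments.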
This might seem strange, especially the part about buffering data despite a push request! You might think this would cause applications like Telnet to "break". In fact, Nagle's algorithm is a very clever method that suits the needs of both low-data-rate interactive applications like Telnet and high-bandwidth file transfer applications.
If you are using something like Telnet, where the data arrives very slowly (humans are very slow compared to computers), the initial data (the first keystroke) can be pushed right away. The next keystroke has to wait for an acknowledgment, but this will probably come reasonably soon relative to how long it takes to hit the next key. In contrast, more conventional applications that generate data in large amounts will automatically have the data accumulated into larger segments for efficiency.
Nagle's algorithm is actually rather more complex than this description suggests, but this topic is already getting long. RFC 896 discusses it in (much) more detail.