Not many web sites seem to use chunked encoding, apart from those that enable gzip compression: Google.com, for example, and most PHP forums with gzip compression turned on.
As I understand it, the main advantage of chunked encoding is that a program can output content dynamically while it is still running.
For example, suppose a background task takes an hour; you don't want the user to wait a full hour before seeing any result. With chunked encoding the content can be sent piece by piece, so the user can receive the latest processing results at any time.
ASP with buffered output turned off (Response.Buffer = False) uses chunked encoding.
In that mode, every Response.Write produces one chunk, so don't call it too often; otherwise the number of chunks grows too large and the framing overhead wastes space.
If you want to study the actual structure of chunked encoding, debugging with ASP buffering turned off is a convenient way to see it. :)
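The effect of each Response.Write becoming its own chunk can be sketched in Python. This is my own minimal illustration (the `chunk` helper is mine, not part of ASP or any library): each write is framed as one chunk, and the body ends with a zero-size last chunk.

```python
def chunk(data: bytes) -> bytes:
    """Frame one piece of data as a single chunk: hex size, CRLF, data, CRLF."""
    return b"%x\r\n%s\r\n" % (len(data), data)

# Two "writes" become two chunks; the message ends with a 0-size last-chunk
# followed by a blank line (empty trailer).
body = chunk(b"Hello, ") + chunk(b"world!") + b"0\r\n\r\n"
print(body)
```

Note how even a short message pays the per-chunk cost of the size line and two CRLF pairs, which is why frequent small writes are wasteful.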
Let's take a look at the definition of chunked encoding in RFC 2616:
Chunked-Body   = *chunk
                 last-chunk
                 trailer
                 CRLF

chunk          = chunk-size [ chunk-extension ] CRLF
                 chunk-data CRLF
chunk-size     = 1*HEX
last-chunk     = 1*("0") [ chunk-extension ] CRLF

chunk-extension= *( ";" chunk-ext-name [ "=" chunk-ext-val ] )
chunk-ext-name = token
chunk-ext-val  = token | quoted-string
chunk-data     = chunk-size(OCTET)
trailer        = *(entity-header CRLF)
Let's simulate the data structure:
[chunk-size] CRLF [chunk-data] CRLF [chunk-size] CRLF [chunk-data] CRLF ... [0] CRLF CRLF
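As concrete bytes, a two-chunk message carrying "hello" and "world" would look like this (a hand-made example for illustration, not captured traffic):

```python
wire = (
    b"5\r\n"      # chunk-size in hex, then CRLF
    b"hello\r\n"  # chunk-data, then CRLF
    b"5\r\n"      # second chunk-size
    b"world\r\n"  # second chunk-data
    b"0\r\n"      # last-chunk (size 0)
    b"\r\n"       # final CRLF ending the Chunked-Body (empty trailer)
)
print(wire)
```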
Note that chunk-size is expressed as hexadecimal ASCII characters. For example "86ae" appears on the wire as the four bytes 0x38 0x36 0x61 0x65; as a length it equals 34478, meaning that 34478 bytes of chunk data follow the CRLF.
Tracing the data returned by www.yahoo.com, I found extra spaces in the chunk-size field. It may be a fixed-width field of 7 bytes, padded with spaces (ASCII 0x20) when the size takes fewer than 7 bytes.
Here is the pseudocode for the decoding process (from RFC 2616):
length := 0                                          // records the decoded entity length
read chunk-size, chunk-extension (if any) and CRLF   // read the first chunk size
while (chunk-size > 0) {                             // loop until a chunk size of 0 is read
    read chunk-data and CRLF                         // read the chunk data, terminated by CRLF
    append chunk-data to entity-body                 // append the chunk data to the decoded entity
    length := length + chunk-size                    // update the decoded entity length
    read chunk-size and CRLF                         // read the next chunk size
}
read entity-header                                   // the following reads all trailer headers
while (entity-header not empty) {
    append entity-header to existing header fields
    read entity-header
}
Content-Length := length                             // add a Content-Length header
remove "chunked" from Transfer-Encoding              // remove "chunked" from the Transfer-Encoding header
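The pseudocode above can be turned into a working Python decoder. This is a sketch under my own assumptions (the function name is mine): it assumes the whole chunked body is already in memory, discards chunk-extensions, and does no error handling for truncated input.

```python
def decode_chunked(data: bytes):
    """Decode a chunked body per the RFC 2616 pseudocode.
    Returns (entity_body, trailer_headers)."""
    body = bytearray()
    pos = 0
    while True:
        line_end = data.index(b"\r\n", pos)
        size_field = data[pos:line_end].split(b";", 1)[0]  # drop any chunk-extension
        size = int(size_field.strip(), 16)                 # hex size; tolerate space padding
        pos = line_end + 2
        if size == 0:                                      # last-chunk reached
            break
        body += data[pos:pos + size]                       # chunk-data
        pos += size + 2                                    # skip data plus trailing CRLF
    trailers = {}
    while True:                                            # read trailer entity-headers
        line_end = data.index(b"\r\n", pos)
        line = data[pos:line_end]
        pos = line_end + 2
        if not line:                                       # empty line: end of message
            break
        name, _, value = line.partition(b":")
        trailers[name.strip()] = value.strip()
    return bytes(body), trailers

print(decode_chunked(b"5\r\nhello\r\n5\r\nworld\r\n0\r\n\r\n"))
```

A real client would also set Content-Length from the decoded length and drop "chunked" from Transfer-Encoding, as the last two pseudocode lines describe.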
When I have time I want to study how gzip+chunked is encoded. My first guess was that each chunk is gzip-compressed independently, but per RFC 2616 the Content-Encoding (gzip) is applied to the entity-body as a whole first, and the compressed stream is then split into chunks.
Using chunked encoding naturally costs a little performance, since the chunk framing adds overhead beyond the plain data body.
In some cases, however, block-by-block output is required, and then chunked encoding is the only option.