HTTP 1.0 vs 1.1
Proxy support and the Host field:
HTTP 1.1 has a required Host header by spec.
HTTP 1.0 does not officially require a Host header, but it doesn't hurt to add one, and many applications (proxies) expect to see the Host header regardless of the protocol version.
Example:
GET / HTTP/1.1
Host: www.blahblahblahblah.com
This header is useful because it allows you to route a message through proxy servers, and also because your web server can distinguish between different sites on the same server.
So if you had blahblahlbah.com and helohelohelo.com both pointing to the same IP, your web server can use the Host field to distinguish which site the client wants.
Persistent connections:
HTTP 1.1 also allows persistent connections, which means you can have more than one request/response pair on the same HTTP connection.
In HTTP 1.0 you had to open a new connection for each request/response pair, and after each response the connection would be closed. This led to big efficiency problems because of TCP Slow Start.
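For example, a sketch of two requests reusing one HTTP/1.1 connection (the hostname and paths are placeholders):

GET /index.html HTTP/1.1
Host: www.example.com

GET /style.css HTTP/1.1
Host: www.example.com
Connection: close

In HTTP 1.1 the connection stays open by default after each response; the client sends "Connection: close" on its last request (or the server on its last response) to signal that the connection should be torn down afterwards.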
OPTIONS Method:
- HTTP/1.1 introduces the OPTIONS method. An HTTP client can use this method to determine the capabilities of the HTTP server. It's mostly used for Cross-Origin Resource Sharing (CORS) in web applications.
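For example, a client can ask what methods a resource supports (the hostname, path, and Allow list here are illustrative):

OPTIONS /resource HTTP/1.1
Host: www.example.com

HTTP/1.1 204 No Content
Allow: GET, HEAD, POST, OPTIONS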
Caching:
HTTP 1.0 had support for caching via the If-Modified-Since header.
HTTP 1.1 expands on the caching support a lot by using something called an 'entity tag' (ETag). If two resources are the same, then they will have the same entity tag.
HTTP 1.1 also adds the If-Unmodified-Since, If-Match, and If-None-Match conditional headers.
There are also further additions relating to caching, like the Cache-Control header.
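For example, a sketch of a conditional request using an entity tag (the hostname and ETag value are made up):

GET /logo.png HTTP/1.1
Host: www.example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"

Because the entity tag still matches, the server answers 304 Not Modified with no body, and the client keeps using its cached copy.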
Continue Status:
- There is a new return code in HTTP/1.1: 100 Continue. This is used to prevent a client from sending a large request when that client is not even sure the server can process the request, or is authorized to process it. In this case the client sends only the headers, and the server tells the client, with 100 Continue, to go ahead with the body.
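A sketch of that exchange (the path and size are illustrative):

POST /upload HTTP/1.1
Host: www.example.com
Content-Length: 104857600
Expect: 100-continue

HTTP/1.1 100 Continue

(client now sends the 100 MB body)

HTTP/1.1 200 OK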
Much more:
- Digest authentication and proxy authentication
- Extra new status codes
- Chunked transfer encoding (see the sketch after this list)
- Connection header
- Enhanced compression support
- Much, much more.
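A minimal sketch of chunked transfer encoding (the payload is made up): each chunk is prefixed by its size in hex, and a zero-size chunk marks the end, so the server can stream a response without knowing the total Content-Length up front.

HTTP/1.1 200 OK
Transfer-Encoding: chunked

5
Hello
7
, world
0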
What are the key differences to HTTP/1.x? At a high level, HTTP/2:
- is binary, instead of textual
- is fully multiplexed, instead of ordered and blocking
- can therefore use one connection for parallelism
- uses header compression to reduce overhead
- allows servers to 'push' responses proactively into client caches
Why is HTTP/2 binary?
Binary protocols are more efficient to parse, more compact 'on the wire', and most importantly, much less error-prone than textual protocols like HTTP/1.x, which need a number of affordances to 'help' with things like whitespace handling, capitalization, line endings, blank lines and so on.
For example, HTTP/1.1 defines four different ways to parse a message; in HTTP/2, there's just one code path.
It's true that HTTP/2 isn't usable through telnet, but we already have some tool support, such as a Wireshark plugin.
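Concretely, every HTTP/2 frame starts with the same fixed 9-byte header (this layout is from RFC 7540), which is part of why a single code path can parse it:

+-----------------------------------------------+
|                 Length (24)                   |
+---------------+---------------+---------------+
|   Type (8)    |   Flags (8)   |
+-+-------------+---------------+-------------------------------+
|R|                 Stream Identifier (31)                      |
+=+=============================================================+
|                   Frame Payload (0 ...)                     ...
+---------------------------------------------------------------+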
Why is HTTP/2 multiplexed?
HTTP/1.x has a problem called "head-of-line blocking," where effectively only one request can be outstanding on a connection at a time.
HTTP/1.1 tried to fix this with pipelining, but it didn't completely address the problem (a large or slow response can still block others behind it). Additionally, pipelining has been found very difficult to deploy, because many intermediaries and servers don't process it correctly.
This forces clients to use a number of heuristics (often guessing) to determine what requests to put on which connection to the origin, and when; since it's common for a page to load ten times (or more) the number of available connections, this can severely impact performance, often resulting in a "waterfall" of blocked requests.
Multiplexing addresses these problems by allowing multiple request and response messages to be in flight at the same time; it's even possible to intermingle parts of one message with another on the wire.
This, in turn, allows a client to use just one connection per origin to load a page.
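As a rough sketch, frames from different streams can be interleaved on one connection (the stream numbers and paths are illustrative; client-initiated streams are odd-numbered):

client -> server: HEADERS (stream 1)  GET /index.html
client -> server: HEADERS (stream 3)  GET /style.css
server -> client: DATA    (stream 3)  first part of style.css
server -> client: DATA    (stream 1)  first part of index.html
server -> client: DATA    (stream 3)  rest of style.css
server -> client: DATA    (stream 1)  rest of index.html

Neither response blocks the other; the client reassembles each stream independently.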
Why just one TCP connection?
With HTTP/1, browsers open between four and eight connections per origin. Since many sites use multiple origins, this could mean that a single page load opens more than thirty connections.
One application opening so many connections simultaneously breaks a lot of the assumptions that TCP was built upon; since each connection will start a flood of data in the response, there's a real risk that buffers in the intervening network will overflow, causing a congestion event and retransmits.
Additionally, using so many connections unfairly monopolizes network resources, "stealing" them from other, better-behaved applications (e.g., VoIP).
What's the benefit of Server Push?
When a browser requests a page, the server sends the HTML in the response, and then needs to wait for the browser to parse the HTML and issue requests for all of the embedded assets before it can start sending the JavaScript, images and CSS.
Server Push potentially allows the server to avoid this round trip of delay by "pushing" the responses it thinks the client will need into its cache.
However, pushing responses is not "magical"; if used incorrectly, it can harm performance. Correct use of Server Push is an ongoing area of experimentation and research.
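As a sketch of the push flow (the paths are illustrative): the PUSH_PROMISE is sent on the stream of the request it belongs to, and the pushed response arrives on a new server-initiated (even-numbered) stream.

client -> server: HEADERS (stream 1)         GET /index.html
server -> client: PUSH_PROMISE (stream 1)    promises /style.css on stream 2
server -> client: HEADERS + DATA (stream 1)  the HTML
server -> client: HEADERS + DATA (stream 2)  the CSS, before the client asks for it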
Why do we need header compression?
Patrick McManus from Mozilla showed this vividly by calculating the effect of headers for an average page load.
If you assume that a page has about 80 assets (which is conservative in today's Web), and each request has 1400 bytes of headers (again, not uncommon, thanks to Cookies, Referer, etc.), it takes at least 7-8 round trips to get the headers out "on the wire." That's not counting response time; that's just to get them out of the client.
This is because of TCP's Slow Start mechanism, which paces packets out on new connections based on how many packets have been acknowledged, effectively limiting the number of packets that can be sent for the first few round trips.
In comparison, even mild compression on headers allows those requests to get onto the wire within one round trip, perhaps even one packet.
This overhead is considerable, especially when you consider the impact on mobile clients, which typically see round-trip latency of several hundred milliseconds, even under good conditions.
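To see why headers compress so well, compare two consecutive requests on the same connection (all values here are illustrative). Only the path differs; everything else is byte-for-byte identical, which is exactly the redundancy that header compression removes:

GET /a.css HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64)
Accept: text/css,*/*;q=0.1
Cookie: session=abc123

GET /b.css HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (X11; Linux x86_64)
Accept: text/css,*/*;q=0.1
Cookie: session=abc123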
Why HPACK?
SPDY/2 proposed using a single GZIP context in each direction for header compression, which was simple to implement as well as efficient.
Since then, a major attack has been documented against the use of stream compression (like GZIP) inside of encryption: CRIME.
With CRIME, it's possible for an attacker who has the ability to inject data into the encrypted stream to "probe" the plaintext and recover it. Since this is the Web, JavaScript makes this possible, and there were demonstrations of recovery of cookies and authentication tokens using CRIME for TLS-protected HTTP resources.
As a result, we could not use GZIP compression. Finding no other algorithms that were suitable for this use case as well as safe to use, we created a new, header-specific compression scheme that operates at a coarse granularity; since HTTP headers often don't change between messages, this still gives reasonable compression efficiency, and is much safer.
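As a rough illustration of how HPACK achieves this (the indices are from RFC 7541's static table; the dynamic-table handling is simplified here):

First request:
  :method: GET          -> static table index 2, about one byte
  :path: /index.html    -> static table index 5, about one byte
  host: www.example.com -> sent as a literal, then added to the dynamic table

Later request on the same connection:
  host: www.example.com -> now a one-byte index into the dynamic table

Because matching happens only on whole header fields, an attacker can no longer probe the plaintext byte by byte the way CRIME does against GZIP, yet repeated headers still shrink to roughly a byte each.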
Reference documents
- https://stackoverflow.com/questions/246859/http-1-0-vs-1-1
- https://http2.github.io/faq/#why-do-we-need-header-compression