September 10, 2013

Why HTTP/2.0? A Perspective

When setting up for the HTTP meeting in Hamburg, I was asked, reasonably enough, what the group is doing, why it is important, and my prognosis for its success.  It was hard to explain, so I thought I'd try to write up my take on "why HTTP/2.0?"  Corrections and additions welcome.

HTTP Started Simple

The HyperText Transfer Protocol, when first proposed, was a very simple network protocol, much simpler than FTP (the File Transfer Protocol) and quite similar to Gopher. Basically, the protocol is layered on the Transmission Control Protocol (TCP), which sets up reliable, bi-directional streams of data. HTTP/0.9 expected one TCP connection per user click to get a new document. When the user clicks a link, the client takes the URL of the link (which contains the host, port, and path) and:
  1. using DNS, the client gets the IP address of the server named in the URL
  2. the client opens a TCP connection to that address on the port named in the URL
  3. the client writes "GET" and the path of the URL onto the connection
  4. the server responds with the HTML for the page
  5. the client reads the HTML and displays it
  6. the connection is closed
Judged by latency and bandwidth, simple HTTP was adequate: the overhead of HTTP/0.9 was minimal, just the time to look up the DNS name and set up the TCP connection.
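
As a concrete illustration, here is a minimal sketch of that exchange in Python over a raw TCP socket. The host, port, and path are placeholders, and few servers today will answer a bare HTTP/0.9 request, so treat it as an illustration rather than working tooling.

    import socket

    def fetch_http09(host, port, path):
        # host, port, and path are placeholders for the parts of the URL
        # Steps 1-2: resolve the host name and open a TCP connection
        with socket.create_connection((host, port)) as conn:
            # Step 3: write "GET" and the path onto the connection
            conn.sendall(f"GET {path}\r\n".encode("ascii"))
            # Steps 4-6: read the HTML until the server closes the connection
            chunks = []
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    # html = fetch_http09("example.com", 80, "/")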

Growing Complexity

HTTP got lots more complicated; changes were reflected in a series of specifications, initially with HTTP/1.0, and subsequently HTTP/1.1. Evolution has been lengthy, painstaking work; a second edition of the HTTP/1.1 specification (in six parts, only now nearing completion) has been under development for 8 years. 

Adding Headers

HTTP/1.0 requests and responses (steps 3 and 4 above) added headers: fields and values that modify the meaning of requests and responses. Headers were added to support a wide variety of additional use cases, e.g., a "Content-Type" header to allow images and other kinds of content, a "Content-Encoding" header and others to allow optional compression, quite a number of headers to support caching and cache maintenance, and a "DNT" header to express user privacy preferences.

While each header has its uses and justification, and many are optional, headers add both size and complexity to every HTTP request. When HTTP headers get big, there is more chance of delay (e.g., the request no longer fits in a single packet), and the same header information gets repeated.
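
To get a feel for the sizes involved, here is a rough back-of-the-envelope sketch in Python. The header values are made up but typical; the point is only that roughly the same bytes are resent with every request for the same page.

    # Illustrative only: made-up but typical request headers
    typical_headers = {
        "Host": "www.example.com",
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
        "Accept": "text/html,application/xhtml+xml,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Accept-Encoding": "gzip, deflate",
        "Cookie": "session=abc123; prefs=compact",
    }

    per_request = sum(len(f"{name}: {value}\r\n") for name, value in typical_headers.items())
    print(f"about {per_request} bytes of headers per request")
    print(f"about {per_request * 40} bytes repeated across 40 requests for one page")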

Many More Requests per Web Page

The use of HTTP changed as web expressiveness increased. NCSA Mosaic led the way by supporting embedded images in web pages, using a separate URL and HTTP request for each image. Over time, more page elements have also been set up as separate cacheable resources, such as style sheets, JavaScript and fonts. Presently, the average popular web home page makes over 40 HTTP requests.

HTTP Is Stateless

Neither the client nor the server needs to allocate memory or remember anything from one request/response to the next. This is an important characteristic of the web: it allows highly popular web sites to serve many independent clients simultaneously, because the server need not allocate and manage memory for each client. The price is that headers must be sent again with every request, to preserve the stateless nature of the protocol.

Congestion and Flow Control

Flow control in TCP, like traffic metering lights, throttles a sender's output to match the receiver's ability to read. Using many simultaneous connections does not work well, because the streams pass through the same routers and bridges, which must handle each stream independently, yet the TCP flow control algorithms do not (and cannot) take into account the traffic on the other connections. Also, setting up a new connection potentially adds latency, and opening an encrypted connection is slower still, since it requires more round trips of communication.

Starting HTTP/2.0

While these problems were well-recognized quite a while ago, earlier work on optimizing HTTP, labeled "HTTP-NG" (next generation), foundered. But more recent work (and deployment) by Google on a protocol called SPDY shows that, at least in some circumstances, HTTP can be replaced with something that improves page load time. SPDY is already widely deployed, but there is an advantage in making it a standard, at least to get review by those using HTTP for other applications. The IETF working group finishing the HTTP/1.1 second edition ("HTTPbis") has been rechartered to develop an HTTP/2.0 that addresses these performance problems. The group decided to start with (a subset of) SPDY and make changes from there.

HTTP/2.0 builds on HTTP/1.1; for the most part it does not reduce the complexity of HTTP, but rather adds new features, primarily for performance.

Header Compression

The obvious way to reduce the size of something is to compress it, and HTTP headers compress well. But the goal is not just to speed transmission; it is also to reduce the time spent parsing the headers. The header compression method is still undergoing significant changes.
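
A quick way to see how compressible headers are is to run a typical header block through a general-purpose compressor. SPDY used a zlib-based scheme, and the HTTP/2.0 method is still changing as I write this, so the Python sketch below only illustrates the compressibility, not the actual algorithm.

    import zlib

    # Illustrative header block with made-up values
    header_block = (
        "Host: www.example.com\r\n"
        "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36\r\n"
        "Accept: text/html,application/xhtml+xml,*/*;q=0.8\r\n"
        "Accept-Language: en-US,en;q=0.5\r\n"
        "Accept-Encoding: gzip, deflate\r\n"
        "Cookie: session=abc123; prefs=compact\r\n"
    )

    raw = header_block.encode("ascii")
    compressed = zlib.compress(raw, 9)
    print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes")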

Connection Multiplexing

One way to ensure coordinated flow control and avoid causing network congestion is to "multiplex" a single connection: rather than open 40 connections, open only one per destination. A site that serves all of its images, style sheets and JavaScript libraries from the same host could send all of the data for the page over that one connection. The remaining issue is how to coordinate independent requests and responses, which may be produced or consumed in chunks.
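
The basic idea can be sketched in a few lines of Python: tag each chunk of data with a stream identifier so that several exchanges can be interleaved on one connection and reassembled independently. The real HTTP/2.0 framing (frame types, priorities, flow-control windows) is considerably more involved; the names here are illustrative only.

    from collections import defaultdict

    def interleave(streams):
        """Yield (stream_id, chunk) pairs round-robin from several streams."""
        pending = {sid: list(chunks) for sid, chunks in streams.items()}
        while any(pending.values()):
            for sid, chunks in pending.items():
                if chunks:
                    yield sid, chunks.pop(0)

    def demultiplex(frames):
        """Reassemble each stream from the interleaved frames."""
        out = defaultdict(list)
        for sid, chunk in frames:
            out[sid].append(chunk)
        return {sid: b"".join(chunks) for sid, chunks in out.items()}

    frames = interleave({1: [b"<html>", b"</html>"], 3: [b"body {", b" }"]})
    print(demultiplex(frames))   # {1: b'<html></html>', 3: b'body { }'}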

Push vs. Pull

A "push" is when the server sends a response that hadn't been asked for. HTTP semantics are strictly request followed by response, and one of the reasons why HTTP was considered OK to let out through a firewall that filtered out incoming requests.  When the server can "push" some content to clients even when the client didn't explicitly request it, it is "server push".  Push in HTTP/2.0 uses a promise "A is what you would get if you asked for B", that is, a promise of the result of a potential pull. The HTTP/2.0 semantics are developed in such a way that these "push" requests look like they are responses to requests not made yet, so it is called a "push promise".  Making use of this capability requires redesigning the web site and server to make proper use of this capability.

With this background, I can now talk about some of the ways HTTP/2.0 can go wrong. Coming up!

