Defined here by Wikipedia.org: “HTTP pipelining is a technique in which multiple HTTP requests are written out to a single socket without waiting for the corresponding responses.”
Mozilla talks a bit more about the topic: “Normally, HTTP requests are issued sequentially, with the next request being issued only after the response to the current request has been completely received. Depending on network latencies and bandwidth limitations, this can result in a significant delay before the next request is seen by the server.”
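To make that concrete, here’s a minimal sketch of what pipelining looks like at the socket level, written as a Node.js/TypeScript script. The host and paths are placeholders of my own choosing, and it assumes the server actually supports pipelining (answering both requests, in order, on the same connection):

```typescript
// Minimal illustration of HTTP pipelining: two requests written to one
// socket back-to-back, before any response has arrived.
import * as net from "net";

const socket = net.connect(80, "example.com", () => {
  // Both requests go out immediately; no waiting on the first response.
  socket.write(
    "GET /first.html HTTP/1.1\r\nHost: example.com\r\nConnection: keep-alive\r\n\r\n" +
    "GET /second.html HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
  );
});

socket.on("data", (chunk) => process.stdout.write(chunk.toString()));
socket.on("end", () => console.log("\n(connection closed)"));
```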
I have accepted the fact that HTTP pipelining is pretty much disabled in all modern browsers, but that doesn’t mean I have to like it!
I have a widget in Firefox that allows me to bypass this missing “feature,” and it sure seems to speed up my browsing quite a bit. However, why can’t everyone get together and work this problem out so we don’t need extensions/widgets/hacks to get around the limitations?
I attempted to harass Microsoft about it and never received an answer (I didn’t really harass them, per se). Firefox isn’t mum on the subject (here), but it seems to come down to compatibility issues with certain servers, routers, et cetera in some specific cases (even though the HTTP/1.1 spec allows it).
So what does a web developer do (programmatically)?
Do we just accept the fact and move on or is there something we can do about it? How can we speed up our page loads to a world that can’t use pipelining?
It turns out there is a relatively simple way to “fake HTTP pipelining.” When I read through the article “Optimizing Page Load Time”, I had a very revealing moment of self-inflicted disrespect. The solution is so obvious, but it never dawned on me previously. Why not simply source content on the page from different locations? It doesn’t even have to be different servers, just different domains. Pretty simple, right?
For example, we could do something like this for a single web page:
- Static Images: images.dustinweber.com
- JavaScript Includes: includes.dustinweber.com
- CSS: css.dustinweber.com
- Static Content: static.dustinweber.com
- Dynamic Content: dynamic.dustinweber.com
Now, this is a pretty extreme example that I wouldn’t recommend for production (except in very specific cases), but let me explain what happens in simple terms. Instead of your browser making requests to one domain for all the content, data, files, and everything else on a page, it splits the requests up amongst the various subdomains (which could be hosted separately or together).
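For illustration, here’s a small TypeScript sketch of how a page generator might rewrite asset URLs onto those subdomains. The `shardUrl` helper and the asset-type mapping are hypothetical; they just mirror the example list above:

```typescript
// Hypothetical helper: map each asset type to one of the subdomains above,
// so the browser opens parallel connections instead of queueing everything
// on one hostname.
type AssetKind = "image" | "script" | "css" | "static" | "dynamic";

const shardHosts: Record<AssetKind, string> = {
  image: "images.dustinweber.com",
  script: "includes.dustinweber.com",
  css: "css.dustinweber.com",
  static: "static.dustinweber.com",
  dynamic: "dynamic.dustinweber.com",
};

function shardUrl(kind: AssetKind, path: string): string {
  // Strip any leading slash so the join is clean.
  return `http://${shardHosts[kind]}/${path.replace(/^\//, "")}`;
}

// Example usage when emitting the page:
console.log(shardUrl("image", "/logo.png"));  // http://images.dustinweber.com/logo.png
console.log(shardUrl("script", "main.js"));   // http://includes.dustinweber.com/main.js
```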
What does splitting up the content get us?
The advantage is that the browser isn’t sitting around waiting on previous requests to complete before moving on to the next item. It really only makes sense for larger pages. In fact, there is a drawback; according to Aaron: “Beware that each additional hostname adds the overhead of an extra DNS lookup and an extra TCP three-way handshake. If your users have pipelining enabled or a given page loads fewer than around a dozen objects, they will see no benefit from the increased concurrency and the site may actually load more slowly. The benefits only become apparent on pages with larger numbers of objects. Be sure to measure the difference seen by your users if you implement this.”
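If you want to take that measuring advice literally, one way in today’s browsers (these APIs didn’t exist when this technique was first suggested) is the Navigation and Resource Timing APIs. This is a rough sketch, not a proper measurement setup:

```typescript
// Rough client-side timing check: log total page load time, plus how long
// each individual resource took (which is where the extra DNS lookups and
// TCP handshakes from sharding would show up).
window.addEventListener("load", () => {
  const t = performance.timing;
  console.log(`Page loaded in ${t.loadEventStart - t.navigationStart} ms`);

  for (const entry of performance.getEntriesByType("resource")) {
    const r = entry as PerformanceResourceTiming;
    console.log(`${r.name}: ${Math.round(r.duration)} ms`);
  }
});
```

Run it with and without the extra hostnames and compare; if the page loads only a handful of objects, you may well confirm Aaron’s warning rather than see a speedup.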
Perhaps now you can consider playing around with this idea a bit on your own. Given plenty of tinkering time and careful examination, it could help decrease page load times noticeably.
If you’d like some more tips on this subject, check out Optimizing Page Load Time.