It’s been a while since a far-reaching development in the world of internet technology has caused so much, well… public-facing controversy. But then again, it has also been over 15 years since the last version of the HTTP protocol was released: HTTP/1.1, which went fully live way back in 1999…
I’m referring, of course, to HTTP/2, otherwise known as HTTP 2.0: the latest version of HTTP, one of the most important building blocks of the world wide web. A quick Google search for the term turns up dozens of defensive, critical, yet otherwise insightful perspectives on HTTP/2 from bloggers and corporations alike.
But why so much fear and loathing? The issue, ultimately, is HTTP/2’s promise of “faster” and “better” performance when loading websites, which has thrown into doubt which technologies (etc) might be becoming less relevant, while also drawing criticism over the steps that HTTP/2 does and does not take. Indeed, much of the back and forth regarding HTTP/2 gets into the corporate-political world of IT… (ehh).
“In general, sites will be more secure and will load faster. The protocol does not bring any cardinal changes, possibly because such changes are more difficult to implement—both technically and politically. This is why HTTP/2 will possibly not serve us that long.
The world of technology is evolving more rapidly each year, so we might need something else in a few years. My personal hope is that the next protocol will be more flexible, and braver in meeting the challenges of changing technologies.” — Lexy Mayko, Sitepoint
Just like the issue of TTFB, many of the “technical” arguments are based in theory, while others are based in practicality. But at the end of the day, practicality ALWAYS wins when it comes to website performance (much to the chagrin of IT nerds everywhere), as it’s the customers who are both judge and jury of a website’s long-term success.
Ultimately, HTTP/2 is a great improvement to the way that websites (etc) are able to send data to visitors — and that is truly a fact on which pretty much everyone agrees; much of the defensiveness comes in as various companies now attempt to re-assert their relevance in the “performance” space — with a special nod to CDNs.
Below is a brief breakdown of the biggest issues/controversies involved with the release of HTTP/2:
1. Who the heck invented HTTP/2? Unlike past versions of HTTP, which were largely a coordinated (and slow) effort between members of the IETF, HTTP/2 was designed quickly and almost entirely by Google, growing out of their open-source SPDY protocol. While beggars can’t be choosers, early critics of HTTP/2 such as Poul-Henning Kamp (the man behind the popular Varnish cache accelerator) complained that the HTTP/2 release was hurried and did not allow much input from parties besides Google, and that it is perhaps not as forward-thinking or simplified in design as it should be (more of an HTTP 1.2 than a 2.0, etc). Initially, SPDY also used GZIP-style header compression over SSL, which turned out to be a serious security risk (exploited by the CRIME attack), so a new header-compression method called HPACK was developed for HTTP/2 as a solution. Although Google (and some other big players) initially wanted to “require” an SSL connection in order for HTTP/2 to function properly, this was ultimately not enforced by the IETF; in practice, though, SSL is still almost always required, as major browsers like Chrome and other web services have decided to deliver data over HTTP/2 only when an SSL connection is present.
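To see why HPACK sidesteps the compression problem, it helps to look at its core idea: instead of GZIP-compressing full header text (which leaked information via compressed sizes), common headers are replaced by small integer indexes into a shared table. Below is a toy sketch of that static-table idea using a few real entries from the HPACK spec (RFC 7541); it is an illustration only, not a spec-compliant codec.

```python
# Toy illustration of HPACK's static-table concept (RFC 7541):
# well-known (name, value) pairs are sent as small integers instead
# of being re-compressed on every request. Simplified sketch only.

# A few real entries from the RFC 7541 static table (61 entries total).
STATIC_TABLE = {
    2: (":method", "GET"),
    3: (":method", "POST"),
    7: (":scheme", "https"),
}
INDEX_OF = {pair: idx for idx, pair in STATIC_TABLE.items()}

def encode(headers):
    """Replace known (name, value) pairs with their static index."""
    return [INDEX_OF.get(h, h) for h in headers]

def decode(encoded):
    """Expand indexes back into full (name, value) pairs."""
    return [STATIC_TABLE[x] if isinstance(x, int) else x for x in encoded]

request = [(":method", "GET"), (":scheme", "https"), ("x-custom", "abc")]
wire = encode(request)   # [2, 7, ('x-custom', 'abc')]
assert decode(wire) == request
```

Real HPACK also maintains a dynamic table and uses Huffman coding for literals, but the indexing principle above is what lets it compress headers without the size-leak problems of GZIP-over-TLS.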
2. Are CDNs still relevant? Much of the CDN debate began with an article posted by ThemeFoundry back in 2014, in which Zack Tollman “showed” that ditching MaxCDN helped their Australian website load a bit quicker. Of course, this was before MaxCDN had implemented SPDY support, not to mention the inherent latency issues of serving Australia and New Zealand. Still, the word was out, and with increased education about HTTP/2’s capabilities (specifically multiplexing, i.e. downloading multiple website resources at the same time over a single connection, without resorting to domain sharding), more bloggers and developers began to question the need for a CDN in the last few years. It’s no wonder that the likes of Akamai, MaxCDN, and KeyCDN (etc) quickly produced overviews earlier this year explaining why CDNs are still valuable in the post-HTTP/2 world.
In short, HTTP/2 does NOT eliminate the need for (or benefits of) a CDN, as domain sharding was never the only benefit of CDN technology; other major benefits, such as off-loading resources from your origin server and reducing network latency, are still very relevant (take a look at the top 100 Alexa websites if you don’t believe me). If anything, multiplexing will work even BETTER than it did under SPDY now that HTTP/2 has added multi-host multiplexing abilities!
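A quick back-of-envelope sketch shows why multiplexing matters, and also why it doesn’t erase latency (which is where CDNs still earn their keep). The numbers below are illustrative assumptions, not benchmarks: HTTP/1.1 browsers typically open about six parallel connections per host and queue the rest, while HTTP/2 streams every request concurrently over one connection.

```python
# Illustrative latency arithmetic (assumed numbers, not a benchmark).
import math

RTT_MS = 100     # assumed round-trip time to the origin server
RESOURCES = 30   # assumed number of assets on the page

def http1_time(resources, connections=6, rtt=RTT_MS):
    # Each of ~6 connections works through its queue serially:
    # roughly one RTT per asset in its queue.
    return math.ceil(resources / connections) * rtt

def http2_time(resources, rtt=RTT_MS):
    # All streams share one multiplexed connection: roughly one RTT
    # in the ideal case (ignoring bandwidth and TCP slow start).
    return rtt

print(http1_time(RESOURCES))  # 500 ms of queueing latency
print(http2_time(RESOURCES))  # 100 ms
```

Note that the multiplexed case still pays the full round-trip time, which is exactly the number a CDN shrinks by moving content closer to the visitor; multiplexing and edge caching solve different halves of the problem.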
3. Speed tests (etc) are out of date. Saving an exhaustive breakdown of HTTP/2 features for another blog post, it is important to acknowledge that, because of the release of HTTP/2, most website “audit” or “speed” tests online are now out of date. In other words, many of the suggestions and recommendations that these tools display NO LONGER APPLY to websites being delivered over the HTTP/2 protocol. This is probably one of the BIGGEST issues with the transition to HTTP/2 that nobody seems to be mentioning, and it is going to cause LOTS of headaches for several years to come. From image sprites to file concatenation, there are several “old-fashioned” optimization tricks for HTTP 1.X that are not only irrelevant now, but can actually harm your website’s performance if HTTP/2 is indeed enabled. Hopefully, these types of auditing tools and apps roll out a “choose your HTTP version” feature ASAP; otherwise it’s going to lead to months and years of arguments and misunderstandings between web hosts, web designers, SEO consultants, and beyond.
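The concatenation point is worth making concrete. Bundling scripts made sense under HTTP/1.x because each extra file cost a round trip; under HTTP/2, the round trips are cheap, but editing one file still invalidates the browser’s cached copy of the entire bundle. The sketch below uses made-up file names and sizes to show the cache-invalidation cost.

```python
# Sketch of why file concatenation can backfire under HTTP/2
# (hypothetical asset names and sizes, for illustration only).

files = {f"module{i}.js": 20_000 for i in range(10)}  # 10 files x 20 KB

def bytes_redownloaded_bundled(files, changed):
    # One concatenated bundle: changing any file changes the bundle,
    # so returning visitors must re-download everything.
    return sum(files.values())

def bytes_redownloaded_individual(files, changed):
    # Separate files over HTTP/2: only the changed file is re-fetched;
    # the other nine stay cached.
    return files[changed]

print(bytes_redownloaded_bundled(files, "module3.js"))     # 200000
print(bytes_redownloaded_individual(files, "module3.js"))  # 20000
```

A one-line tweak to a single module costs returning visitors ten times the bytes when everything is concatenated, which is why this old “optimization” can actively hurt an HTTP/2 site.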
In the world of consumer technology, it is extremely common for fanboys and/or “gangs” of various allegiances to trash-talk others, if only in a pseudo-polite-yet-smug way (think, Matt Cutts). When it comes to HTTP/2, there are so many viewpoints because so many developers and companies have so much to prove: CloudFlare has to emphasize their service as more important than CDNs, then CDN companies have to stress their ongoing importance for performance (above), and Varnish cache has to criticize HTTP/2 for not inviting them to the “party” (among other examples… and that’s not even getting into the disturbing conspiracy theories that HTTP/2 is yet another form of government surveillance).
At the end of the day, pretty much every major service in the “stack” (from DNS, to server, to cache) is still relevant in the post-HTTP/2 world; the only real difference is that cheap HTTP 1.X tricks and dirty frontend “hacks” are no longer required to fool web browsers into loading your website more quickly!