Web Performance in a Nutshell: HTTP/2, CDNs and Browser Caching

Erik Witt
Published in Speed Kit Blog · 10 min read · Mar 24, 2017


Successful websites need to be fast, scalable and secure. In this article we survey the state of the art of high-performance websites, in particular SSL encryption, HTTP/2, CDNs and browser caching. We’ll cover the major performance optimizations and show how we did it in our setup.

Building a static website today is easier than ever. There are so many great tools you can use to build a stunning landing page, blog or whatever you want. With static site generators like Jekyll, Hugo, Octopress or Hexo you can even take this to the next level and build your site with much less hassling over HTML, CSS or JavaScript. GitHub Pages, for example, are rendered with Jekyll.

So almost anyone can build a great website today. There is one last step before becoming rich and famous, though: You need to host the site somewhere on the internet to make it accessible to everyone. That’s going to be just as easy, right? Well, not exactly.

Example of a static website hosted in central Europe and loaded from the US.

Putting the site on a server and setting up a domain for it is simply not enough these days. For a great website to be successful, it needs to excel in three things: speed, scalability and security.

State-of-the-Art Hosting

Especially with static sites, these requirements are not all that hard to achieve — in theory. But let’s take a look at what needs to be done in practice.

HTTPS (SSL/TLS Encryption)

I know what you are thinking: Do I really need SSL for my website? The simple answer is: You do!

HTTPS not only protects sensitive information like passwords, payment data or simply the user’s identity, but also provides authentication, so users can be confident they are surfing the legit website. HTTPS further ensures integrity, meaning no one can alter your content by, say, injecting ads without your permission. SSL simply builds trust. Browsers acknowledge that by highlighting secure websites or, in the case of Google Chrome, labeling non-HTTPS websites as Not Secure. Also, in all current browsers, HTTPS is mandatory for HTTP/2, and you really want that for speed, but more on that later. Furthermore, Google boosts your search ranking if your site uses HTTPS.

Browsers highlight secure websites which builds trust.

Regarding the setup, things have become much easier with the Let’s Encrypt project. It issues certificates for free, and there is tooling to fully automate domain authorization and certificate signing. But even with Let’s Encrypt, you still need to figure out how to install certificates on your server (or CDN), make sure the certificates are renewed on time (they expire every 3 months) and probably enforce their usage by configuring HSTS headers to prevent protocol downgrade attacks.
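For illustration, here is a minimal sketch of how such a setup could look with Certbot in front of an nginx server; Certbot, the nginx plugin and example.com are assumptions for this example, not part of our setup. The last command checks that the HSTS header is actually delivered (the shown value is just an example):

> # obtain and install a certificate via the nginx plugin
> sudo certbot --nginx -d example.com -d www.example.com
> # verify that automated renewal works (Let's Encrypt certificates expire after 90 days)
> sudo certbot renew --dry-run
> # check that HSTS is enforced
> curl -sI https://example.com | grep -i strict-transport-security
strict-transport-security: max-age=31536000; includeSubDomains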

But HTTPS is not only notoriously hard to set up from scratch, it also affects website performance if you don’t take precautions:

  1. Encryption is always hard on the CPU and TLS encryption will certainly add some overhead to your server-side processing. Keep that in mind for server capacity planning. An autoscaling server infrastructure that monitors throughput, deals with load spikes and ensures availability will be discussed below.
  2. The initial TLS handshake adds 2 additional round trips to the TCP handshake, which itself already needs 2, resulting in a 4-round-trip handshake. Depending on the distance between client and server, this can amount to more than a second just to set up the first connection. To speed up the handshake, make sure your server supports TCP Fast Open as well as TLS False Start and Session Resumption; together, these will halve the handshake time (a quick check for Session Resumption is sketched after this list). There are even more considerations (e.g. dynamic packet sizing), of which Ilya Grigorik from the Google web performance team gives a great round-up.
  3. The most important optimization, though, is to use HTTP/2 with all its benefits. Getting rid of domain sharding and using a single HTTP/2 connection instead can already save you expensive connection setups.
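As promised in point 2, here is a rough way to check whether a server resumes TLS sessions, using OpenSSL’s s_client with its -reconnect option (example.com is a placeholder; the exact output depends on your OpenSSL and TLS version):

> # s_client reconnects to the same server several times and reports for each
> # connection whether the session was "Reused" (resumption works) or "New"
> openssl s_client -connect example.com:443 -reconnect < /dev/null 2>/dev/null | grep -E '^(New|Reused)'
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Reused, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Reused, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256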

In essence, HTTPS is required for customer trust and security, but even with state-of-the-art tooling it can slow down page loads and thus diminish customer satisfaction. Also, the complexity of setting everything up smoothly keeps many website providers from using HTTPS in the first place.

HTTP/2

If you have HTTPS, you definitely want HTTP/2 (also called h2). The second major protocol version of HTTP brings lots of optimizations and features that fix common HTTP/1.1 problems. It comes with server push to transfer multiple resources for a single request, header compression to drive down request and response sizes, and request pipelining and multiplexing to send arbitrary numbers of parallel requests over a single TCP connection. With server push, for example, you can push the CSS and JS right after shipping your HTML, without waiting for the actual requests.
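Checking whether a site is actually served over HTTP/2 is easy with curl, assuming a curl build with HTTP/2 (nghttp2) support; example.com is again just a placeholder:

> # request HTTP/2; curl silently falls back to HTTP/1.1 if the server does not support it
> curl -sI --http2 https://example.com | head -1
HTTP/2 200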

Multiplexing and pipelining make a huge difference, in particular for websites with many assets such as images or videos. Take a look at this live comparison as well as the following example:

An example showing the impact of pipelining and multiplexing in HTTP/2.

You can see how HTTP/1.1 is limited to 6 concurrent connections per host, each of which can only load a single resource at a time. With HTTP/2, in contrast, all resources are fetched in parallel over a single connection.

But even HTTP/2 cannot trick physics and thus page load times are ultimately driven by the latency of each individual request. The only way to decrease that latency is to bring data closer to your clients. This brings us to the topic of caching.

Content Delivery Networks (CDNs)

In a nutshell, CDNs are globally distributed cache infrastructures that store mostly static data in various locations. If a user accesses a CDN-backed website, she connects to the nearest edge location via Geo-DNS or IP Anycast and gets all cached resources from there. Using this nearby copy of the data greatly reduces request latency and therefore page load time.

How CDNs decrease the distance between data and clients globally.
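You can observe this routing with dig. With Geo-DNS the answer depends on where the query comes from, while with IP Anycast the same address is announced from many locations and routed to the nearest edge. The host names and IPs below are made up for illustration:

> # resolve the site with the local resolver and with a different public resolver
> dig +short www.example.com
cdn-edge.example-cdn.net.
203.0.113.10
> dig +short www.example.com @1.1.1.1
cdn-edge.example-cdn.net.
198.51.100.25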

The CDN has to fetch all uncached (or even uncacheable) resources from the application server. One advantage of proxying all requests through the CDN is a faster connection setup for clients: because the CDN terminates HTTPS connections, a client only performs a fast TLS handshake with the nearby CDN edge server. The connection from the CDN to the origin server is again HTTPS and should be configured as a persistent connection to save expensive handshakes.

The CDN terminates SSL for fast handshakes and should use persistent backend connections.
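Whether a response actually came from an edge cache can usually be seen in the response headers. The exact header names differ from CDN to CDN (x-cache, cf-cache-status, x-served-by, …), so the following is only an illustrative sketch with made-up values:

> curl -sI https://www.example.com/logo.png | grep -iE 'via|x-cache|^age'
via: 1.1 edge-fra2.example-cdn.net
x-cache: HIT
age: 512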

Even though CDN edge servers are generally close by, a few tens of milliseconds per request can still add up to a considerable delay. So let’s take a look at caching on the next level of the web stack.

Browser Caching

The fastest request is the one you don’t make.

Browser caching is extremely efficient, especially for repeated page views and navigation within a website. Every request for a cached resource returns instantly and thus uses neither a connection nor any bandwidth.

The browser cache is an expiration-based cache that operates on cache-control headers. Every cacheable object needs a time-to-live (TTL), set by the server via the Cache-Control max-age directive. This value represents the time for which the cached entry is considered up-to-date. When the resource is requested after the TTL has expired, it has to be fetched from the original server again to make sure it is up-to-date. For versioned resources with an ETag or Last-Modified header, bandwidth is saved by transmitting a 304 Not Modified response instead of the actual resource whenever the version hasn’t changed.
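Sketched with curl against a hypothetical asset URL (the header values are made up), this is what the two cases look like on the wire: the first request returns the resource together with its caching metadata, the second one revalidates the cached copy and only receives a 304:

> # first request: full response including Cache-Control and ETag
> curl -sI https://example.com/assets/app.css | grep -iE 'cache-control|etag'
cache-control: public, max-age=3600
etag: "5d8c72a5edda8"
> # revalidation: present the known ETag and get 304 Not Modified without a body
> curl -s -o /dev/null -w '%{http_code}\n' -H 'If-None-Match: "5d8c72a5edda8"' https://example.com/assets/app.css
304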

The browser cache is great for storing assets like CSS, JavaScript, fonts, icons and even images, because they are slow to retrieve over the network and usually do not change often, if at all. Problems arise, however, when those assets change before their TTL expires, because then the cache returns stale resource versions. In order to cache these mutable assets, too, we have to use asset hashing: a hash of the asset’s content is appended to its file name, and all references to it are updated accordingly. If the asset changes, its name and references change as well, and the resource can be cached safely. The good news is that build tools like webpack can already do asset hashing for you.
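As a small illustration of what asset hashing looks like in practice (the file names, hashes and headers below are made up), the hash is part of the file name and of every reference, so the asset can be cached essentially forever:

> ls www/assets
app.3b2f1c9d.js    styles.8e41c27b.css
> # the HTML references the hashed file name, so a new build automatically changes the reference
> grep -o 'app\.[0-9a-f]*\.js' www/index.html
app.3b2f1c9d.js
> # hashed assets never change under a given name and can therefore get a very long TTL
> curl -sI https://example.com/assets/app.3b2f1c9d.js | grep -i cache-control
cache-control: public, max-age=31536000, immutable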

In summary, it is essential to use the browser cache correctly and for as many resources as possible. Configuring cache-control headers on the server side as well as setting up asset hashing can be quite complex, but it is an absolute must for competitive website performance.

Load-Balancing

The obvious worst case for a website is an overload situation during which the server has to drop user requests or even completely breaks down.

In order to cope with a high number of visitors or load spikes, we need our server infrastructure to scale horizontally.

Scalable backend setup with a load balancer distributing requests to stateless application servers.

The setup above shows a load balancer that uniformly distributes requests over a number of stateless application servers. Statelessness is key here, because the load balancer doesn’t need to handle sticky sessions and the servers do not have to communicate with each other. Furthermore, it ensures fault tolerance, since failed servers are detected by the load balancer and taken out of the rotation. Advanced load balancers can even use factors such as server capacity and response time to choose a server.

Stateless servers are also easy to scale dynamically depending on the load. To scale up, you simply start an additional server and register it with the load balancer. To scale down, the load balancer drains all active connections to a server before it is shut down. Combined with server resource monitoring, this dynamic scalability is the basis for autoscaling.
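How simple scaling stateless servers can be is best seen with a small sketch. The following assumes a docker-compose setup with a stateless app service behind a load-balancing proxy; the service name and replica counts are placeholders, not part of our setup:

> # add replicas of the stateless app service during a load spike
> docker-compose scale app=4
> # scale back down afterwards; the load balancer stops routing to the removed replicas
> docker-compose scale app=2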

Hosting Summary

To wrap things up, a website can be made fast, scalable and secure by combining these technologies:

  • HTTPS for security
  • HTTP/2 for performance and efficiency
  • CDN and browser caching for performance
  • Stateless servers, load balancing and autoscaling

Admittedly, this is quite a lot to deal with, if all you want is to bring your content online. So why not have someone else take care of all the hassle?

Our Setup

With Baqend, we tried to make setting up state-of-the-art web hosting for a static site as easy as possible. We are a spin-off from database and web performance research and heavily focus on performance and scalability. There is a short blog post that will walk you through the process using an example website.

Clone Your Website

If you want to try out how your website could perform in this setup, execute the following commands in your console (with wget installed). The example clones this blog post to speed up Medium, but you can replace the URL with your own website:

> npm install -g baqend
> baqend register
(your credentials...)
> wget -E -H -k -p -nd -P www https://medium.baqend.com/hosting-lessons-learned-6010992eb257
> mv www/hosting-lessons-learned-6010992eb257.html www/index.html
> baqend deploy
> baqend open

The commands will (1) install the Baqend CLI, (2) register a new account with your given credentials, (3) clone your website into your current directory, (4) rename the HTML file to index.html, (5) deploy the website to your free Baqend app and (6) open the website under the app’s default URL.

If your website isn’t too complex, this will clone and deploy your main page. In our test we cloned this Medium post to https://clone-test.app.baqend.com/. It makes quite a performance difference:

Original load time vs load time of cloned website on Baqend (loaded from the US).

If your website is fairly simple, you could actually use this as a quick way to speed up your landing page.

Obviously, you can optimize the cloned websites even more. The wget command isn’t perfect in this respect, after all. So let’s check out how fast websites in this setup actually get in the next section.

Performance Measurements & Comparison

There are many great tools that let you analyze your website’s performance, like WebPageTest, Google PageSpeed or GTmetrix. We like to use GTmetrix because of its extensive report features and good rendering performance. Running it from Vancouver, Canada (with our backend located in Frankfurt, Germany), we get an initial load time of 300–400 ms for our example website everymillisecondcounts.eu (see GTmetrix report). A more comprehensive overview is given by this timing table:

Timing for everymillisecondcounts.eu measured with GTmetrix from Vancouver.

For the second visit of the website, all resources except for the HTML come from the browser cache so that the page loads virtually in an instant.

To put Baqend’s hosting performance into perspective, we also tried a few other common configurations offered by other hosters:

All load times measured from Canada with GTmetrix.
  • With standard hosting on Amazon S3 in Frankfurt (Germany), the avg. load time from the US is 3.22 s with HTTP.
  • Adding SSL to the S3-hosted version, the initial load time climbs to 4.03 s.
  • Hosted on Baqend (also located in Frankfurt), the avg. page load time is cut down to 440 ms for HTTP.
  • With TLS encryption enabled, avg. page load time goes down even further to 350 ms because of the use of HTTP/2.

These performance improvements translate very well to more complex and even dynamic websites, because Baqend also caches dynamic data such as complex database queries — without any staleness for clients. For a live comparison with other providers, see our performance shoot-out:

The hosting setup discussed in this post optimizes network as well as backend performance. If you feel your frontend performance needs improvement, take a look at our other blog post on the most important web performance techniques:
