
How To Make Your Website Fast

Since launching the new Greenbelt website this week, one thing a lot of people have commented on and asked about is the speed. It’s noticeably fast. I’ve never heard someone complain that they really like a site, but goshdarnit if it weren’t so fast. Fast beats slow, every time. So we wanted to make it fast from the outset.

That said, boring doesn’t beat fast, and neither does not reflecting the brand. Greenbelt Festival is a buzzing, vibrant event. It also has a strong identity developed by our designer on this project, Wilf Whitty at Ratiotype. A craigslist-style non-design wasn’t going to cut it, no matter how fast that may be. As a result, the site is full of big photos and iconography, even video and audio. And it’s still fast.

I don’t even remotely claim to be any sort of authority on the subject, but I can tell you what worked for us.

Start with good hosting

It frequently surprises me how little some designers and developers appear to care about the quality of their hosting. They’ll spend days, weeks, months crafting a site and then launch it onto $3 per month crappy shared hosting.

It should go without saying that if you’re paying $3 per month for hosting, that hosting is going to be over-sold. Putting networked hardware in data centres, keeping it cooled, powered and staffed costs quite a lot of money. Simple economics dictate that if you’re not paying very much money for that service, then the hosting company are going to have to make it up on volume. That means lots of customers per server – probably more customers per server than will be acceptable if you care about the response time of your website.

A reasonable rule of thumb is that shared hosting will not be fast. If you care about speed you need to think about a virtualised server (VPS-style, cloud or traditional) which has CPU and RAM resources reserved for it, not in contention with other customers. If you want more grunt, a dedicated server is a good option.

The Greenbelt site is on a dedicated server with Memset, whose data centres are located here in the UK, geographically close to the majority of the site’s traffic. As a straightforward PHP and MySQL site with reasonably predictable traffic and no need to scale up at the drop of a hat, there’s insufficient benefit to using dynamically provisioned cloud hosting. Just a good quality, reasonably priced, solid dedicated box with a high quality, reliable hosting company. Not glamorous, just smart.

Cache it all the way

I’ve become a massive fan of Varnish of late. It’s an HTTP cache (or reverse proxy) that sits on port 80 in front of your web server. If the web server’s response is cacheable, it keeps a copy in memory (by default) and serves it up the next time that same page is requested. Done right, it can dramatically reduce the number of requests hitting your backend web server, whilst serving precompiled pages super-fast from memory.

Good use of Varnish can make your site much faster; however, it is no silver bullet. The caveat “if the web server’s response is cacheable” turns out to be a very important one. You really need to design your site from the ground up to use a front end cache in order to make the best use of it.

As soon as you’ve identified the user with a cookie (including something like a PHP session, which of course uses cookies), the request will hit your backend web server. Unless configured otherwise (as we have), that would include things like Google Analytics cookies, which, of course, means every request from any JavaScript-enabled browser. If you serve static assets (images, CSS, JavaScript) from the same domain, by default the cache will be blown on those, too, as soon as a cookie is set. So you have to design for that.
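By way of illustration, here’s roughly the shape of VCL that handles this. The file extensions and the cookie regex are illustrative, not our exact configuration:

    sub vcl_recv {
        # Static assets are never personalised, so ignore any cookies
        # the browser sends and serve these from cache.
        if (req.url ~ "\.(css|js|png|gif|jpg)$") {
            unset req.http.Cookie;
        }

        # Strip Google Analytics cookies (__utma and friends) so that
        # analytics alone doesn't force every request to the backend.
        if (req.http.Cookie) {
            set req.http.Cookie = regsuball(req.http.Cookie, "__utm[a-z]+=[^;]+(; )?", "");
            if (req.http.Cookie == "") {
                unset req.http.Cookie;
            }
        }
    }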

So while Varnish will help to take the load and shorten response times on common pages like your site’s front page, you can’t rely on it as an end-all solution for speeding up a slow site. If your backend app is slow, your site will still be slow for a lot of requests.

It’s a bit like putting WP Super Cache on a WordPress site. It will mask the issue to an extent, but it won’t solve the underlying problem.

Your CMS or app has to be fast

The Greenbelt site runs on a custom CMS. The details of why (people always ask, as if it were heretical for developers to, you know, write their own code) are probably best saved for another post.

When developing the CMS, I set a target time for each page to be compiled, and had the code time itself and output the result at the bottom of the page. Working locally on a MacBook Pro, the build times would obviously be significantly slower than on the production web server, but the key is the relative speed between pages. On my dev system, I wanted a regular page to build in less than 0.01 seconds, and only to go above that if absolutely necessary for complex pages. The front page – an important one for speed – builds in around 0.003 seconds.
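The timing itself is nothing clever. As a rough sketch of the technique (not the CMS’s actual code), it’s little more than a pair of microtime() calls:

    <?php
    // At the very top of the script, before any work happens.
    $build_start = microtime(true);

    // ... query, template and assemble the page ...

    // At the very bottom, output the elapsed build time.
    printf('<!-- built in %.4f seconds -->', microtime(true) - $build_start);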

By constantly outputting the build time, I was able to keep track of the implications of every bit of code as I was writing it – which is absolutely the best time to fix any issues.

The general approach is the same as taken in Perch – do as much of the work at edit time as possible. When an author writes a blog post using Textile markup, we translate it to HTML and store it that way too. When a content-based page is published, we compile it against its templates, and store a copy of each region as HTML. At runtime, we just perform a simple query to retrieve the precompiled parts and assemble them into an otherwise dynamic page.
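To sketch the idea, here’s roughly what happens when a post is saved. textile_to_html() stands in for whichever Textile implementation you use, and the table and column names are made up for illustration:

    <?php
    // Store the Textile source (for future edits) and the compiled
    // HTML (for display) side by side.
    function save_post(PDO $db, $id, $textile_source)
    {
        $html = textile_to_html($textile_source);
        $sql  = 'UPDATE posts SET body_textile = ?, body_html = ? WHERE id = ?';
        $db->prepare($sql)->execute(array($textile_source, $html, $id));
    }

At runtime, displaying the post is then just a SELECT of body_html: no parsing, no templating.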

Anything that happens at runtime that is expensive to produce and doesn’t need to be bang-up-to-date gets cached for at least an hour. That includes things like search facet displays from Solr, the latest tweet from Twitter, and blog post listings. If the result of an action is likely to be the same the next time it’s performed, do it once and cache it for a while. As I said, cache it all the way.
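The pattern is simple enough to sketch. This isn’t our exact code; the file-based store and the helper names are illustrative:

    <?php
    // Run an expensive callback at most once per $ttl seconds,
    // serving a cached copy of the result in between.
    function cache_for($key, $ttl, $callback)
    {
        $file = sys_get_temp_dir() . '/cache-' . md5($key);
        if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
            return unserialize(file_get_contents($file));
        }
        $result = $callback();
        file_put_contents($file, serialize($result));
        return $result;
    }

    // e.g. fetch the latest tweet at most once an hour.
    $tweet = cache_for('latest-tweet', 3600, function () {
        return fetch_latest_tweet(); // hypothetical helper: however you talk to Twitter
    });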

Optimise the front end

The majority of the time between the user requesting a page and it finishing loading is spent not at the server, but in the web browser. Entire books have been written about optimising the load time of your pages. All I can say is read them and implement all the advice that applies to you. It’s not ultra-nerdery for bored front end engineers; this stuff actually works.

Some key tools I found useful were Google Page Speed for monitoring and testing, and the Network panel in WebKit’s Web Inspector tools. I used Dustin Diaz’s script.js for asynchronous JavaScript loading, which I found to be much faster in practice than Steve Souders’ ControlJS, although not without a few bugs in older IEs.
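Usage is pleasantly simple. A hypothetical example (the subdomain and filename here are made up):

    // $script is the global function that script.js provides.
    $script('//s1.greenbelt.org.uk/js/jquery.min.js', function () {
        // jQuery is available once this callback fires.
        $(document.body).addClass('js');
    });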

I combined most of my JavaScript and minified it using YUI Compressor as a build step on the server. I found that the gains from minifying CSS (just a few kilobytes) weren’t worth the loss of line numbers when debugging.
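For the curious, that build step amounts to something like this. The filenames and version number are illustrative:

    # Combine, then minify with YUI Compressor.
    cat lib.js plugins.js site.js > combined.js
    java -jar yuicompressor-2.4.7.jar --type js combined.js -o combined.min.js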

All the site’s images are managed by a new Media Management System, which I’ll write about another time. Those are all served from subdomains (m1–m4.greenbelt.org.uk) and are handled by nginx rather than the Apache 2 server that handles the PHP page requests. The routing of the requests to different backend servers is handled in the Varnish configuration.
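In Varnish 2/3 syntax, that routing looks something like this. The hosts and ports are illustrative:

    backend apache {
        .host = "127.0.0.1";
        .port = "8080";
    }

    backend nginx {
        .host = "127.0.0.1";
        .port = "8081";
    }

    sub vcl_recv {
        # Media subdomains go to nginx; everything else is PHP on Apache.
        if (req.http.host ~ "^m[0-9]\.greenbelt\.org\.uk$") {
            set req.backend = nginx;
        } else {
            set req.backend = apache;
        }
    }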

Other shared static resources (like a copy of jQuery and script.js itself) are served from another subdomain, again through nginx and Varnish.

Why the subdomains? Despite the requests ending up on the same server, the subdomains help increase the number of resources the browser will download in parallel, as browsers limit concurrent connections on a per-domain basis. I may have gone a bit overboard on the subdomains, truth be told, but this one site is part of a larger system of sites and apps, and it serves a broader purpose.

Depending on the width of your browser window, Page Speed ranks the front page at around 92%. The points are docked (and change) due to the ‘scaled images’ rule, which says you shouldn’t serve images larger than their display size and scale them down in your HTML. Instead, you should scale the images first and display them at 100%. As this site has a responsive layout, the images scale to fit at any window size, so that rule is a red herring in this case.

Follow the rules as best you can, but remember it’s fine to ignore ones that simply don’t apply.

To conclude

That was a lot of words to explain what I hope is a simple point. There’s no silver bullet to making a slow site fast. You must take a holistic approach. High performance runs the entire way through from the hardware it’s hosted on, through the app that builds the pages, to the server software that delivers the pages and the front end code that displays them in a browser. Speed is a feature that you must design, not just a bit of configuration done at the end.