How To Make Your Website Fast

16 March 2012

Since launching the new Greenbelt website this week, one thing a lot of people have commented on and asked about is the speed. It’s noticeably fast. I’ve never heard someone complain that they really like a site, but goshdarnit if only it weren’t so fast. Fast beats slow, every time. So we wanted to make it fast from the outset.

That said, boring doesn’t beat fast, and neither does not reflecting the brand. Greenbelt Festival is a buzzing, vibrant event. It also has a strong identity developed by our designer on this project, Wilf Whitty at Ratiotype. A craigslist-style non-design wasn’t going to cut it, no matter how fast that may be. As a result, the site is full of big photos and iconography, even video and audio. And it’s still fast.

I don’t even remotely claim to be any sort of authority on the subject, but I can tell you what worked for us.

Start with good hosting

It frequently surprises me how little some designers and developers appear to care about the quality of their hosting. They’ll spend days, weeks, months crafting a site and then launch it onto $3 per month crappy shared hosting.

It should go without saying that if you’re paying $3 per month for hosting, that hosting is going to be over-sold. Putting networked hardware in data centres, keeping it cooled, powered and staffed costs quite a lot of money. Simple economics dictate that if you’re not paying very much money for that service, then the hosting company are going to have to make it up on volume. That means lots of customers per server – probably more customers per server than will be acceptable if you care about the response time of your website.

A reasonable rule of thumb is that shared hosting will not be fast. If you care about speed you need to think about a virtualised server (VPS-style, cloud or traditional) which has CPU and RAM resources reserved for it, not in contention with other customers. If you want more grunt, a dedicated server is a good option.

The Greenbelt site is on a dedicated server with Memset, whose data centres are located here in the UK, geographically close to the majority of the site’s traffic. As a straightforward PHP and MySQL site with reasonably predictable traffic and no need to scale up at the drop of a hat, there’s insufficient benefit to using dynamically provisioned cloud hosting. Just a good quality, reasonably priced, solid dedicated box with a high quality, reliable hosting company. Not glamorous, just smart.

Cache it all the way

I’ve become a massive fan of Varnish of late. It’s an HTTP cache (or reverse proxy) that sits on port 80 in front of your web server. If the web server’s response is cacheable, it keeps a copy in memory (by default) and serves it up the next time that same page is requested. Done right, it can dramatically reduce the number of requests hitting your backend web server, whilst serving precompiled pages super-fast from memory.

Good use of Varnish can make your site much faster; however, it is no silver bullet. The caveat “if the web server’s response is cacheable” turns out to be a very important one. You really need to design your site from the ground up to use a front end cache in order to make the best use of it.

As soon as you’ve identified the user with a cookie (including something like a PHP session, which of course uses cookies), the request will hit your backend web server. Unless configured otherwise (as we have), that would include things like Google Analytics cookies, which of course would mean every request from any JavaScript-enabled browser. If you serve static assets (images, CSS, JavaScript) from the same domain, by default the cache will be blown on those, too, as soon as a cookie is set. So you have to design for that.
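
To give a flavour of what designing for it means in practice, here is a minimal PHP sketch of the general idea (not the Greenbelt code; the cookie name and cache lifetime are invented for the example): only start a session once you know you need one, and tell the cache explicitly what it may keep.

    <?php
    // Front-controller sketch: keep anonymous traffic cookie-free and cacheable.
    // The cookie name and the five minute lifetime are illustrative only.
    if (isset($_COOKIE['member_session'])) {
        // Identified user: personalised output, so don't let the cache keep it.
        session_start();
        header('Cache-Control: private, no-cache');
    } else {
        // Anonymous request: no cookie gets set, and the front end cache
        // is free to hold the response for five minutes.
        header('Cache-Control: public, max-age=300');
    }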

So while Varnish will help to take the load and shorten response times on common pages like your site’s front page, you can’t rely on it as an end-all solution for speeding up a slow site. If your backend app is slow, your site will still be slow for a lot of requests.

It’s a bit like putting WP Super Cache on a WordPress site. It will mask the issue to an extent, but it won’t solve the underlying problem.

Your CMS or app has to be fast

The Greenbelt site runs on a custom CMS. The details of why (people always ask, as if it were heretical for developers to, you know, write their own code) are probably best saved for another post.

When developing the CMS, I set a target time for each page to be compiled, and had the code time itself and output the result at the bottom of the page. Working locally on a MacBook Pro, the builds would obviously be significantly slower than on the production web server, but the key is the relative speed between pages. On my dev system, I wanted to have a regular page build in less than 0.01 seconds, and only to go above that if absolutely necessary for complex pages. The front page – an important one for speed – builds in around 0.003 seconds.

By constantly outputting the build time, I was able to keep track of the implications of every bit of code as I was writing it – which is absolutely the best time to fix any issues.
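
The timing itself is trivial. Something along these lines (a sketch rather than the actual CMS code) is all it takes:

    <?php
    // Record the time at the very top of the front controller...
    $build_start = microtime(true);

    // ...build and output the page as normal...

    // ...then report how long it took, tucked into an HTML comment at the
    // bottom of the page so it shows up in view-source but not on screen.
    printf('<!-- page built in %.4f seconds -->', microtime(true) - $build_start);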

The general approach is the same as taken in Perch – do as much of the work at edit time as possible. When an author writes a blog post using Textile markup, we translate it to HTML and store it that way too. When a content-based page is published, we compile it against its templates, and store a copy of each region as HTML. At runtime, we just perform a simple query to retrieve the precompiled parts and assemble them into an otherwise dynamic page.
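
As a rough sketch of the edit-time approach (the table and column names are invented, and it assumes the standard PHP Textile class has been included), the Textile-to-HTML conversion happens once when the post is saved, leaving only a simple query at runtime:

    <?php
    // Assumes classTextile.php (the PHP Textile library) has been included.

    // At save time: keep the author's Textile source, but also store the
    // rendered HTML so the expensive conversion happens only once.
    function save_post(PDO $db, $id, $textile_source) {
        $textile = new Textile();
        $html = $textile->TextileThis($textile_source);
        $stmt = $db->prepare(
            'UPDATE posts SET body_textile = ?, body_html = ? WHERE id = ?'
        );
        $stmt->execute(array($textile_source, $html, $id));
    }

    // At runtime: no parsing at all, just fetch the precompiled HTML.
    function get_post_html(PDO $db, $id) {
        $stmt = $db->prepare('SELECT body_html FROM posts WHERE id = ?');
        $stmt->execute(array($id));
        return $stmt->fetchColumn();
    }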

Anything that happens at runtime that is expensive to produce and doesn’t need to be bang-up-to-date gets cached for at least an hour. That includes things like search facet displays from Solr, the latest tweet from Twitter, blog post listings. If the result of an action is likely to be the same the next time it’s performed, do it once and cache it for a while. As I said, cache it all the way.
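
The caching wrapper for that sort of thing can be very simple. Something like this file-based sketch (the function and the feed URL are invented; the real site’s cache layer will differ) covers the do-it-once, keep-it-for-an-hour case:

    <?php
    // Return a cached value if it's fresh enough, otherwise produce and store it.
    function cached($key, $ttl, $produce) {
        $file = sys_get_temp_dir() . '/cache_' . md5($key);
        if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
            return unserialize(file_get_contents($file));
        }
        $value = $produce();  // the expensive bit: Solr, the Twitter API, etc.
        file_put_contents($file, serialize($value), LOCK_EX);
        return $value;
    }

    // Usage: hit the (placeholder) tweet feed at most once an hour.
    $tweet = cached('latest-tweet', 3600, function () {
        return file_get_contents('http://example.com/latest-tweet.json');
    });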

Optimise the front end

The majority of the time between the user requesting a page and it finishing loading is spent not at the server, but in the web browser. Entire books have been written about optimising the load time of your pages. All I can say is read them and implement all the advice that applies to you. It’s not ultra-nerdery for bored front end engineers; this stuff actually works.

Some key tools I found useful were Google Page Speed for monitoring and testing, and the Network panel in WebKit’s Web Inspector tools. I used Dustin Diaz’s script.js for asynchronous JavaScript loading, which I found to be much faster in practice than Steve Souders’ ControlJS, although not without a few bugs in older IEs.

I combined most of my JavaScript and minified it using YUI Compressor as a build option on the server. I found that the gains from minifying CSS (just a few kilobytes) weren’t worth the loss of line numbers when debugging.
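
For reference, the combine-and-minify step can be as simple as concatenating the files and shelling out to the YUI Compressor jar (the filenames and jar version here are placeholders, not the site’s actual build script):

    <?php
    // Concatenate the individual scripts, then minify the result.
    $sources = array('js/global.js', 'js/nav.js', 'js/gallery.js');
    $combined = '';
    foreach ($sources as $src) {
        $combined .= file_get_contents($src) . ";\n";
    }
    file_put_contents('js/combined.js', $combined);

    // YUI Compressor is a Java jar; -o names the minified output file.
    exec('java -jar yuicompressor-2.4.7.jar -o js/combined.min.js js/combined.js');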

All the site’s images are managed by a new Media Management System, which I’ll write about another time. Those are all served from subdomains (m1 – m4.greenbelt.org.uk) and are handled by nginx rather than the Apache 2 server that handles the PHP page requests. The routing of the requests to different backend servers is handled in the Varnish configuration.

Other shared static resources (like a copy of jQuery and script.js itself) are served from another subdomain, again through nginx and Varnish.

Why the subdomains? Despite the requests ending up on the same server, the subdomains help increase the number of resources the browser will download in parallel, as browsers limit connections on a per-domain basis. I may have gone a bit overboard on the subdomains, truth be told, but this one site is part of a larger system of sites and apps, and it serves a broader purpose.
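
One detail worth getting right with subdomain sharding: each asset should always map to the same subdomain, otherwise the browser ends up caching the same file twice under two URLs. A small helper like this (invented for illustration; only the m1 – m4 subdomains come from the real site) keeps the mapping consistent:

    <?php
    // Pick a consistent subdomain (m1-m4) for a given asset path.
    function asset_url($path) {
        $shard = (abs(crc32($path)) % 4) + 1;
        return 'http://m' . $shard . '.greenbelt.org.uk/' . ltrim($path, '/');
    }

    echo '<img src="' . asset_url('images/2012/mainstage.jpg') . '" alt="">';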

Depending on the width of your browser window, Page Speed ranks the front page at around 92%. The points are docked (and change) due to the ‘scaled images’ rule. The rule says you shouldn’t have larger images that are scaled down in your HTML. Instead you should scale the images first and display them at 100%. As this site has a responsive layout, the images scale to fit at any window size, so that rule is a red herring in this case.

Follow the rules as best you can, but remember it’s fine to ignore ones that simply don’t apply.

To conclude

That was a lot of words to explain what I hope is a simple point. There’s no silver bullet to making a slow site fast. You must take a holistic approach. High performance runs the entire way through from the hardware it’s hosted on, through the app that builds the pages, to the server software that delivers the pages and the front end code that displays them in a browser. Speed is a feature that you must design, not just a bit of configuration done at the end.

- Drew McLellan

Comments

  1. § Marcus Greenwood:

    Hi Drew,

    Thanks for this. Really excellent advice and great job with the Greenbelt website. I would like to emphasise the points about combining your scripts and stylesheets. This is really easy to do for (almost) any site and makes a ton of difference immediately. Adding additional subdomains to allow parallel downloading is also a nice and easy one but in my experience, this is more of an art than a science. Be prepared to do lots of testing and monitoring to work out the best configuration.

    Another simple tip I would add: Make sure your domain name TTL settings are NOT 0 (or “automatic” or “default” for that matter) – surprisingly this is the standard setting for many DNS providers. This causes a user’s browser to query the DNS for every single HTTP request which can contribute 100-200ms to every request, sometimes even for cached resources.

    Finally, Google Page Speed, YSlow etc are great, but these avoid one key measurement – the overall page load time and how this compares to other websites. 2 other tools I regularly use that give a more holistic and true sense of page load speed are the Pingdom Load Time Tester and WebPageTest. See here:
    http://tools.pingdom.com/fpt/#!/jn96SlRsr/www.greenbelt.org.uk
    http://www.webpagetest.org/result/120316_K8_3KVYT/

    cheers
    Marcus

  2. § Moodh:

    About minimizing the CSS, why not? Simply have a normal one in your dev or stage environment, while minimizing on the live site. Enables less data to be sent while keeping the debugging stuff when you develop.

  3. § Drew McLellan:

    Moodh – it just didn’t save enough to be worth it. All you can really do with CSS is strip out comments (which we do already with our preprocessor) and minimise whitespace. The latter really doesn’t make much difference – <1k on most of our files.

  4. § Moodh:

    Drew: You could remove empty rulesets, unused rulesets, replace 0px with 0, remove the last ;, merge margin-left, margin-right etc to margin and so on, I’d say that every kb is worth it in a production environment. For our pages there’s up to 5kb gain on each minified CSS file. :)

    New users won’t have anything cached, so naturally their first load will be a huge timesink compared to the following. Doing everything possible to enhance the speed for new visitors increases the conversion rate from new users to returning users.

  5. § Florian Schroiff:

    Very informative!
    Not 100% sure what you are saying under “Cache it all the way”; that section is a bit vague. Are you saying that unless you serve your static assets from a different domain, using Google Analytics will break your cache?

  6. § Ted Goas:

    I really enjoyed reading this, Drew.

    I especially enjoyed reading your bit about web hosting. So many of the #webperf articles I see focus on the front-end and server settings (which are important, yes). But to your point, all the work can be done in vain if it’s placed on a slow server.

    I also had no idea that Perch has such strong roots in site performance.

    Thanks again!

  7. § Adrian Westlake:

    Good article Drew.

    It’s easy to ignore site performance when ploughing lots of pretty CSS3 and jQuery into your site. I am currently working on a team rearchitecting a site which gets 5 million visitors a month. Speed is not only good for user experience, but it’s good commercially. Every millisecond saved translates directly into profit. One useful thing to note is that older browsers will be a lot slower. IE7 can only download 2 parallel items at once, and has a much slower rendering engine than the latest Chrome, for example. If it’s quick on IE7, then it should be super fast on Chrome.

    While Page Speed does give you some good guidance, don’t be too hung up on the numbers. What I find a much better measurement is page loading time. Google Analytics has this measurement, and you can get load times in Firebug and Web Inspector.

    Remember not everyone has a superfast development machine like you.

  8. § Josh:

    Thanks Drew, lots of great tips and reading recommendations here. As a Perch user and appreciator, I’m always curious to read about your thought process.

  9. § Harsh:

    Great Article Drew, I enjoyed the article. You have included some great tips here with sincerity. The front-end optimization recommendations are really helpful.

  10. § David M:

    @Florian Schroiff
    I believe what he was saying is that if your webpage has something that creates a cookie for each user (like Google Analytics code, or PHP Sessions in your web app) it will inflate the size of each resource request header sent by the browser.

    Why? Because the cookie data is in every HTTP header. And worse, that cookie data has to travel in both directions. The typical home user has an upload speed that is much, much slower than their download. Any extra data they need to send up to your site is going to slow them down too.

    For example, if your cookie is small (100 bytes) and the browser needs to request 100 static items, that’s an extra 10,000 bytes of bandwidth that is most often unnecessary. All this wasteful use of bandwidth COULD negate any potential gains from caching.

    But all of this depends on factors like: the number of cookies your site sets, the size of each cookie, how many static resource requests the browser needs to send to load the page, etc.

    Google recommends putting static content on a separate ‘cookie-less’ domain. This could also be accomplished with sub-domains (Someone please correct me if I’m wrong). But be sure your cookie(s) isn’t set for the root domain: eg. set the cookie to http://www.example.com…. not http://example.com.

    https://developers.google.com/speed/docs/best-practices/request#ServeFromCookielessDomain

  11. § Jonny N:

    Great to hear about how you’ve developed the new greenbelt site .. I’d also be really interested in:

    - what tools you’ve used to do the ‘responsive’-ness, i.e. grids, less/scss(?), things like that

    - what ‘build’ tasks you’re using to fix up your front-end resources on the server side (apart from YUI compressor)

    J.

