All in the <head>

– Ponderings & code by Drew McLellan –

– Live from The Internets since 2003 –


JSON All The Way

10 August 2006

I’m increasingly coming around to the realisation that JSON is pretty much the best way to consume external data within JavaScript. If you’re providing a web service or API that’s returning data in XML format, you really need to start offering a JSON output option if you want to encourage use of your service. It’s becoming essential.

Parsing XML in JavaScript is awkward and unpleasant. There are chunky inconsistencies between implementations in different browsers, and whilst you can abstract this away with a library (anyone know a good one for XML?), you can save yourself untold amounts of hassle by using JSON, as it is inherently native to the language.

Brief aside: if you’re not familiar, JavaScript Object Notation is a method of describing data structures such as arrays and objects and their contents in plain text. On receiving a chunk of JSON you can eval() it to recreate the data structure within your script – the objects become live objects and the arrays become arrays that can be read and written, and so on. You can read more about JSON.
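To sketch what that looks like in practice (the response string here is hypothetical example data, not from any particular API), a chunk of JSON becomes live objects with a single eval() call:

```javascript
// Hypothetical JSON text, e.g. the responseText of an XMLHttpRequest.
var responseText =
    '{"title": "JSON All The Way", "tags": ["json", "xml"], "year": 2006}';

// Wrapping the text in parentheses makes eval() treat it as an
// expression rather than a block statement.
var data = eval('(' + responseText + ')');

data.title;   // a live object property: "JSON All The Way"
data.tags[1]; // a real array, readable and writable: "xml"
```

No parsing code, no DOM walking: the data structure is just there.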

This morning I spent around an hour working with an API which, as far as I could see, only returned XML. I’d previously done quite a bit of work with Ajax and XML, so I’m pretty familiar with handling it, at least at a manual XHR level. In this situation, however, I was using the YUI libraries for the first time. For one reason or another, the response back from the server was an XML document as plain text, which I then needed to load into a DOM document. I’m not sure if it was that the YUI Ajax stuff doesn’t support DOM documents in some way, or whether it was because the response was text/plain rather than text/xml, but either way I needed to create a document and load it up from a string. Turns out this is one of those things that’s not only different across browsers’ XML implementations, but in some cases isn’t even complete.
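For context, a minimal sketch of what parsing an XML string into a DOM document involved at the time: a different, feature-detected API for each family of browsers.

```javascript
// A sketch of the cross-browser problem: there was no single
// standard way to parse an XML string into a DOM document.
function parseXmlString(text) {
    if (typeof DOMParser !== 'undefined') {
        // Mozilla, Opera, Safari et al.
        return new DOMParser().parseFromString(text, 'text/xml');
    }
    if (typeof ActiveXObject !== 'undefined') {
        // Internet Explorer
        var doc = new ActiveXObject('Microsoft.XMLDOM');
        doc.async = false;
        doc.loadXML(text);
        return doc;
    }
    throw new Error('No XML parser available');
}
```

And even once the document is parsed, querying it still varies between implementations; eval()ing JSON sidesteps the whole mess.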

After working myself into a frustrated mess involving DOM documents that looked like they were loaded but in truth were only sort of a bit loaded, I turned back to the API documentation to double-check that there wasn’t a JSON option. The docs had nothing, so I had a go at querystring hacking until I found that yes, indeed, it did support JSON; it was just that some nincompoop had neglected to document it. I ripped out my browser-specific XML hacks and eval()ed the JSON instead, and I was up and running with data in the page in literally a matter of minutes.

JSON is simple, effective and robust. It’s worth saying again: if you want people to hack on your APIs, roll out JSON support.

- Drew McLellan


  1. § Nick Fitzsimons:

    It’s also surprisingly easy to take existing XML and transform it to JSON using XSLT with the output method set to “text”. The server can simply take the existing XML and, instead of serialising it to the response, run it through the transformation and serialise the results of that. If it’s nice XML (i.e. not the kind that has comma-separated values in text nodes and suchlike shenanigans, and where element and attribute names are JavaScript-friendly) then a very simple XSLT file could work for every XML source on the site.

    Hmm, maybe that should be my presentation at BarCamp London next month…

  2. § Drew McLellan:

    That’s a great point, Nick, and I think it would make a really interesting presentation for BarCamp London.

  3. § Nick Fitzsimons:

    I’ve been dithering over three other ideas for BarCamp, but I reckon I’ll give this one a go. Come to think of it, I was discussing this very topic with a few people after the WSG meetup last month, so maybe there’s some demand :-)

  4. § Dustin Diaz:

    I’ve even gone as far as to hit up a proxy file between xml files and my callback using simplexml to receive json objects. It’s just easier to get the data you need that way.

  5. § Chris Heilmann:

    There’s a rather cool converter from XML to JSON: Badgerfish.

    The only danger of JSON is when you aren’t filtering for data exclusively and cannot trust the source (Yahoo is pretty safe). You can use Doug Crockford’s JSON parser, though, to avoid that problem.
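    To illustrate the safer approach Chris mentions: a sketch in the spirit of Crockford’s parseJSON, which uses regular expressions to check that the text contains only JSON-shaped tokens before handing it to eval(). (A simplified, hypothetical version for illustration, not the library code itself.)

    ```javascript
    // Validate-then-eval, in the spirit of Douglas Crockford's json.js
    // (a simplified sketch, not the actual library).
    function parseJson(text) {
        var cleaned = text
            // neutralise escape sequences...
            .replace(/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g, '@')
            // ...collapse strings, numbers and keywords to a marker...
            .replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g, ']')
            // ...and drop array openings.
            .replace(/(?:^|:|,)(?:\s*\[)+/g, '');
        // If anything other than JSON punctuation is left, refuse to eval.
        if (/^[\],:{}\s]*$/.test(cleaned)) {
            return eval('(' + text + ')');
        }
        throw new SyntaxError('Not valid JSON');
    }

    parseJson('{"name": "Drew", "safe": true}'); // returns a live object
    // parseJson('alert(document.cookie)');      // throws SyntaxError
    ```

    Anything that passes the check can’t contain a function call or assignment, so eval()ing it only ever builds data.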

  6. § Chris Winters:

    I think introducing E4X would help a lot with this—it seems to bridge the gap nicely.

    E4X Spec

    On wikipedia

  7. § Bram Stein:

    Some months ago I wrote just such an XSLT file as Nick Fitzsimons describes. You can find it on my website:
    It only works with XSLT 2, though. If you need an XSLT 1 version, have a look at Alan Lewis’s stylesheet:

  8. § Brian:

    I used to be in the JSON camp, but now that I’ve been getting down to the nitty gritty of my AJAX framework, it’s looking like JSON means well for data transport but just isn’t as robust as XML when you get down to it. The beauty of XML is that you can define rules for exactly how the data must be put together and parsed, using a DTD. Embedding HTML or just various text into a JSON object is a pain because you have to ensure that everything is properly escaped, and if not, have fun debugging eval’d code. With XML, it’s as simple as defining the data within any given element as non-parsable data. Also, you ensure data integrity by validating against a DTD. The sad part is that, as of now, IE is the only browser capable of validating on the fly.

    In the end though, they both have their place. JSON is suitable if you want to whip out a nice and simple project, but I think for more robust solutions XML is the way to go.
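    The escaping Brian describes isn’t too onerous in practice, though. A minimal sketch of quoting an arbitrary string (HTML included) for embedding in JSON, covering the common escape characters (control characters beyond these are ignored here for brevity):

    ```javascript
    // Quote a string so it can be embedded safely in a JSON document.
    function quoteJsonString(s) {
        return '"' + s.replace(/\\/g, '\\\\')
                      .replace(/"/g, '\\"')
                      .replace(/\n/g, '\\n')
                      .replace(/\r/g, '\\r')
                      .replace(/\t/g, '\\t') + '"';
    }

    var html = '<p class="intro">Say "hello"\nto JSON</p>';
    var json = '{"body": ' + quoteJsonString(html) + '}';
    var back = eval('(' + json + ')');
    back.body === html; // true - the markup round-trips intact
    ```

    As long as the escaping is done once, centrally, on the server, the client never has to think about it.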



About Drew McLellan


Drew McLellan (@drewm) has been hacking on the web since around 1996 following an unfortunate incident with a margarine tub. Since then he’s spread himself between both front- and back-end development projects, and is now Director and Senior Web Developer at in Maidenhead, UK (GEO: 51.5217, -0.7177). Prior to this, Drew was a Web Developer for Yahoo!, and before that primarily worked as a technical lead within design and branding agencies for clients such as Nissan, Goodyear Dunlop, Siemens/Bosch, Cadburys and ICI Dulux. Somewhere along the way, Drew managed to get himself embroiled with Dreamweaver and was made an early Macromedia Evangelist for that product. This led to book deals, public appearances, fame, glory, and his eventual downfall.

Picking himself up again, Drew is now a strong advocate for best practices, and served as Group Lead for The Web Standards Project from 2006-08. He has had articles published by A List Apart, Adobe, and O’Reilly Media, mostly due to mistaken identity. Drew is a proponent of the lower-case semantic web, and is currently expending energies in the direction of the microformats movement, with particular interests in making parsers an off-the-shelf commodity and developing simple UI conventions. He writes here at all in the head and, with a little help from his friends, at 24 ways.