All in the <head>

– Ponderings & code by Drew McLellan –

– Live from The Internets since 2003 –


XMLHttpRequest for The Masses

12 December 2004

With the advent of Google Suggest it seems the industry has deemed client-side XML HTTP ready for prime time. The technology is nothing new, of course, and has been part of every server-side developer’s standard toolkit for years, but whilst some browsers have maintained support for XML HTTP for a few years, it’s only recently that support has become widespread enough to utilise.

Interestingly enough, the XMLHttpRequest is not part of any public standard. The W3C DOM Level 3 ‘Load and Save’ spec covers similar ground, but you know how long these things take to get implemented. At the time of writing, if you need to use XML HTTP from a user agent, then the XMLHttpRequest object is the only way you can do it.

So what is XML HTTP?

The idea itself is very simple. By using JavaScript, a web page can make requests to a web server and get responses in the background. The user stays on the same page, and generally has no idea that script running on the page might be requesting pages (using GET) or sending data (using POST) off to a server behind the scenes.
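To make that concrete, here’s a minimal sketch of how a page might do it. Internet Explorer creates the object via ActiveX, whilst Mozilla and Safari 1.2 provide a native XMLHttpRequest, so we feature-test for each. The function names and callback here are invented for illustration:

```javascript
// Create the request object. IE 5/6 expose it as an ActiveX object;
// Mozilla and Safari 1.2 provide a native XMLHttpRequest.
function createRequest() {
    if (typeof XMLHttpRequest != 'undefined') {
        return new XMLHttpRequest();
    }
    if (typeof ActiveXObject != 'undefined') {
        return new ActiveXObject('Microsoft.XMLHTTP');
    }
    return null; // no support - fall back to a normal page round-trip
}

// Fetch a URL in the background and hand the response text to a callback.
function fetchInBackground(url, callback) {
    var req = createRequest();
    if (!req) return false;
    req.onreadystatechange = function () {
        // readyState 4 means the response is complete
        if (req.readyState == 4 && req.status == 200) {
            callback(req.responseText);
        }
    };
    req.open('GET', url, true); // true = asynchronous
    req.send(null);
    return true;
}
```

In an unsupported browser `fetchInBackground` simply returns false, so the page can fall back to an ordinary form submission.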

This is useful because it enables a web developer to change a page with data from the server after the page has already been sent to the browser. In a nutshell, this means that the page can change based on user input, without the server having had to pre-empt and pre-load that data when the page was generated.

Example: Google Suggest. There’s no way Google can have any idea what you might be about to search for, so when you start typing, JavaScript in the page sends the letters off to the server and gets back a list of suggestions. If the JavaScript wasn’t able to talk to the server, the page would have to have been created by the server initially to hold every single search term you might type – which would obviously be impractical!

So what can we use this for?

Obviously, the technology has a place in creating a better user experience in terms of user interface. That is not its primary use, however. Its primary use is reducing load on the server.

That may sound mad considering that, in the example of Google Suggest, an additional request is made to the server with every letter you type. Surely that increases the load by an order of magnitude, no? Well, no. Each ‘suggestion’ list served by Google is very inexpensive to produce. It’s not particularly time sensitive, and the server isn’t computing anything to get you there – it’s simply giving you a snapshot of a search phrase list sorted alphabetically and then by PageRank (or something similar, but it’s just a list).

Consider what happens if you don’t make a selection from the list and you keep typing. You perform your search and get back 6 million results – no problems there. But if you made a typo, Google’s just had to retrieve 6 million results that you have no need for. You retype and get the 6 million results you’d originally hoped for.

Now consider what happens when you do pick a selection from the list. Well, the first thing is that the list doesn’t contain typos, so that problem is eliminated straight off. Presuming that you clicked on the item you intended to select, you get a perfect set of results first time. More importantly, however, you get a set of results that Google’s already got: because you pick from the list, there’s no need to re-run the search.

An example

Take the example of searching for information on Britney Spears’ undergarments. I might think of searching on “Britney Spears knickers”, but if I see “Britney Spears panties” in the list, then I’m going to go ahead and select that instead. If Google already have a search for “Britney Spears panties” cached (and believe me, they do) then the reduction in load on the server for retrieving that search versus performing a new search on the uncached “Britney Spears knickers” is significant.

Side note: I checked both of the above search terms, and yes, they both appear in the list. I’d like to say I’m shocked, but amused will have to suffice. Interestingly, this proves the suggestions aren’t more than a simple list lookup (i.e. they’re not intelligent), as a search for “Margaret Thatcher panties” yields no such suggestions.

So, another example of how this technique reduces load. Say that you have a couple of select lists on your page, and in a drill-down style the user’s selection in the first list determines the options available in the second. There are two traditional ways to do this. The simple method, if the number of permutations for the second list isn’t too great, is to pre-load all the options as arrays when the page is built. Of course, for many applications this simply isn’t practical, as the permutations are either too numerous or too lengthy to preload into the page at build time. In this case the second option is to have JavaScript post the entire page back to the server after the first selection is made, so that on the reload the server can build the second list with the appropriate options.

Now, this practice, especially the second option, doesn’t sound too heavy on the server. Consider the case, however, where the page containing the two select lists holds a much larger form with all sorts of data that has to be retrieved from a database and processed for display. Some of the other data might involve complex calculations or enormous queries that are very expensive to run. Consider also the amount of work involved in a simple post-back of a large form. All the data has to be re-read from the post and written back into the form in case the user is part-way through completing it.

By utilising XML HTTP to fetch the options for the second list behind the scenes, you not only make the experience a little more slick for the user (no page reloads), but you also reduce the load on the server as it doesn’t have to rebuild that page.
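As a sketch of the client side of that drill-down, the second list can be populated from a background response. The value|label line format and helper names here are assumptions for illustration – a real implementation might just as well return XML and read it from responseXML:

```javascript
// Parse a response of "value|label" lines into an array of option pairs.
// The line-based format is an assumption for this example.
function parseOptions(text) {
    var options = [];
    var lines = text.split('\n');
    for (var i = 0; i < lines.length; i++) {
        if (lines[i] === '') continue; // skip trailing blank line
        var parts = lines[i].split('|');
        options.push({ value: parts[0], label: parts[1] });
    }
    return options;
}

// Clear a select element and fill it with the parsed options.
function populateSelect(select, options) {
    select.options.length = 0; // remove any existing entries
    for (var i = 0; i < options.length; i++) {
        select.options[i] = new Option(options[i].label, options[i].value);
    }
}
```

Wired to an onchange handler on the first list, a background request for something like a hypothetical /options.php?parent=… script replaces the full page round-trip entirely.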

Unifying front- and back-end processing

Another neat trick you can pull with this technology is having the server carry out any tricky processing that has until now been left to the client.

Take the example of input validation. In modern web apps, all validation of user input is typically performed twice – once on the client with JavaScript, and once at the server in case the client-side process failed. What this means is that the validation routines have to be written twice and maintained in two places. That’s OK if it’s just a case of checking that the user has entered their surname, but if the process is any more complex than that (consider evil numbers with checksums etc), you’re into writing two bits of code to do the same thing.

If you use XML HTTP to tackle this problem, any complex validations with their checksums and wotnot can be posted off to the server during the validation routine, and the server can check the input using its own process (the same process it will use to re-check it a fraction of a second later) and spit back a result. And that, as they say, is magic.
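A sketch of the client end of such a check. The /validate.php URL, the field name and the literal “ok” response are all invented for illustration – the point is simply that one server-side routine answers both this background check and the re-check on submit:

```javascript
// Build an application/x-www-form-urlencoded body for the POST.
function buildBody(field, value) {
    return encodeURIComponent(field) + '=' + encodeURIComponent(value);
}

// Post a single field to a hypothetical server-side validation script
// and hand a true/false verdict to the callback.
function validateOnServer(field, value, callback) {
    var req;
    if (typeof XMLHttpRequest != 'undefined') {
        req = new XMLHttpRequest();
    } else if (typeof ActiveXObject != 'undefined') {
        req = new ActiveXObject('Microsoft.XMLHTTP');
    } else {
        return; // no XML HTTP support - the server re-checks on submit anyway
    }
    req.onreadystatechange = function () {
        if (req.readyState == 4 && req.status == 200) {
            // Assumed convention: the server replies "ok" on success
            callback(req.responseText == 'ok');
        }
    };
    req.open('POST', '/validate.php', true);
    req.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    req.send(buildBody(field, value));
}
```

Because the fallback is simply to do nothing, browsers without support degrade gracefully to the normal submit-and-re-check cycle.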

So what does it all mean?

Well, now that Google are at it, every bugger’s gunna want it. We should prepare ourselves for an onslaught of badly implemented assisted searches with filthy client-side code the likes of which have not been seen since some utter twit figured it’d be a nice idea to create drop-down navigation using JavaScript. If it’s not on DynamicDrive already, give it a few days.

For us, it means that we should be reading up, trying it out and adding the technique to our ever-expanding toolbox o’tricks. XML HTTP is a useful device and certainly one that it pays to be aware of, especially when it comes to reducing the amount of work your servers have to do. What it’s not is some big revolution that’s going to change the way we build web apps. It will help us build better ones, but you may never notice.

- Drew McLellan

Comments

  1. § Chris Vincent: I just posted an article on this last night. I predict that it won’t be long until we start seeing this technique used in complex web applications, where traditionally every manipulation of data requires a page refresh. There are benefits for both user experience and bandwidth with this new way of doing things.
  2. § Colly: Prepare ourselves for an onslaught of badly implemented assisted searches? I take it you’ve used my Live Search then, Drew!

    Anyway, good overview. I’m also enjoying the rise in popularity of the word “twit”. Excellent.
  3. § andrew: I really enjoyed reading this; I hadn’t really thought about XMLHttpRequest from that perspective before. Now to get my super mad wicked fresh auto-suggest-for-anything-typed-anywhere plugin started.
  4. § Gavin Terrill: Its primary use is reducing load on the server

    I’ve been using this technique for quite a while to populate drop down lists based on other selections on the page (as you discussed), so I would say this technology primarily facilitates dynamically loading portions of a page in the background.

    Have to disagree about the performance aspect though. The performance difference between getting a list or computing a result on the back end for someone like Google would be relatively trivial, whereas the overall load (including network infrastructure) will increase due to the extra volume of requests and network traffic you are generating.
  5. § Anne: XMLHttpRequest is part of the SVG 1.2 specification if I’m not mistaken. (Don’t ask me why they put it there, it sucks, I know.) WHATWG has also embedded it in a draft: 10.2. Scripted HTTP: XMLHttpRequest.
  6. § Jeff Minard: Yeah, this XMLHttpRequest thing is pretty awesome.
  7. § Dave Child: Unfortunately, the XML HTTP Request object isn’t supported by Opera. I’ve got a similar system at work on my site though, providing “live” search results, that doesn’t use the XML HTTP Request object, so giving wider support.
  8. § Krijn: Luckily this will also be supported in Opera’s next version
  9. § Turnip: Has anyone thought about the implications this may have on our privacy? This technology gives client-side applications the ability to interact with a server without the page being reloaded. I don’t doubt that the less, shall we say, “kind” ad vendors will start to exploit this in various ways to churn off even more graphs and statistics about who clicked/hovered/looked at what, when, and which ads are most effective.

    I’m not saying this is a bad technology (it clearly has a lot of great potential), I’m just wary about how it will be used.
  10. § Bryan: I’ve been using this in combination with Textile for all the back-end systems here at the firm. Fewer questions from lawyers when they can see what they are typing formatted in real time in front of them.
  11. § Jeff Minard: Turnip: Possibly – but those kinds of things could already have been done. Collect the data while the person is on the page, and when they click on a link, intercept the onclick and add the collected information as a GET item to the URL. Same results – slightly more obvious.

    I think that the benefits will far outweigh the pitfalls.
  12. § Max Milan: New user experience through XMLHttpRequest? Try this one: http://map.search.ch (click to zoom, then drag)
  13. § ghola: I am also concerned about anything that goes on behind my back, so my first question, knowing (almost) nothing about XMLHttpRequest but what I read here, is: Can I turn it off? Do I have to turn off Javascript altogether?

    Of course the possibilities seem very interesting, but I’m paranoid. If anyone needs me I’ll be searching the web for more info for the next few hours.
  14. § Jason G: GOOGLE LIES! This page has nothing to do with Britney Spears’ underwear!

    Actually, since I have seen Google Suggest, I have been thinking of ways I could use this at work in an accessible manner. Meaning, I don’t want to give the users with Javascript enabled so much more than those without it that it would diminish the usefulness/purpose of the application.

    I see great potential for making more rich web applications using this.
  15. § Drew McLellan: Jason – as always, it depends on your audience. The greatest benefits are going to be seen for web apps where it’s reasonable to specify JavaScript as a requirement. On a public-facing web site, it may not be so much of a possibility as with, to take an example from my current project, a web-based software product.

    I’d like to see developers taking all the advantages of Macromedia’s RIA concept and actually making it work with standard, usable technologies.
  16. § Marcus Tucker: Great post Drew, insightful as always. Interesting links in people’s comments too.

    :)
  17. § MH: “I’d like to see developers taking all the advantages of Macromedia’s RIA concept and actually making it work with standard, usable technologies.”

    Hear, hear!
  18. § John Dowdell: >> “I’d like to see developers taking all the advantages
    >> of Macromedia’s RIA concept and actually making
    >> it work with standard, usable technologies.”
    >
    > Hear, hear!

    Well, you’re welcome to do so… the more good sites with varied technologies, the better. Go for it! What project do you have in mind…?

    Regards,
    John Dowdell
    Macromedia Support
  19. § Marc: I haven’t been using XMLHttpRequest for anything USEFUL, but I’ve been using it for a couple of years (originally starting w/ ASPTear, a system dll that does the same thing) to make my own selfish life easier…

    I have a dozen local web pages set up to pull content (HTML scraping?) from other sites so I can read everything in one place. Basically, I have made my own huge comics pages (about 120 comics from different sources). It’s cheating, I know, but it started out as a real project and just kind of degenerated into selfish gratification.
    I know that XMLHttpRequest has practical uses, but it’s also tons of fun for hacks…
  20. § Seth House: Jason, Drew,
    I also would like to see more about this and accessibility implications. Just when it was starting to look like accessibility was gaining popularity and JavaScript menus were losing it. Doh!
    Most designers will ignore intended audience and use it for the wow factor, I’m assuming.
  21. § MG: So is this what Dunstan Orchard is doing on his website (http://1976design.com/blog/) where he does the "LiveSearch: search the archives"? I have always thought this was so cool.
  22. § David Schontzler: XMLHttpRequest is part of the SVG 1.2 specification if I’m not mistaken. (Don’t ask me why they put it there, it sucks, I know.)

    The SVG group seems to be the only one trying to improve things, so they are the ones on top of things enough to put it in at least some spec.
  23. § Jesse Sherlock: To those of you concerned over privacy or “things going on in the background”, that’s a valid concern; however, that kind of JavaScript has been possible long before XMLHttpRequest, as alluded to in the beginning of the article. I’ve been coding webapps with XMLHttpRequest-like features for quite a while, as have many others. Just use a hidden iframe, “refresh” it in the background with a new source, have the server return some XML, and read it out of the iframe. Referred to commonly, AFAIK, as remote scripting. Very hacky compared to XMLHttpRequest, which is (will be?) a godsend to web application developers, but it works.

    That being said, I’m sad that google has stolen my thunder. No more wowing customers (“Hey, how’d you do that without a page refresh, that’s slick, I love it, here’s a big fat cheque”)
  24. § M.J.Milicevic: “I’d like to see developers taking all the advantages of Macromedia’s RIA concept and actually making it work with standard, usable technologies.”

    Well, at Backbase we are doing just that. E.g. check the source code of:
    http://www.heinekenmusic.com/
    -m
  25. § Bill Brown: I’ve just used this in an application. From what I can see, it doesn’t work in IE5/Mac. Other than that, it’s great.
  26. § Mad: I should hope “Margaret Thatcher panties” returned no result! shudder
  27. § M. Schopman: I am currently finishing up a ColdFusion based CMS with a full RIA interface, featuring immense use of xmlHttp for DHTML widgets and background operations. It took a long time before people started to realise its potential; this technique has been available for over a year now.

    Still, 99.999% of web applications reload entire datasets into HTML tables when only one row or cell has changed. xmlHttp has given me the ability to update only that row, instead of reloading the entire page again.
  28. § Ashley Portman: Pretty good. Thanks for the great post… I found it very helpful, as always.
  29. § John Dowdell: > http://www.heinekenmusic.com/

    Hmm… the front page says, “This broadband site is optimised for IE 6 for Windows and Mozilla 1.4 for Mac,” yet the inside seems to use four or five SWFs. Is there special JavaScript which limits the audience, or…?

    (btw, anyone have a matrix of the current levels of XMLHttp support across various browser brand/version/platform mixes?)

    jd/mm
  30. § Eric: Great post, and I was pleasantly surprised that many of my favourite sites already use it, e.g. Binary Bonsai. Can’t wait to use it someday.
  31. § Sarath Chandra: A good article; these days I am seeing more and more articles on using this technique.

    Like the first comment says, we already use this feature in our Enterprise Web Applications to provide a wonderful user experience, where users get a VB (or a client/server app) kind of experience without the normal page refreshes.

    Previously it used to work only on IE, but it is good to note that Firefox has implemented capability for XMLHttp Requests.
  32. § Jim: The W3C DOM Level 3 ‘Load and Save’ spec covers similar ground, but you know how long these things take to get implemented. At the time of writing, if you need to use XML HTTP from a user agent, then the XMLHttpRequest object is the only way you can do it.

    This is incorrect. The latest Firefox supports DOM 3 Load and Save (not sure how long that’s been there), and so do the latest Opera betas. Konqueror also seems to have an (incomplete) implementation.
  33. § Drew McLellan: Jim – that’s useful information. Thanks for contributing.
  34. § Chris: How does this interact with screenreaders (such as JAWS) that are used by blind users? Refreshing a small bit of text in the middle of a page might not be detectable by assistive technology and therefore not accessible.
  35. § Q: I know I’m probably missing it, but how is this more advantageous than using the old hidden frame trick to make an http request, then parsing out the resulting innerHTML? Is there something XMLish that we can utilize in Javascript to increase processing speed?
  36. § MH: I’ve just recently (since this article was posted) implemented this for a component of our web app at work. Definitely not a “wow-factor” application of the technique, but a serious usability-booster for a page that needs realtime updating without flicker. :)
  37. § MH: Q:

    It’s very similar to the hidden frame method, but much more flexible. You have access to status codes, header information, and the response as either XML or text. I may be wrong, but I don’t think you can get all that with the hidden frame method.
  38. § nitestorm: I have been using this technique for about 3 months now. I use it in every app I build now, for almost everything. Login pages, counters, loading arrays, chat… anything.

    it’s magic :-)

    The speed is incredible compared with page refreshes. Also I can get back only the “value”, like 0 or 1, which in itself makes everything much much faster.
  39. § design: I was looking for an example and I found it,
    great, http://www.google.com/webhp?complete=1

    Thanks for an excellent article.
  40. § Habazard: Unfortunately in using XMLHttpRequest you will have to do a browser detect to determine how to create the object. However, I just implemented a real time behind the scenes UPS shipping calculator on a shopping cart page.
  41. § Drew McLellan: Habazard – it’s not necessary to browser detect, you can simply test for support of each particular object.
  42. § mcharper: Used this on my client’s intranet cos they only use IE. I like the way it makes an app feel more like a traditional client/server app, reducing page refreshes. I first discovered it in an MSDN article about “one-page web applications” here
  43. § mcharper: Sorry, that article is here.
  44. § Bob Sawyer: Just ran across this article after doing a search for XMLHttpRequest. I just implemented a series of dropdown menus that are populated from our database using this method. Seems to work everywhere except for Mac IE5. More here.
  45. § Graham: I wish to pose an interesting question.

    XMLHttpRequest is great and all, but I challenge somebody to this problem I have been experiencing:

    Create an XMLHttpRequest that populates its data into an HTML select list in JavaScript, then try to change the selectedIndex to “2” underneath it – it won’t let you. It will change, but then defaults back to -1 by itself. I believe that when it goes to make the open call and then select the entry in the drop-down list, it doesn’t realise the select list has been populated yet, and therefore selects null. If I add a simple alert(‘test’); above this line, giving everything time to populate, it would then populate when I hit OK. If anybody DOES figure out a solution to this, could you please e-mail me on graham at resonline dot com dot au
  46. § p: Be careful with this one.

    Anyone who has worked with Microsoft’s special style filters might have come across this: if a user disables ActiveX components, Microsoft’s style filters don’t work.

    It looks like the XMLHttpRequest object is one of the ActiveX objects too. I’d strongly recommend testing web applications that are going to use the XMLHttpRequest object with ActiveX disabled. I believe anything higher than (and possibly including) Medium security in MSIE will block ActiveX components.

    It’s been a while since I used MSIE so I can’t recall the specific settings.
  47. § mcharper: I’ve done some quick tests for this, results here
    (sorry, thought you had trackback)
  48. § Danny: I was just futzing around and bumped into your article.

    I’ve actually been using this method since about April ‘04. In fact, as Sr. Developer at my previous co. I mandated its usage.

    I think that intensive web applications not using the XML request are intrinsically less capable of massive adoption without seeing massive overhead. I’ve done some benchmarking, and concluded that in one of our applications (roughly 28,000 unique users per hour) we were able to decrease server load and bandwidth by over 87%. That is STAGGERING! We figured that had we used traditional methods, we would have needed at least one more server to handle the load, and our client would have been paying roughly 8k a month in bandwidth costs…

    Since that time I’ve used it in thin clients for network hardware devices, hundreds of sites, dozens of applications… I thank God every day for it…
  49. § Shane Witbeck: I have put together an AJAX informational web site.
  50. § Arun Kumar: Does XMLHttpRequest work the same way in all browsers, and are the syntax and method/function names the same in all browsers?

    Some time back I had trouble getting a keystroke on a page. In IE it’s done with event.keyCode, and for the Mozilla class of browsers it’s event.which.

    Would I have similar problems like these with XMLHttpRequest?

    Regards
    Arun Kumar
  51. § Doug: These guys have taken it to the extreme—
    http://www.backbase.com/

    Nice components.
  52. § Laurens Holst: Arun Kumar: yes. Although to a lesser degree.
  53. § Rodrigo: The code on the following page will allow everyone to use XMLHttpRequest on Opera < 7.60 (beta).
    It’s an implementation of the object in JavaScript & Java.

    http://www.scss.com.au/family/andrew/webdesign/xmlhttprequest/

    regards,

    Rodrigo
  54. § Xuan: Just started getting into this tech. It does seem very neat indeed. Just wondering if anyone knows how to implement this with an array?
    Cheers,
  55. § Graham: Ever since I wrote a post on this website I have been receiving spam – I actually realised this by doing a search on Google for my email address. Is there any way somebody could remove my previous post, please?
    Although I fear it may be too late :(
  56. § Drew: Graham – I’ve obscured your email address so that spambots shouldn’t keep picking it up. Have you learned your lesson? :)
  57. § Graham: Yes I have learnt the error of my ways! I will never do it again sobs

    hehe, thanks for that :)
  58. § M@: I have done a lot of work with XML web services and am seeing that this would be the perfect complement to the kind of work I am doing. I have been disappointed with Microsoft’s ASP.Net development environment. They have (in my opinion) resorted to attempting to dumb down the developer by having them build forms and letting the engine render the HTML, and to make things worse they use pure hacks to implement session state and interactivity with the ViewState and PostBack concepts.

    With XMLHTTPRequest on the client side and XML web services on the server side, developers will finally have a straightforward method for consuming web services that doesn’t require any additional controls on the client side. I think this is going to be a big deal. Well, if I have a say it will be anyway.

    Great article.

    M@
  59. § costy:

    I really enjoyed reading this; I hadn’t really thought about XMLHttpRequest from that perspective before. Now I’m coding an auction script using this technology.


About Drew McLellan


Drew McLellan (@drewm) has been hacking on the web since around 1996 following an unfortunate incident with a margarine tub. Since then he’s spread himself between both front- and back-end development projects, and now is Director and Senior Web Developer at edgeofmyseat.com in Maidenhead, UK (GEO: 51.5217, -0.7177). Prior to this, Drew was a Web Developer for Yahoo!, and before that primarily worked as a technical lead within design and branding agencies for clients such as Nissan, Goodyear Dunlop, Siemens/Bosch, Cadburys, ICI Dulux and Virgin.net. Somewhere along the way, Drew managed to get himself embroiled with Dreamweaver and was made an early Macromedia Evangelist for that product. This led to book deals, public appearances, fame, glory, and his eventual downfall.

Picking himself up again, Drew is now a strong advocate for best practices, and stood as Group Lead for The Web Standards Project 2006-08. He has had articles published by A List Apart, Adobe, and O’Reilly Media’s XML.com, mostly due to mistaken identity. Drew is a proponent of the lower-case semantic web, and is currently expending energies in the direction of the microformats movement, with particular interests in making parsers an off-the-shelf commodity and developing simple UI conventions. He writes here at all in the head and, with a little help from his friends, at 24 ways.