
Web 2.0 and the death of the browser (as we know it)

Remember the BMW Isetta? It was basically a motorcycle trying to be a car: a miniature car-like body powered by a single-cylinder motorcycle engine. These days, when working on a web site, I somehow feel like I’m living in an Isetta world, like I’m being asked to design cars with motorcycle parts, asked to turn web sites existing within the confines of a browser universe into all-out desktop applications. Compared to the good old days, when web sites dressed and looked like web sites (as in more content and less functionality), today’s sites are becoming desktop Isettas: web sites trying to emulate something they simply are not and never will be, at least not as long as they remain within what we currently think of as a browser. The simple reason is that browsers, in their original incarnation, were never designed for anything like what many users are doing with them today; they were designed for the easy exchange of (scholarly) hyper-linked documents.

But of course the Internet allows for so much more than distributing dry academic fare, so we’ve been busy piling functionality onto our little single-cylinder browser, trying to make web pages mimic desktop apps: Microsoft with ActiveX, Macromedia with Flash, and Google and Yahoo! and just about everyone else with the cobbling together of Javascript and XML functions that we (thanks to fellow IA Jesse James Garrett) know as AJAX (a bare-bones sketch of which follows below, for the curious). What drives these efforts, of course, is that users (except for the likes of the scientists at CERN, where HTML was invented) never asked for browsers to work the way they originally worked. In fact, after the launch of the Mac in the mid 80s, the desktop paradigm became the gold standard for how non-geeks were meant to interact with computers. By the time the web entered mainstream computing about a decade later, anybody with a computer (*nix and DOS heads notwithstanding) was pretty much indoctrinated into a desktop world of files and folders, copying and pasting, dragging and dropping, and generally immediate responsiveness. So it’s no wonder that usability had to take center stage in the web world, trying to reconcile the unruly and awkward behavior of browsers with the far more familiar desktop.

And for the last 10 years, we’ve been patting ourselves on the back every time we make browsers work more like desktop apps, which is really to say that we’re working our butts off trying to preserve the status quo of ca. 1984. Yes, 1984, the year the Mac was released, because frightening as it may sound, we really haven’t moved much beyond the paradigms that were in place back then. To paraphrase former Silicon Graphics chief scientist Bill Buxton: if a Mac user fell into a coma in 1984 and woke up today, they’d pretty much be able to turn on a computer, recognize what they were looking at, and use it. That’s 20+ years of patting ourselves on the back for all kinds of user interface innovations, and what do we have to show for it? We’ve transformed web sites into desktop application emulators. Not exactly Insanely Great.

Ah, but we’ve done so much more than that, you say: leveraging the power of interconnectivity toward creating the Web 2.0 internet platform, creating new geographically independent relationships, making content (theoretically) universally findable. Yes, but do we have browsers to thank for that, or was that just the result of fiber optics and satellites?
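(For the curious, here’s roughly what all that AJAX plumbing boils down to. A minimal sketch only: the endpoint, the element id, and the polling interval are invented for illustration, but the pattern of fetching data in the background and patching it into the page without a reload is the essence of the technique.)

```typescript
// Minimal AJAX sketch: fetch data in the background and patch it into
// the page, desktop-app style, instead of navigating to a new document.
// "/api/inbox" and the "inbox" element are hypothetical placeholders.
function refreshInbox(): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/inbox");
  xhr.onreadystatechange = () => {
    // readyState 4 = request complete; status 200 = OK
    if (xhr.readyState === 4 && xhr.status === 200) {
      const target = document.getElementById("inbox");
      if (target) {
        target.innerHTML = xhr.responseText; // update one region only
      }
    }
  };
  xhr.send();
}

// Poll so the page feels "live" without the user clicking a thing.
setInterval(refreshInbox, 30_000);
```

That’s more or less it; everything else is this little pattern writ large, bolted onto an engine that was built to fetch documents.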
In other words, what have browsers done for us lately, except create a lot of jobs for people who have mastered (or pulled their hair out trying to master) the black art of web design? It’s a black art because building cars with motorcycle parts will remain a messy business: no matter how hard we might worship at the altar of the Zeldman Church (er, Synagogue…) of Web Standards, the browser will still be a browser and we’ll be stuck inside it. Unless of course Apple and Microsoft (or Google via Sun? or Google via Google…) got smart and decided to do away with the whole browser thing and just make the platform the browser (or the browser the platform, or whatever). No more Safari. No more IE. No more Firefox. Just connectivity. Period.

While Microsoft is busy turning its next OS, Windows Vista, into a beefed-up version of Google Desktop (and unabashedly skinning it to look like Mac OS X), Google and Yahoo! and other future thinkers are busy working on that thing we call Web 2.0, which I expect/hope/pray(?) will finally blow that 20-year-old dinosaur, the desktop paradigm, out of the water. I would venture a guess that the reason there’s been a lot of buzz surrounding a possible Google browser (Google has, for example, registered the domain name gbrowser.com), yet we’ve not seen them launch one, is that whatever they release will be more like a platform than a browser. When you fire up a future version of the Google Desktop, you fire up Gmail (as in the version they are working on now, which will eat Outlook for breakfast) and, I am hoping, something like a Google Office, where, just like your Gmail data, all your Word and Excel files run not in a local application but in a remote one, of which you are just temporarily consuming a single instance. All the data and all the applications exist on the gNet, as it were, so that it doesn’t matter what computer you’re using; you always have access to all the personal files that would normally live on your own machine.

No more downloading files or applications to your local computer. No more annoying notices that a new version of this software is available and would you like to download it, quit all your applications, and restart your computer to install it? (I think iTunes for Windows has released four such updates in the last month alone.) This concept isn’t new; Marimba and other technologies are based on exactly this paradigm. The big difference is that, with broadband and wireless becoming increasingly pervasive, the groundwork for making such a paradigm a reality on a broad scale has now been laid. There are of course major obstacles, such as privacy, trust, and security issues (after all, all your personal files would be living on someone else’s server), but I think the potential value of this model is so high that all such issues will be (or already have been) addressed. Web 2.0 should really be called Platform 2.0, because the future incarnation of the “browser wars” will likely be platform wars: Google versus Microsoft versus Yahoo!, with Mozilla and Apple in the mix somewhere as well.
Don’t get me wrong: this doesn’t mean the total extinction of what we currently think of as the desktop. It means that the paradigm has reached critical mass. You now have thousands upon thousands of individual files on your computer, to the point where keeping them organized by hand, and finding them by browsing to them, is becoming a tragicomical exercise for everyone except the most anally retentive users. I think it will be replaced by a browser-desktop hybrid paradigm, in which you still have content and pages but you don’t necessarily have to name everything or file everything in a specific location. Instead, your content will self-organize: a kind of personal folksonomy based on your behaviors, habits, and implicit preferences, as well as those of the social networks with which you (and your content) interact. In this future, I think the browser, like that little three-wheeled, single-cylinder Isetta, will be looked back on as a quaint stepping-stone: a pretty cool and useful thing that was simply not up to the task of being the vehicle for the Web 2.0 platform.
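(A postscript for the technically inclined: here’s a toy sketch of what “self-organizing” content might look like. Every name and weighting below is my own invention, not a description of any real system; the point is simply that the system accumulates tag weights from observed behavior, so that finding a document becomes ranking rather than browsing a folder tree.)

```typescript
// Toy sketch of an implicit, behavior-driven folksonomy (all names
// hypothetical). Instead of the user filing documents into folders,
// the system learns tag weights from how the user actually behaves.
interface Doc {
  id: string;
  tags: string[];
}

// Tag weights learned from behavior rather than declared by the user.
const tagWeights = new Map<string, number>();

// Every open, edit, or share nudges the weights of that doc's tags.
function recordInteraction(doc: Doc, strength = 1): void {
  for (const tag of doc.tags) {
    tagWeights.set(tag, (tagWeights.get(tag) ?? 0) + strength);
  }
}

// "Finding" becomes ranking by accumulated relevance, not browsing.
function rank(docs: Doc[]): Doc[] {
  const score = (d: Doc) =>
    d.tags.reduce((sum, t) => sum + (tagWeights.get(t) ?? 0), 0);
  return [...docs].sort((a, b) => score(b) - score(a));
}
```

Feed it a few weeks of opens and edits, and the stuff you actually care about floats to the top on its own, no filing required; social signals would just be more calls to recordInteraction driven by someone else’s behavior.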

Napoleon, usability pioneer

At a very fundamental level, user experience is about communication: between the user and the system, between content and objective. And with new sites constantly popping up, user experience is more and more becoming a one-shot deal. You either communicate and deliver what the user was looking for the first time around, or they’re off to the next site. (They almost inevitably arrived via Google, or maybe Yahoo!, which means they were taken directly to the page related to their search, not to the site’s homepage with its nice overview of the site. If your design doesn’t assume that users will jump directly to lower-level pages without ever seeing your homepage, you are not designing for The Web That Google Made…) You’ve probably got something like 30 seconds to make your content/objective connection. That means your message needs to be crystal clear to the lowest-common-denominator user. Of course, it won’t be for every user, but the hope is that it will be for a sufficiently high percentage of them.

Now, let’s imagine that you had to get your message through to all your users every time, and that failing to get it through could mean life or death. Ok, I plead guilty to suddenly switching gears to mission-critical systems, but I’ve always found that UX design for the web has a lot to learn from systems in which there is no room for mistakes. For all the praise given to people like Don Norman and Jakob Nielsen, it’s easy to forget that while the term ‘usability’ has only been around for, what, a couple of decades, user-friendly design has been around for about as long as there has been a need for stuff to be user-friendly. Which brings us to emperor and part-time usability pioneer Napoleon Bonaparte.

The emperor needed to get his war-strategy missives out to his troops not only fast but reliably, meaning he needed to ensure that the message he sent, as he understood it, was also the message that was received. Not an easy task back in the days before instant messaging and email, when delivering a message might take days or weeks. If the message was not fully understood by its recipient, well, there just wouldn’t be time to send back a follow-up question. You’ve got your orders, and you’d better know what they mean.

So Napoleon came up with a nothing-less-than-brilliant solution to his mission-critical message problem. He knew the officers receiving his missives might be farm boys brought up quickly through the ranks, of, well, let’s say, questionable intellect (at least when it came to war strategy). More to the point, he simply had no idea who the recipient of his message might be (not unlike sending messages through the Web) and therefore had to assume the worst. So, what better solution than to do a bit of message prototyping before sending it out? Armed with his usability instincts, Napoleon went out among his troops, grabbed the densest farm-boy soldier he could find, promoted him to Lieutenant on the spot, and made him part of his personal guard. Then, whenever he needed to send a message to his troops, the emperor would have it written up and presented to his Lieutenant, whom he would ask to explain the message back to him in his own words. If Lieutenant Knucklehead could understand it and explain it back, he figured even the densest of his officers out in the field would be able to get it. Last time I checked (except for that whole Russian winter thing), Napoleon’s usability testing model worked out pretty well.

Aside from the obviously elitist aspects of this approach, there is still much to be learned from the Napoleon School of Usability Testing. His was the ultra-efficient shotgun model of usability testing, in which you test and iterate with one edge-case user. It’s simple, and it’s fast. On a certain level, it’s not so much about ensuring the user-friendliness of your content as about getting a perspective as diametrically opposed to your own as possible: someone to react to something that may make complete sense to you but not to them. And that is really at the core of what we’re doing when presenting concepts to users: using them as a vehicle for stepping outside our own thinking, and into the messily subjective realm of usability testing, helping us to see flaws in our design that we simply were not capable of seeing ourselves.