This is a slidecast (slides with audio) of the talk “Agile for the rest of us” I gave at the 2009 IA Summit.
Here is an interesting little message I was greeted by while doing my usual multi-tab sweep of news/todo/email/twitter/etc this morning.
What I think is so interesting about this seemingly innocuous little message is how much it tells you about those who designed it (or not, as it were). Here are a couple of nuggets one might glean:
This message appeared because my session at this site had been automatically ended: I had not clicked on anything within the allowed inactivity window. For those not familiar with this, there are two reasons why one would want to time out a session: to protect the privacy of the user’s content (say, on a shared or unattended computer), and to free up server resources tied to inactive sessions.
In other words, for this business context, a 15-minute timeout makes no sense. I’m guessing it was set by a developer, possibly without any discussion with the UX designer, who may not even have been aware of the issue: one that is, on one hand, very technical, but that can, on the other, significantly impact the user experience. And it is incredibly unlikely that this limitation is due to a need to conserve resources. A 1-hour timeout would likely make more sense.
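To make the technical side of this concrete, here is a minimal, framework-agnostic sketch of the kind of inactivity check that sits behind such a message. All names and values here are my own illustration, not the site’s actual code; the point is that the timeout is usually a single configurable number, which is exactly why it deserves a deliberate UX decision rather than a developer default.

```python
import time

# Hypothetical timeout values, in seconds. The site in question appears
# to use something like 15 minutes; 1 hour better fits this business context.
FIFTEEN_MINUTES = 15 * 60
ONE_HOUR = 60 * 60

def session_expired(last_activity, timeout, now=None):
    """Return True if the session has been idle longer than `timeout` seconds."""
    if now is None:
        now = time.time()
    return (now - last_activity) > timeout
```

With this in place, a 20-minute pause ends the session under a 15-minute timeout but not under a 1-hour one; changing the user’s experience is literally a one-line configuration change.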
When writing for the web, particularly when writing interface labels and dialog copy, surgical-level word choice is critical. What is so unfortunate is that a lot of interface copy seems to default to accusatory and unnecessarily dramatic (“Warning!”) language. An experienced author would have been sure both to strike a more empathetic tone and to explain why the session had to be ended. Maybe something like:
You haven’t clicked on anything in the last 30 minutes. To protect the privacy of your personal content, we’ve automatically signed you out. We apologize for the inconvenience.
It’s interesting how much a tiny little detail can tell you about the person who designed it.
Looking forward to the next book club event!
So, I’m about to post a response as to why I both agree and disagree that Twitter sort of has Jumped The Shark of late, when I happened to take a gander at Digg and came across this fine piece of ironic Diggxtapositioning:
To me, this perfectly exemplifies the confused, convoluted, what-the-hell-is-this and by-the-way-I-totally-love-it thing we call Twitter. I even go through my own Jekyll-and-Hyde phases with Twitter, sometimes from one tweet to the next: one moment finding it incredibly useful, particularly when fellow tweeters share first-dibs info with their followers about upcoming events or useful stuff, such as discounts on products or whatever.
But in the next moment, after that very useful tweet, there will be the inevitably annoying tweet-fart, as in when people share important information such as that they just ran out of toilet paper or that their dog just farted. Ok, I guess that could be funny…
But getting back to the above juxtaposed Diggs about Twitter, the statement they make more than anything else is that none of us, not even the CEO of Google, really understands Twitter or really knows what to make of it.

In some ways, Twitter is a microcosm of what the Internet was back when the only browser was Netscape (ok, and Lynx and Mosaic), AltaVista was the hot search engine (and Hotbot too), and we were all exploring this amazing World Wide Web novelty store that was derided as totally useless by some and adored, naively or not, by the millions who started getting online back in those days. I remember people back then scoffing at the idea that people would buy books online from some weird site called Amazon instead of buying them in a bookstore. I remember people scoffing at the idea of being able to watch TV online, at a time when, at best, the only video you might find on the web was short clips, which were a very big deal to download and view. Well, you get the point.

Twitter is just another one of those weird ideas that people like Schmidt, perhaps the most unexpected of Luddites, just don’t get, or think they get but don’t, because the fact is that, as with the web in its early days, the thing just isn’t mature enough to really be understood. (Well, except by a couple of guys named Larry and Sergey – what’s the name of that company they started again?)
Oh, and in case you want to actually read the two ironically juxtaposed stories…
The iterative design methodology is, in my opinion, the most effective and powerful approach to designing websites and applications. This is particularly true when comparing it to the more traditional waterfall methodology. While the methodology may be old hat to our more mature cousins, industrial designers, it seems many of us in the UX community are only now discovering this technique. (At the same time, there are aspects of working iteratively when designing digital products that are unique compared to fields such as industrial design.) So, what is iterative design all about, and what makes it powerful?
Iterative design is, at the surface level, really only different from the waterfall methodology in one way. Instead of specifying the entire application before building it, one fully designs and builds one part of the application, and then uses that and previously completed units as a basis for further design and production. In other words, iterating is designing and, more specifically, understanding what one is designing through actually creating it. Alistair Cockburn describes it as “learning by completing” (and distinguishes it from incremental design: incrementing is about adding new elements, which one can choose to do iteratively, while iterating is about reworking and refining). Perhaps most importantly, an underlying principle of the iterative method is that until you have actually built what you are designing, you are not going to be able to fully understand it.
So this one change, of starting to build earlier in the design process and continuing to build as part of the design process, of folding what one in waterfall might think of as “production” into the design process itself, can have a cascading effect on the entire design effort. This can include how your team members are resourced for a project, and who works with whom and when. More importantly, it can also empower your team in ways that simply are not possible in the more linear waterfall model. For anyone practicing one of the various flavors of Agile, this will all likely look quite familiar, and for good reason: the methodologies that converged under the Agile umbrella really are just different flavors of iterative (as well as adaptive/lightweight) development.
There are many great reasons for going iterative. One of the most significant is the ability to discover design problems earlier and thereby reduce overall project risk.
One of the greatest threats to any design endeavor is discovering design problems late in the project lifecycle. The later you make the discovery, and the bigger the problem, the greater the risk to your project.
Some years ago, I worked on a product that was to allow people to maintain and share large policy documents. We spent hours whiteboarding and brainstorming solutions, regularly presented sketches, wireframes, and comps to users for feedback, and produced detailed wireframes and functional specs. Everything was going swimmingly until the time came to actually implement our idea.
As it turned out, the time required to save the fairly large documents that users needed to work with, within a browser-based application, was at a minimum several minutes. Actually, it was more like 15-20 minutes. In the world of user interfaces, this is effectively an eon. Because we were so far along in our project, and had already made several fundamental design decisions, such as making the application fully browser-based, our options for addressing the issue were quite limited.
We were forced to go with a solution that we knew would conflict with how our users actually worked: breaking the document up into smaller pieces, even though our users in fact tended to make edits or move content across the entire document. This design change would therefore likely reduce the overall value of the product. In other words, discovering the problem so late ended up making the product far less appetizing to users, and therefore less competitive in the marketplace.
Had we taken an iterative design approach, however, this issue would have been discovered much sooner. Instead of defining the design solution for the entire product before actually beginning to build it, we would have created only a high-level design of the application, fully designed and built a skeleton version of that, and then evolved the design based on what had been built.
We could also have chosen to build the application incrementally, i.e. building it in different units, and then iterating on each of those units. By unit, we are usually referring to some natural grouping of functionality or application entity. In the case of this application, the document editor would have been such a unit, and would also have been deemed the top priority unit (as it is the core function of the product), and therefore built out first. In doing so, by actually testing our solution with real code, we would have discovered this very technology-specific issue almost immediately.
More importantly, because we would have been at a far earlier stage in the project lifecycle, we would not yet have made any no-going-back fundamental design choices, and would therefore have a much wider range of options in how to address the issue.
In other words, when designing iteratively, you are able to much more easily adapt and change course in response to the unknown or unexpected, in contrast to a waterfall model, where you are much more locked into a specific design direction once production is underway.
This is of course not to say that working iteratively will allow you to adapt and make big changes up until the very end of the project. Rather, by starting to build earlier, and discovering big issues early, you radically reduce the chance of making show-stopper discoveries late in the game.
When presenting sketches, wireframes, comps, and clickable prototypes to users for their feedback, you are effectively asking them to imagine how the product will work, to prototype it in their mind as it were, and then provide feedback on what they are imagining. This is not always a bad thing; in fact, during the early stages of developing a design idea, it is probably preferable to present something rough to users, inviting them to interpret your idea more widely and also to feel more comfortable rejecting or, better yet, suggesting improvements to your solution. This is a fundamental reason why we sketch.
The problem is the amount of time that passes between that activity and actually confirming or denying whether or not that thing in your head really is a good idea. In the waterfall methodology, the time that passes between those two activities can be a looooong time indeed. More time between conception and validation of an idea means increased risk.
Problems begin to emerge when we ask users to review the ‘real’ solution. If we present that solution in the form of wireframes, we place an incredible cognitive burden on the user, who must try to imagine the actual product from one or more static drawings. Talk about Making Me Think! Even if we create a clickable prototype, a large amount of cognitive overhead is still left to the user. This can include anything from issues such as page refresh (e.g. a page may not refresh when clicking a tab in the prototype, but may do so in the real application) to the range of idiosyncrasies of widget controls to the absence of real, live content. As applications become increasingly complex, this potential cognitive overhead will continue to grow. And the more cognitive overhead we burden our users with, the less reliable their feedback.
In the iterative model, however, the definition of what is and what is not a validation of a design solution is quite simple: Everything that is not actual working software is just a sketch, an increasingly refined idea.
In this model, we have an actual build, a unit of the actual application, that we can present to stakeholders and allow users to interact with and respond to. When presenting users with something that actually works, they can focus on the activity your product is intended to support, and give you feedback on that. Additionally, technologists can validate their code (as well as learn from and improve upon it when working on future iterations), while visual designers can evaluate their look and feel as rendered by an actual browser.
In a waterfall-based methodology, an incredible amount of time is spent creating and maintaining documents that describe the design solution, from wireframes to functional specs and on and on. As the software systems we design become larger and more complex, more time is required to describe their design in ever-greater detail. Worse, once we begin to discover issues with our design solutions, we have to go back and update these tome-sized artifacts, which takes up even more time if we are further along in our process and multiple artifacts need to be updated. And to add insult to injury, since there really is no standard for the types of documents we are creating, we have to spend even more time adding legends and other references explaining the meaning of stuff in the very artifacts intended to explain what stuff means.
This, in my opinion, is a recipe for disaster. For one, it becomes increasingly difficult, if not impossible, to keep all the documents up to date, which means they can’t be relied upon, which means all the work you put into them may be for naught.
And second, it seems that the more you document, the less people read, for the simple reason that specification documents are inhumanly boring and require a lot of brainpower on the part of the reader to interpret and convert their content into the actual solution. So what ends up happening is that designers and developers sort of skim through all your painstakingly created artifacts, creating something that might be close to what was specced, but rarely exactly what was specced, which makes the entire arduous effort sort of self-defeating.
In the iterative model, the approach is instead to leverage the technology itself as the specification platform, and thereby spend much more time engaged with the actual product, with the actual design, rather than being bogged down with huge artifacts that ultimately have nothing to do with the product.
The idea of using the code itself as a specification may sound strange to the non-technical reader, but if you think about it, what is code but a set of instructions? And while those instructions may be gibberish to you, they are highly meaningful to developers.
Now, this is not to say that we don’t create any specs as part of an iterative model. The real difference is that the only specs we maintain are those for new design; as soon as something has been built, the corresponding spec is archived, eliminating the potential redundancy (and the inevitable accompanying inconsistency) of having both a spec and a build.
If you’re new to iterative design and have really only worked in a waterfall process, I’m guessing it may be difficult to wrap your head around this concept. But hopefully this (somewhat meandering) post will help you get underway.