Tuesday, February 17, 2009

Looking for love: Deciding what to observe for

Last winter I worked with a team that wanted to find out whether a prototype they had designed for a new intranet worked for users. Their new design was a radical change from the site that had been in place for five years and was in use by 8,000 users. Moving to the new design was a big risk. What if users didn’t like it? Worse, what if they couldn’t use it?

We went on tour. Not to show the prototype, but to test it. Leading up to this moment, we had done heaps of user research: stakeholder interviews, field observations (ethnography, contextual inquiry – pick your favorite name), card sorting, taxonomy testing. We learned amazing things, and as our talented interaction designer started translating all of it into wireframes, we got pressure to show them. We knew what we were doing. But we wanted to be sure. So we made the wireframes clickable and strung them together so the prototype felt like it was doing something. And then we asked (among other things):

  • How well does the design support the tasks of each user group?
  • How easily do users move through the site for typical tasks?
  • Where do they take wrong turns? What trigger words are missing? What trigger words are wrong?

Validating the research
In some ways, you could look at this as a validation test – not validating the design necessarily, but validating the user research we had done. Had we interpreted our observations correctly, drawn the right inferences, and arrived at the right design as a result?

What was possible: where the design might break
To find out, we had to answer those Big Questions. What were the issues within them that we wanted to investigate? Let’s take an example: How easily do users move through the site for typical tasks? We wanted to know whether users took the path we intended, and if they didn’t, why not. For a task that asked participants to find the forms for opening a brokerage account, we listed the possible issues. Users might

  • start at the wrong place in the site
  • get lost
  • pick the wrong form
  • not recognize they’ve reached the right place

From that discussion of the disasters that we could imagine came a list of behaviors to observe for, or as my friends at Tec-Ed say, issues to explore:

  • Where do participants start the task?
  • How easily do participants find the right form? How many wrong turns do they take on the way? Where in the navigation do they make wrong turns?
  • How easily and successfully do they recognize the form they need on the gallery page?
  • How well do participants understand where they are in the site?

What we saw
From these questions, we learned that we got the high-level information architecture right – most participants recognized where to enter the site to find the forms. We also learned that a couple of spots along the task path combined weak trigger words with distractions that drew attention away from the cues that would have gotten participants to the goal more quickly. The groupings on the gallery page, though, were quite successful; most participants picked the right thing on the first or second try. It was easy to see all of this in how participants performed, but we also heard clues from them about what they were looking for and why.

And, by the way, the participants loved it. We knew because they said so.
