Monday, June 25, 2007

Does Geography Matter?

Today I’ve been writing for the new edition of the Handbook of Usability Testing about setting up a test environment. Should you be in the lab or in the field? If you’re in the lab, what should the setup be like and why? These seemed like fairly easy questions to answer. But then I got to a question that I’ve been wondering about myself for years: Does geography matter?

Nielsen says it doesn’t

Jakob Nielsen's April 30, 2007 Alertbox (http://www.useit.com/alertbox/user-test-locations.html) says that geography doesn't matter (unless there are international considerations, a single industry dominates the location, or a couple of other special circumstances apply). “You get the same insights regardless of where you conduct user testing, so there's no reason to test in multiple cities. When a city is dominated by your own industry, however, you should definitely test elsewhere.”

I sent my question around to several usability testing experts. Jared Spool sent one of the most interesting responses, but nearly everyone had experience indicating that geography does matter.

Spool, Killam, and James say it does matter

“Remember,” Jared Spool says, “if you know everything [emphasis mine] there is to know about your users, their tasks, and their contexts, then you never need to test in the first place -- all you need to do is be really smart and create a simple design. At that point, it boils down to a simple matter of programming.”

Bill Killam, of User-Centered Design, put it this way:
Performance and subjective preference and motivation are all linked, so any change in location that affects one or more of these can be a factor across all of them. But we usually find it appears only in subjective data – not as much in behavioral observations. Even local variations like testing within the client’s office versus a “neutral lab” sometimes have noticeable effects on things like projected responding. However, also consider regional differences in the use of or exposure to the product being tested. That will certainly affect results. Not to use too specific an example, but consider testing voting machines in the DC area versus a rural location. Or DC where paper and DREs [direct recording electronic voting machines] already exist versus NY where a full face ballot is used versus Oregon where all votes are by [mail].
Janice James contributed, "I've found that it IS important to test across multiple locations because I've found that the users do differ in terms of their experience level and exposure to product types, and technology, in general."


Professor Spool and I continued the conversation by IM:

Dana: Okay, so it seems like your answer and Jakob's article come from different assumptions. Jakob seems to assume that the field work is done and that the team knows the context, etc. You seem to be saying that teams don't always do the field work first. In Nielsen’s parking meter example, the design team seems to have some background about the location.

Jared: Except teams always think they know everything.

Dana: I also think Jakob is assuming a fairly mature UX [user experience] group.

Jared: But, Jakob says, "except for the few special cases discussed below, we've always identified the same usability findings, no matter where we tested. By now, we can clearly conclude that it's a waste of money to do user testing in more than one city within a country." Good thing he wasn't testing soda. Or pop. Or coke.

Dana: Yes, to your example, testing IA [information architecture] is a REALLY good reason to test in multiple locations. And the design team always will get some benefit from being on site - usually something that wasn't predictable.

Dana: And with the audience for this book, I think it's safe to assume that they won't have done much (or any) field work before doing usability testing.

Jared: Right.

Jared: Testing in more than one locale is definitely a luxury.

Jared: I wouldn't not test at all just because you can't get to more than one venue. Another approach is to make it work great for the local community and look to support and other feedback channels to hear if regional differences pop up. It's the cross-your-fingers approach to design. It's worked well through the centuries. Another approach is to look at other competitive/comparable designs for things that might be regional. If the designs have elements that seem different, is there a regional explanation?

Jared: Many design issues are just pure human behavior, independent of any cultural or regional issues.

Dana: I believe that.

Jared: Rolf [Molich] and Carolyn [Snyder] did a study where they tested people in two countries on the same sites. They found 80% of the problems were in common. They found regional biases. People in Europe didn't understand the purpose of a gift registry (and found it to be quite vulgar). But, if you perfected the design for your local venue, you'd nail 80% of the problems found anywhere else, if you extrapolate their results. And that's a pretty good hit rate for a small budget.

Dana: I agree.

Jared: My guess is that's what Jakob was trying to say.

Dana: That's possible.

Jared: It's hard to say with his shield of impenetrable ego obscuring the real intent.

Dana: Do you mind if I clean up this thread and use it in a blog post?

Jared: Not at all.

Jared: You can even leave in the impenetrable ego comment.

Dana: Makes it more believable that it was a conversation with Jared Spool.

Jared: Remember, all elephants are tall and flat, except for the instances when they are long and skinny.

Dana: That's right. Anyway, thanks for answering the email and for continuing the discussion. I appreciate it.

Jared: I'm saying his exceptions are the generalized case. And his generalized declaration is rarely executable.
