Tuesday, April 24, 2007

Keeping a rolling list of issues throughout a study

Design teams are often in a hurry to get results from usability studies. How do you give them useful findings early while still making sure the final results are valid?

One thing I do is start a list of observations or issues after the first two or three participants. I go over this list with my observers and get them to help me expand or clarify each item. Then we agree on which participants we observed having each particular problem.

As the study continues, I add to that list the number of each participant who had the issue and note any variations on each observation.

For example, in a study I’m working on this week, we noted on the first day of testing that

Participants talked about location, but scrolled past the map without interacting with it to get to the search results (the map may not look clickable)

I went back later and added the participant numbers for those whom we observed doing this:

Participants talked about location, but scrolled past the map without interacting with it to get to the search results (the map may not look clickable) PP, P1, P3

Today, I’ll add more participant numbers. At the end of the study, we’ll have a quick summary of the major issues with a good idea of how many participants had each problem.

There are three things that are “rolling” about the list. First, you’re adding participant numbers to each issue as you go along. Second, you’re refining the descriptions of the issues as you learn more from each new participant. Third, you’re adding new issues to the list as they come up (or as you notice things you missed earlier, or that first seemed like one-off problems).
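For those who prefer to keep the rolling list in a structured file rather than a plain document, the structure is simple enough to sketch in a few lines of code. This is only an illustration, not a prescribed tool; the issue text and participant IDs below are hypothetical, echoing the map example above.

    # A rolling issue list: each issue carries an evolving description,
    # the set of participants observed having it, and per-participant notes.

    issues = []  # ordered list of issue records

    def log_issue(issues, description, participant, note=None):
        """Add a participant to an existing issue, or start a new one."""
        for issue in issues:
            if issue["description"] == description:
                issue["participants"].add(participant)
                break
        else:
            # New issue: first time anyone was observed having it.
            issue = {"description": description,
                     "participants": {participant},
                     "notes": []}
            issues.append(issue)
        if note:
            issue["notes"].append(f"{participant}: {note}")
        return issue

    # After day one of testing (hypothetical data):
    log_issue(issues, "Scrolled past the map without interacting with it", "PP")
    log_issue(issues, "Scrolled past the map without interacting with it", "P1")
    log_issue(issues, "Scrolled past the map without interacting with it", "P3",
              note="said the map did not look clickable")

    for issue in issues:
        print(issue["description"], sorted(issue["participants"]))

Keeping the variation notes tagged with participant numbers, rather than folding every clarification into the issue description itself, also preserves who actually said what.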

I will still go back and tally all of the official data that I collected during each session, so there may be slight differences between these debriefing notes and the final report, but I have found that the rolling issues list usually matches the final report pretty closely.

Maintaining the rolling list keeps your observers engaged and informed, helps you cross-check your data later, and gives designers and developers something fairly reliable to work from right away.

1 comment:

  1. I know this article is WAY old, but I just found your blog, so there ;-)
    Question about your way of managing your rolling list: by refining the issues over multiple testing sessions, don't you end up attributing interpretations and clarifications to every participant who had this particular issue, even if they didn't have the same reason for that issue?

    e.g. three participants skipped the map, but only P3 said it didn't look clickable. If you just refine the issue text, you would attribute that clarification to all three participants. Couldn't that skew your findings to some extent?

    Cheers,
    Jan
