
Monday, August 2, 2010

Is your team stuck in a bubble?

This happens. The team is heads down, just trying to do work, to make things work, and then you realize it. Perspective is gone.

Recently I gave a couple of talks about usability testing and collaboratively analyzing data. There was a guy in the first row who was super attentive as I showed screen shots of web sites and walked the attendees through tasks that regular people might try to do on the sites.

Sweat beaded on his brow. His hands came up to his forehead the way someone's do after a sudden realization. He put his hand over his mouth. I assumed he was simply passionate about web design and was feeling distressed about the crimes this web site committed against its users.

Turns out, he was the web site's owner.

This I found out at a break. When people started filing in from lunch to start the next session, this fellow appeared in my second session. I had time to talk with attendees, so I decided to approach him. "Hi. I noticed you were in my first session. Glad you're back. I hope the first was useful." He said yes, he had found it useful. But he frowned. "You look puzzled. Do you have a question I didn't answer?"



The bubble is insidious

"No," he said. "But it's clear that I have been -- along with a whole lot of other people -- out of touch."

"Oh? You got some insights today, already?"

"Some especially applicable insights, actually. The site you used this morning as your example is the site I work on every day." He gave a sad grin.

I knew this day would come. I would get caught out critiquing or running a demonstration on a site whose owner was present. That day had arrived.

"I should have talked with you beforehand," I said. "The site has some classic problems. That's why I chose it as an example. It is one of dozens of sites in this domain that have similar issues. If I did or said anything that embarrassed you or your team, I apologize."

He sighed. "Not at all. You can't be embarrassed by something you weren't aware of." He went on, "We hadn't looked at the site at all from the point of view of users outside the organization. We've been in a bubble."

He actually seemed grateful. "Ah. That explains it," I said.

We chatted some more about the political pressures and the technology constraints that his team -- most teams -- faced in creating a great web site and maintaining it. There had been some usability testing on intranets and even on extranets. But it was a few years ago. And the audience for the public-facing web site was different from the internal-facing web apps.


Perspective comes from observing real users doing real stuff

The best tool for resolving disputes within a design team, for making design decisions based on data rather than opinion, is sitting next to a real person who wants to accomplish something as they use your design to do it.

Some people call this usability testing. Call it whatever you want (except "user testing"). You can make it simple or complex, but when boiled down to its essence there are three ingredients:

- Someone to try out your design.
- Somewhere to test.
- Something to study.

That's it. You can do it by the book, or you can do it very simply and ad hoc. The insights come from observing, first hand. I've seen just an hour of observation get many teams out of their own, customized bubbles.


Supporting great design: features of bubble prevention

Fortunately, my new friend stayed for the second session, in which I gave my recipe for supporting great experiences:

- Each phase includes input from users.

- The team is made up of people each with multiple skills from various disciplines.

- Management of the team is supportive and enlightened about the importance of the user experience.

- Everyone is willing to learn as they go along.

- The team has defined their usability goals and knows how they will measure their success.

Note that of the five attributes, two are directly about perspective (input from users; learning). Another two are about creating an infrastructure for getting and using that perspective (multidisciplinary team; setting usability goals). The remaining one (enlightened management) means there's support for getting and keeping perspective.

The importance of perspective cannot be overstated. Teams that meet with users regularly – every week or every month – turn out great experiences. Observing users regularly, at every phase of a design, gives a team evidence on which to make design decisions. More importantly, that act of being present with users can bring the team together, enlighten management further, and give a needed break from the rarefied space most of us work in every day.

Get out of your head and into your users'.

Tuesday, June 30, 2009

Testing in the wild, seizing opportunity

When I say “usability test,” you might think of something that looks like a psych experiment, without the electrodes (although I’m sure those are coming as teams think that measuring biometrics will help them understand users’ experiences). Anyway, you probably visualize a lab of some kind, with a user in one room and a researcher in another, watching either through a glass or a monitor.

It can be like that, but it doesn’t have to be. In fact, I’d argue that for early designs it shouldn’t be like that at all. Instead, usability testing should be done wherever and whenever users normally do the tasks they’re trying to do with a design.


Usability testing: A great tool
It’s only one technique in the toolbox, but in doing usability testing, teams get crisp, detailed snapshots about user behavior and performance. As a bonus, gathering data from users through observing them do tasks can resolve conflict within a design team or assist in decision-making. The whole point is to inform the design decisions that teams are making already.


Lighten up the usability testing methodology
Most teams I know start out thinking that they’re going to have a hard time fitting usability testing into their development process. All they want is to try out early ideas, concepts and designs or prototypes with users. But reduced to its essence, usability testing is simple:
  • Develop a test plan and design
  • Find participants
  • Gather the data by conducting sessions
  • Debrief with the team

That test plan/design? It can be a series of lists or a table. It doesn’t have to be a long exposition. As long as the result is something that everyone on the team understands and can agree to, you have written enough. After that, improvising is encouraged.

The individual sessions should be short and focused on only one or two narrow issues to explore.
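
A plan that short can live in a few lists in a shared doc or, for teams that like working that way, in a tiny structure in code. Here is a minimal sketch of what that might look like; the Big Question, tasks, and field names are made up for illustration, not a prescribed format:

    # A minimal, hypothetical plan for a quick usability test "in the wild".
    # The Big Question, tasks, and field names below are illustrative only.
    test_plan = {
        "big_question": "Can first-time visitors find and sign up for a course?",
        "participants": "3-5 people recruited on the spot who match the audience",
        "where_and_when": "cafe near the office; 15- to 20-minute sessions",
        "tasks": [
            "Find a course you would want to take",
            "Sign up for that course",
        ],
        "what_to_watch_for": [
            "where people hesitate or stall",
            "what they say as they work",
            "whether they finish the task",
        ],
    }

    # One note per observation keeps the debrief about behavior, not opinion.
    observations = []
    observations.append({
        "participant": 1,
        "task": "Sign up for that course",
        "note": "stopped at the login prompt; did not expect to need an account",
    })

A table on a whiteboard or a one-page doc that captures the same information works just as well; the point is that everyone on the team can see the plan and agree to it.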


But why bother to do such a quick, informal test?
First, doing any sort of usability test is good for getting input from users. The act of doing it gets the team one step closer to supporting usable design. Next, usability testing can be a great vehicle for getting the whole team excited about gathering user data. There is nothing like seeing a user use your design without intervention.

Most of the value in doing testing – let’s say about 70% – comes from just watching someone use a design. Another valuable aspect is the team working together to prepare for a usability test. That is, thinking about what Big Question they want answered and how to answer it. When those two acts align, having the team discuss together what happened in the sessions just comes naturally.


When not to do testing in the wild: Hard problems or validation
This technique is great for proving concepts or exploring issues in formative designs. It is not the right tool if the team is facing subtle, nuanced, or difficult questions to answer. In those cases, it’s best to go with more rigor and a test design that puts controls on the many possible variables.

Why? Well, in a quick, ad hoc test in the wild, the sample of participants may be too small. If you have seized a particular opportunity (say, with a seatmate on an airplane or a bus, as I have been known to do – yeah, you really don’t want me to sit next to you on a cross-country flight), a sample of one may not be enough to instill confidence in the rest of the team.

It might also happen, because the team is still forming ideas, that the approach to conducting sessions is not consistent from session to session. That isn’t necessarily bad. It just means it can be difficult to draw meaningful inferences about what the usability problems are and how to remedy them.

If the team is okay with all that and ready to say, “let’s just do it!” to usability testing in the wild, the remedy is simple: just do more sessions.


So, there are tradeoffs
What might a team have to consider in doing quick, ad hoc tests in the wild rather than a larger, more formal usability test? If you’re in the right spot in a design, doing usability testing in the wild is, for me, a total win:
  • You have some data, rather than no data (because running a larger, formal test is daunting or anti-Agile).
  • The team gets a lot of energy out of seeing people use the design, rather than arguing among themselves in the bubble of the conference room.
  • Quick, ad hoc testing in the wild snugs nicely into nearly any development schedule; a team doesn’t have to carve out a lot of time and stop work to go do testing.
  • It can be very inexpensive (or even free) to go to where users are to do a few sessions, quickly.


Usability testing at its essence: something, someone, and somewhere
Just a design, a person who is like the user, and an appropriate place – these are all a team needs to gather data to inform their early designs. I’ve seen teams whip together a test plan and design in an hour and then send a couple of team members to go round up participants in a public place (cafes, trade shows, sporting events, lobbies, food courts). Two other team members conduct 15- to 20-minute sessions. After a few short sessions, the team debriefs about what they saw and heard, which makes it simple to agree on a design direction.
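
When the team sits down to debrief, a quick tally of what everyone saw is usually enough to point at a direction. Here is a minimal sketch, assuming a handful of hypothetical notes collected during those short sessions:

    from collections import Counter

    # Hypothetical notes from four short sessions; the wording is illustrative.
    notes = [
        "missed the sign-up link on the course page",
        "missed the sign-up link on the course page",
        "unsure which course name matched the description",
        "missed the sign-up link on the course page",
    ]

    # Count how often each issue came up so the debrief stays grounded
    # in what the team actually saw, not in opinion.
    for issue, count in Counter(notes).most_common():
        print(f"{count} of {len(notes)} participants: {issue}")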


It’s about seizing opportunity
There’s huge value in observing users use a design that is early in its formation. Because it’s so cheap and so quick, there’s little risk in drawing inferences from the observations: a team can compensate for any shortcomings of the informal format by doing more testing – either more sessions, or another round of testing as a follow-up. See a space or time and use it. It only takes four simple steps.

Thursday, January 8, 2009

Testing in the wild defined

Lately I’ve been talking a lot about “usability testing in the wild.” There are a lot of people out there who make their livings as usability practitioners. Those people know that the conventional way to do usability testing is in a laboratory setting. If you have come to this blog from outside the world of user experience research, that may never have occurred to you.


Some of the groups I’ve been working with recently do all their testing in the wild. That is, they never set foot in a lab, but instead conduct evaluations wherever their users normally do the tasks the groups are interested in observing. That setting could be a grocery store, City Hall, on the bus, or at a home or workplace – or any number of other places.


A “wild” usability test sometimes has another feature: it is lightly planned or even ad hoc. Just last night I was on a flight from Boston to San Francisco. I’ve been working with a team to develop a web site that lists course offerings and a way to sign up to take the courses. As I was working through the navigation and checking wireframes, the guy in the seat next to me couldn’t help looking over at my screen. He asked me about the site and the offerings, explaining that they looked like interesting topics. I didn’t have a prototype, but I did have the wireframes. So, after we talked for a moment about what he did for a living and what seemed interesting about the topics listed, I showed him the wireframe for the first page of the site and said, “Okay, from the list of courses here, is there something you would want to take?” He said yes, so I said, “What do you want to do next, then?” He told me and I showed him the next appropriate wireframe. And we were off.


I learned heaps for the team about whether this user found the design useful and what he valued about it. It also gave me some great input for a more formal usability test later. Testing in the wild is great for early testing of concepts and ideas you have about a design. It’s one quick, cheap way to gain insights about designs so teams can make better design decisions.

Wednesday, November 26, 2008

Insights quickly and cheaply

After I gave a day-long seminar and a short talk at UI 13, I sat down with Tim Keirnan of Design Critique to talk about doing usability testing in the wild for quick, cheap insights from users. Download that podcast.

Monday, October 20, 2008

Ditch the book - Come to a virtual seminar on "usability testing in the wild"

I'm excited about getting to do a virtual seminar with the folks at User Interface Engineering (www.uie.com) on Wednesday, October 22 at 1 pm Eastern Time. I'll be talking about doing "minimalist" usability tests -- boiling usability testing down to its essence and doing just what is necessary to gather data to inform design decisions.

If you use my promo code when you sign up for the session -- DCWILD -- you can get in for the low, low price of $99 ($30 off the regular price of $129). Listen and watch in a conference room with all your teammates and get the best deal ever.

For more about the virtual seminar, see the full description.

Sunday, September 14, 2008

Usability testing in the wild – ballots

I’ve been busy the last few weeks doing some of the most challenging usability testing I’ve ever done. There were three locations where I did day-long test sessions. But that wasn’t the challenging part. The adventure came in testing ballots for the November election.

What was wild about it?
This series of tests came together through a project with the Brennan Center for Justice and the Usability Professionals’ Association. The Brennan Center released a report in July called Better Ballots, which reviewed ballot designs and instructions, finding that

  • hundreds of thousands of voters have been disenfranchised by ballot design problems
  • there has been little or no federal or state guidance on ballot design that might have been helpful to elections officials who define and design ballots at the local level
  • usability testing is the best way to ensure that voters can use ballots to vote as they intend

Also in the report, the Brennan Center strongly urged election officials to conduct usability tests on ballots. The recommendation to include usability testing in the ballot design process is a major revelation in the election world. The UPA Voting and Usability Project has developed the LEO Usability Test Kit to help local elections officials do their own simple, quick usability tests of ballot designs.

But not all local elections officials were ready to do their own usability tests, and some wanted objective outsiders to help evaluate ballots for this particular, important upcoming election.

I did tests in three locations -- Marin County, California; Los Angeles County, California; and Clark County, Nevada, home of Las Vegas -- with about 40 participants across the three locations. Several other UPA volunteers conducted tests and reviews in Florida, New Hampshire, and Ohio. In addition, UPAers trained local elections officials on usability testing and the LEO Test Kit in Ohio, Iowa, and a couple of other spots I can’t think of right now.

Pulling together a test in just a few days, including recruiting and scheduling participants
The Brennan Center report was released toward the end of July. Most ballots must be ready to print or roll out right now, the middle of September. The Brennan Center sent the report to every election department in the US and the response was great. Most requests came in in August, so among the five or six UPA Voting and Usability Project members available, we scrambled to cover the requests for tests.

We had the assistance of one of the Brennan Center staff to help coordinate recruiting, although it took some pretty serious networking to get people in to sessions on short notice, often within a few days.

The Brennan Center covered the expenses, but the time and effort spent by the people who worked with local elections officials and conducted the sessions was purely pro bono.


Not knowing what I would be testing until I walked onto the site
For two out of the three tests, I hadn’t seen exactly what I was going to be testing until I walked in the door of the election department. (I got the other ballot two days before the test.) This happened for a couple of reasons. Sometimes the local election official didn’t have a lot of information about what could be evaluated and how that might happen. Sometimes the ballot wasn’t ready until the last minute because of final filing deadlines or other constraints. Sometimes it was all of the above.

Fortunately, the main task is pretty straightforward: Vote! Use the ballot as you normally would. But there are neat variations. Are write-ins possible? On an electronic voting machine, how do you change a vote? What if you’re mailing in a ballot – what’s different about that, and how do the design and instructions have to compensate for not having poll workers available to answer questions?

Giving immediate results and feedback
So, we got copies of ballots, or something close to final on an electronic voting machine. We met briefly with the local elections officials (and often with their advisory committees). We recruited participants (sometimes off the street). We conducted 8, 10, or even 15 twenty-minute sessions in one day. Now it’s time to roll up what we saw in the sessions and to talk with the person who owns the ballot about how the evaluations went.

Handling enthusiastic observers and activists
A lot of people are concerned with the usability, accessibility, and security of ballots and voting systems. You probably are. Some are more concerned about it than others. Those are the people who show up to observe sessions. They’re well informed, they’re enthusiastic, and they’re skeptical. The observers and activists (many signed up to be test participants) were also keenly interested in understanding this activity. How was this different from focus groups or reviews by experts? How do we know that the problems we’ve witnessed are generalizable to other voters in the jurisdiction?


The good news: Mostly, the ballots worked pretty well. The local elections officials usually have the ability to make small changes at this stage and they were willing, especially to improve instructions to voters. By doing this testing, we were able to effect change and to make voting easier for many, many voters. (LA County alone has more than 3 million registered voters.)

Links:

Brennan Center for Justice report Better Ballots
http://www.brennancenter.org/content/resource/better_ballots/

UPA’s Voting and Usability Project
http://www.usabilityprofessionals.org/civiclife/voting/
voting@usabilityprofessionals.org

LEO Usability Test Kit
http://www.usabilityprofessionals.org/civiclife/voting/leo_testing.html

Ethics guidelines for usability and design professionals working in elections
http://www.usabilityprofessionals.org/civiclife/voting/ethics.html

Information about being a poll worker
http://www.eac.gov/voter/poll%20workers

EAC Effective Polling Place Designs
http://www.eac.gov/election/effective-polling-place-designs

EAC Election Management Guidelines
http://www.eac.gov/election/quick-start-management-guides