
Thursday, January 5, 2012

Four secrets of getting great participants who show up


What if you had a near-perfect participant show rate for all your studies? The first time it happens, it’s surprising. The next few times, it’s refreshing -- a relief. Teams that do great user research start with the recruiting process, and they come to expect near-perfect attendance.

Secret 1: Participants are people, not data points
The people who opt in to a study have rich, complex lives that offer rich, complex experiences that a design may or may not fit into. People don’t always fit nicely into the boxes that screening questionnaires create.

Screeners can be constraining, and not in a good way. An agency that isn’t familiar with your design, your audience, or both -- and may not be experienced with user research -- may eliminate people who could be great in user research or usability testing. Teams we work with find that participants selected through open-ended, voice-to-voice interviews become engaged and invested in the study. The conversation shows participants that they’re interesting to you, and that makes them feel wanted. The team learns about variations in the user profile that they might want to design for.

Secret 2: Participants are in the network
Let’s say the source is a panel or a database (versus a customer list). People who sign up for panels or recruiting databases tend to take part in studies to make easy money. Many are the kind of people who fill out surveys to win prizes. These people might be good participants, or they might not.

Teams that find study participants through personal, professional, and community networks find that when the network snowball of connections works, people respond because they’re interested and have something to offer (or a problem you might solve for them with your design).

They also come partially pre-screened. Generally, your friends of friends of friends don’t want to embarrass the people who referred them. If the call for participants is clear and compelling, the community coordinator at the church, school, club, union, or team will remember to mention the study as soon as they encounter someone they know who might fit. Don’t worry: the connections soon get far enough away from you and your direct network that your data will be just as objective and clean as can be.

Secret 3: Participants want to help you
They want to be picked for your team. They want to share their experiences and demonstrate their expertise. When teams are open to the wide range of participants’ experiences, they learn from participants during screening. Those selected become engaged in the research. These are participants who call when they’re going to be late, or apologize for having to switch times. They want to work with you. One team we worked with had a participant call from a car accident before calling the police. (They rescheduled!)

Secret 4: Participants need attention
You know all the details that go into a study. Participants need confirmation and reminding. Teams that send detailed email confirmations get respectable show rates. Teams that send email confirmations, and then email reminders just before the sessions get good show rates. Teams that send email confirmations, email reminders, and then call the participants to remind them in a friendly, inviting tone get stellar show rates.
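To make the cadence concrete, here’s a minimal sketch in Python of those three touchpoints. The offsets and names are illustrative assumptions, not a prescription -- tune them to your own schedule and tools.

```python
from datetime import datetime, timedelta

# The three touchpoints described above: confirm at booking, remind by
# email the day before, and call shortly before the session.
# Offsets are illustrative; adjust to your own schedule.
TOUCHPOINTS = [
    ("email confirmation", None),            # send when the session is booked
    ("email reminder", timedelta(days=-1)),  # the day before the session
    ("phone reminder", timedelta(hours=-3)), # a friendly call before the session
]

def reminder_plan(booked_at, session_at):
    """Return (touchpoint, when) pairs for one scheduled participant."""
    return [
        (name, booked_at if offset is None else session_at + offset)
        for name, offset in TOUCHPOINTS
    ]

for name, when in reminder_plan(datetime(2012, 1, 9, 15, 30),
                                datetime(2012, 1, 12, 10, 0)):
    print(f"{when:%a %b %d %H:%M}  {name}")
```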

Some teams use the call before the session to start the “official” research. Rather than having the recruiter do the final call, the researcher phones to explain the study and the roles, and asks some of the warm-up questions you might normally open a regular session with. These researchers establish a relationship with the participant. They also get a head start, leaving more time when they’re face-to-face with a participant to observe behavior rather than interview.


Perfect attendance is worth the effort
When all the scheduled participants show up, the gold stars come not only for efficient use of the time in the lab and for keeping clients’ and team members’ eyes and ears with users. The team also likely ends up with better, more appropriate, more informative participants overall. That means better, more reliable data to inform design decisions.

Friday, January 16, 2009

Yes or No: Make your recruiter smarter

In response to my last post about writing effective screeners, c_perfetti asks:

I agree open-ended questions in a screener are best.

But one reason some usability professionals use 'yes/no' questions is because they don't have confidence that the external recruiters can effectively assess what an acceptable open ended answer would be.

In some cases, they may find that asking a 'yes/no' question is the safer approach.

How would you handle this concern?


You asked a great open-ended question! What you need is a smarter recruiter.

There are two things you can do to make your recruiter smarter: brief her on the study, and give her the answers.

Brief your recruiter

Basically, what we’re talking about is giving your recruiter enough literacy in your domain to screen intelligently rather than act as a human SurveyMonkey. You can make your recruiter work smarter for you by doing two things:

  • Spend 15 minutes before the recruit starts explaining to the recruiting agency the purpose and goals of the study, the format of the sessions, what you’re hoping to find out, and who the participant is. For this last, you should be able to give the agency a one- or two-sentence envisionment of the participant: “The participant has recently been diagnosed with high cholesterol or diabetes or both and has to make some decisions about what to do going forward. She hasn’t done much research yet, but maybe a little.”

  • Insist that the agency work with you. Tell them to call you after the first two interviews they do and walk through how it went. Questions will come up. Encourage them to call you and ask questions rather than guessing or interpreting for themselves.

With this training done, you can trust your recruiting agency a bit more. If you continue to work with the agency, over time they’ll learn more about what you want, and you’ll build a more collaborative relationship.

Tell the recruiter what the answers might be

Now, to your question about Yes/No.

Using Yes/No questions leads to one of two things: inviting the respondent to cheat by just saying “yes!” or scaring the respondent into giving the “wrong” answer because the “right” answer might seem bad or embarrassing. In the screening interview, a question like “Do you have high cholesterol?” can feel scary or accusatory (and saying “no” would disqualify him from the study). Or the question is so broad or ambiguous that “yes” is too easy. “Do you download movies from the Web?” could be stretched to mean ‘watching videos on YouTube’ or torrenting adult entertainment, but what it really means is ‘Do you use a service that gives you on-demand or instant access to commercial, Hollywood movies, and then watch them?’

If it’s the main qualifier for the study -- Do you do X? -- you can avoid the problem by putting out the call for participants the right way. Check the headlines on craigslist.org (usually in Jobs/ETC or in Volunteers), for example. There you’ll see pre-qualifying titles on the postings, and that’s the place to put the question, “Do you have high cholesterol?” or “Do you use a headphone with your mobile phone?” You still have to verify with open-ended questions during screening.

If you find yourself wanting to ask a Yes/No question:

  • Craft an open-ended question and provide several possible right answers for the recruiter to use as a reference (not something to read to respondents); a small tallying sketch follows this list. A possible script for the recruiter:

“Tell me about the last cholesterol test you had. What did the doctor say?”
[Recruiter: Listen for answers like these
___ He said that I’m okay but I should probably watch what I eat and get more exercise. My total cholesterol was <under 200>.
___ He said that if I didn’t make a change I’d have to start taking meds/a prescription, or he’d take away my cheese. My total cholesterol was <200-239>.
___ He said that I’m at high risk for heart disease. I could have a heart attack. My total cholesterol was <240 or higher>.]

  • Think of one key question that would call the respondent out on fibbing to get into the study. For a gaming company, we wanted people who had experience with a particular game. Anyone can look up the description of a game online and come up with plausible answers. So we added a question asking what the respondent’s favorite character was and why. Our client provided a list of possible answers: names and powers. The responses were fascinating and indicated deeper knowledge of the game than a cheater could get from the cover art or the YouTube trailer.
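If it helps keep the recruiter’s tally consistent, here’s a tiny sketch that maps a reported total-cholesterol number to the bands in the sample script above. The thresholds come from that script; this is screening bookkeeping, not medical advice.

```python
def cholesterol_band(total_mg_dl):
    """Map a reported total cholesterol to the answer-key bands above."""
    if total_mg_dl < 200:
        return "okay -- watch diet and exercise"
    if total_mg_dl < 240:
        return "borderline -- change needed, meds ahead"
    return "high risk for heart disease"

print(cholesterol_band(185))  # okay -- watch diet and exercise
print(cholesterol_band(225))  # borderline -- change needed, meds ahead
print(cholesterol_band(250))  # high risk for heart disease
```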

The short answer: You should still avoid Yes/No questions in screeners. First, think about what you’re really asking and what you want to find out by asking it. Is it really a yes/no question? Then train your recruiter a little bit beforehand, and anticipate what the answers to the open-ended questions might be.

Thursday, January 15, 2009

Why your screener isn't working

I get that not every researcher wants to or has time to do her own recruiting of participants. Recruiting always seems like an ideal thing to outsource to someone else. As the researcher, you want to spend your time designing, doing, and analyzing research.

So, you find an agency to do the recruiting. Some are very appealing: They're cheap, they're quick, and they have big databases of people. You send requirements, they send a list of people they've scheduled.

How do you get the most out of an agency doing the recruiting? Write a great screener -- and test it. How do you get a great screener? Here are a few tips.

Seven screener best practices

  1. Focus questions on the behavior you want to see in the test. For example, for a hotel reservations website, you might want to know “Does the person book his own travel online?” For a website for a hospital network, the behavior might be “Does the person have a condition we treat? Is the person looking for treatment?”

  2. Limit the number of questions. If the question does not qualify or disqualify a respondent for the study, take the question out. If you want to collect information besides the selection criteria, develop a background questionnaire for the people selected for the study.

  3. Think about how you're going to use the data collected from the screener. Are you going to compare user groups based on the answers to screener questions? For example, if you're asking in your screener for people who are novices, intermediates, and experts with your product, are you actually going to have a large enough sample of participants to compare the data you collect in the usability test? If not, don't put requirements in your screener for specific numbers of participants with those qualities. Instead, ask for a mix.

  4. Avoid Yes/No responses. This is difficult to do, but worthwhile. With Yes/No questions, it’s easy for respondents to guess the “right” answer that gets them into the study. In combination, a series of gamed Yes/No responses can make a respondent look like he fits your profile when he really doesn’t. (A small lint sketch after this list shows one way to catch yes/no phrasing in a draft.)

  5. Ask open-ended questions if at all possible. This gets respondents to volunteer information in answer to a real question rather than picking the "right" choice from a list of options that the recruiter reads to them. You can give the recruiter the choices you think people will come up with and a pick list for the recruiter to use to note the data. But the recruiter should not read the list to the respondent. For example, on the hospital website, you might ask, "Tell me about your health right now. What were the last three things you visited a doctor for?"

  6. Avoid using number of hours or frequency as a measure of, or a proxy for, expertise. I was looking for tech-savvy people for one study. One respondent told us she spent 60 hours a week on the Web. When she got into the lab, it was clear she didn't know how to use a browser. When I asked her what she does on the Web, she said this computer didn't look like hers at all -- that at home she starts in a place where she clicks on a picture and it brings up her favorite game. Turns out, her son-in-law had set up a series of shortcuts on her desktop. She knew the games were on the Web, but that was all she knew about the Web.

  7. Watch the insider jargon. If you're using industry or company terms for products or services that you want to test, you may prime respondents for what you're looking for and lead them to give the “right” answer. Again, open-ended questions can help here. This is where you start looking at your product from the user's point of view.
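As a quick sanity check on practice 4, here’s a toy “screener lint” in Python that flags draft questions phrased as yes/no. The opener patterns are an illustrative assumption and won’t catch every closed question; it’s a nudge toward open-ended phrasing, not a substitute for judgment.

```python
import re

# Common openers of closed yes/no questions; illustrative, not exhaustive.
YES_NO_OPENERS = re.compile(
    r"^(do|does|did|are|is|was|were|have|has|had|can|could|would|will)\b",
    re.IGNORECASE,
)

def flag_yes_no(questions):
    """Return the draft questions that read as yes/no rather than open-ended."""
    return [q for q in questions if YES_NO_OPENERS.match(q.strip())]

draft = [
    "Do you download movies from the Web?",
    "Tell me about the last trip you booked online.",
    "What were the last three things you visited a doctor for?",
    "Have you been diagnosed with high cholesterol?",
]
for q in flag_yes_no(draft):
    print("Rework as open-ended:", q)
```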

Need help developing a screener? Need help with doing recruiting? Contact me about recruiting services my company offers. We've got a great process and a 90% show rate.

Wednesday, November 26, 2008

Recruiting 101: Treat your test participants like humans

One of the questions I’m asked most often at talks and workshops is “What about recruiting -- how do I do a better job of that part of a usability test?” One way is to ensure that you’re remembering that the people you recruit are humans. I wrote about this topic for Boxes & Arrows.

Thursday, April 12, 2007

The Hardest Part: Getting the right participant in the room

This week has proved to me that nothing -- nothing -- matters as much as having the right participants.


Without the right participants, it all falls apart
If you don't have participants who are appropriate, you can't learn what you want to learn because they don't behave and think the way real users do. You may get data, but what does it mean? Not much.


Who's the right participant?
The right participant is a person, not a set of demographics or psychographic data taken from market segmentations. It's easy to lose sight of the idea that the person sitting in the chair using the product you're testing is a person, not a tool for identifying design problems or a substitute for you. He or she is a person with a personality, habits, memories, beliefs, attitudes, abilities, intelligence, experience, and relationships. You want the person to bring those things with them (along with their computer glasses). That's the stuff of mental models. That's what makes the sessions interesting and unpredictable.


How do you know?
You should be able to visualize who the participant-person will be by talking about the kinds of things you want them to do in the session. Here's an example from a study I'm working on right now. We want:

Someone who travels at least a few times a year and stays a couple of nights in a hotel on each trip. This person books his own travel because it's quicker and easier than giving instructions to someone else. He likes to book online because he can see options and amenities that inform his final decisions. This traveler knows where he's going, how to get there, and what to do on arrival.


There's a task with a context: booking travel accommodations online. There are motivations: it's comparatively easy, and there's decision-making information available online that isn't available otherwise. There's a level of experience in the task domain: traveling a few times a year and staying in hotels.

Visualizing participants this way is a technique I borrowed from User Interface Engineering.

You can create a screening questionnaire from that description that should get you appropriate participants. And look, there are very few selection criteria embedded in the visualization. We don’t care what the annual household income is, or the education level, or even what the person’s job is. Don’t make this too hard for yourself by collecting data you’re not going to use. (Besides, then you have to protect that personal information, but I’ll talk about that later.)
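As a sketch of that translation, here’s one way to carry the traveler visualization into open-ended screener questions, pairing each question with the behavior the recruiter should listen for. The wording and notes are illustrative, not a finished screener.

```python
# Each open-ended question pairs with the behavior from the visualization
# that the recruiter should listen for. Wording is illustrative.
TRAVELER_SCREENER = [
    ("Tell me about the last trip you took that included a hotel stay.",
     "travels a few times a year; a couple of nights per trip"),
    ("Walk me through how you booked that trip.",
     "books his or her own travel, online"),
    ("What made you pick that hotel over the others you looked at?",
     "compares options and amenities before deciding"),
]

for question, listen_for in TRAVELER_SCREENER:
    print(question)
    print(f"  [Recruiter: listen for -- {listen_for}]")
```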

Now, share your test objectives and your visualization of the participant with your recruiter.


Stay tuned for much more about recruiting, like how to work with a recruiter, where to find the right participants, and lessons that Sandy and I have learned through dozens of recruits.