
A screener is a questionnaire that helps researchers recruit the most appropriate participants for a user study.

Here is an example we used for our mobile messaging study in NYC. Blue Ridge Labs handled the recruiting. Most of this screener’s questions are a standard part of how they work with potential participants. Our questions, in red, focus on messaging and attitudes towards privacy. Additional questions about VPN use, email, and getting online were for our Fellow Gus Andrews’s research.

This example question sorts candidates by how frequently they message:

About how many messages would you say you send via your phone in an average week? This includes text, Facebook messages, WhatsApp, etc.

  • Less than 10
  • 11-30
  • 31-50
  • More than 50

Questions like this allow researchers to select a balanced mix of participants, for example a few who message infrequently and a few who message heavily. In our case, we wanted people who message frequently, and used this question to meet with people sending 31 or more messages per week. (In practice, most participants sent more than 50.) In recruiting terms, we screened out people sending 30 or fewer messages.

Screeners are a regular part of user research for studies on everything from breakfast-cereal selection to the usability of enterprise software. Professional recruiters rely on screeners to match participants with projects. Even if a team handles their own recruiting, such as by posting a listing in a café or on a message board, screeners can still be helpful. The process of creating one helps a team be very specific about who they want to talk to, and forces them to clarify who they think will use (or not use) their product.

Tips for Effective Screeners

Here are some considerations for writing effective screeners for privacy-preserving technologies and beyond.

Don’t ask for information you don’t need. When people complete your screener, you gather personally identifiable information about them and are responsible for storing it securely. Is gender identity relevant? Income? Zip code? Nationality? There are good reasons for collecting answers to those questions, but if they aren’t directly relevant to your study, don’t ask. Potential participants are more likely to complete a shorter, more targeted questionnaire, and the faster you get completed questionnaires back, the sooner you can recruit enough candidates to start your study. Our example screener included questions from our partner Blue Ridge Labs that weren’t immediately relevant to our work. The disadvantage of those extra questions was offset by the advantage of having our partner manage participants’ information: Simply Secure has no way to identify or contact the participants, since we never had access to the data.

Instead of yes/no questions, ask multiple-choice or open-ended questions. Yes/no questions tend to be more leading and encourage the potential participant to answer “correctly” rather than truthfully. For example, ask “How many text messages do you send in a week?” instead of “Do you send more than X messages?”

People are more than their demographics. Think of ways to ask questions about behaviors or attitudes, not just descriptive demographics like age. The cliché “so easy your mother could use it” assumes that mothers have more trouble using technology than other groups. Don’t assume that people in a particular demographic will all be the same or have the behaviors you’re looking for. Tech enthusiasts and tech-avoidant people come in all ages, sizes, colors, and genders.

If the research is in-person rather than online, screen for people who want to share their stories. Screeners are used for both qualitative, in-person research and quantitative, online research like surveys. If you’re recruiting for in-person research, you’ll want to talk to potential candidates first to see how comfortable they are responding to questions. Having a thick accent, making grammar mistakes, or being shy shouldn’t be barriers to participating in an interview. But someone who is ambivalent about the topic or uninterested in answering questions won’t be as good at inspiring a design team to empathize with them. Our screener with Blue Ridge Labs used “must talk in more than one-word answers” as a criterion.

Availability questions can shorten the time between the screener and the study. Ask which of several times participants are available for a feedback session. For example, consider this question:

Select all times you are available to meet at [address]:

  • Monday, June 3 at 1:00pm
  • Monday, June 3 at 2:30pm
  • Tuesday, June 4 at 6:00pm
  • Tuesday, June 4 at 7:30pm
  • None of the above

Questions like this remove the need for an extra step in determining availability. Be sure to ask how people want to be contacted to follow up about the times they selected. If you plan to meet someone Monday, don’t send email to a work address they don’t check over the weekend.

Make room for non-users. Teams can learn a lot from carefully listening to people who aren’t already interested in their projects. Making room in the screener to capture people who, for example, haven’t heard of two-factor authentication can lead to important insights about designing broadly appealing, generally accessible software.

Get the Right People

Screeners are helpful for making sure studies get the right mix of participants. They are also part of the foundation of a process for getting critical user feedback early and often. Taking the time to ask a few basic questions up front can set up the whole study – and project – for success.

Photo: Ame learning from a Blue Ridge Labs participant recruited using our screener.

Resources:

Simply Secure’s NYC Mobile Messaging Screener

Further Reading:

GV’s How to Find Great Participants (includes screener worksheet)

Spring UX’s Managing the Recruit