Article type

Professional method piece

The one thing this piece is trying to say

A great many research rounds do not fail at the discussion guide. They fail earlier.

They fail because the wrong people walked into the room.

  • you spoke to people with no relevant recent experience
  • you recruited loyal existing users when you needed novices
  • you found articulate participants rather than appropriate ones
  • you recruited whoever was easiest to book rather than whoever could answer the question

So recruitment is not just logistics. It is a more fundamental activity:

define the evidence your study actually needs, then use recruiting and screening to keep the wrong people out before the session even begins.

Start by dismantling a very common myth: it is not “five people”, it is “which five”

The phrase “let’s just test it with five users” is dangerous not because the number is always wrong, but because it conceals the harder and more important question:

who exactly are those five people, how recently have they done the thing you care about, and how well do they match the decision you are trying to make?

Consider the familiar failure modes:

  • studying new-user onboarding, but recruiting experienced customers
  • trying to understand churn, but only speaking to active users
  • looking at a B2B admin workflow, but recruiting someone adjacent to the job rather than the person doing it
  • researching a high-stakes choice, but the participant has no authority to make it

In all of these cases the sessions may run smoothly, but the evidence will still be badly misaligned.

That is why I prefer to think of recruitment as a question of criteria design, not diary management.

Step one is not sending invitations. It is defining recruitment criteria.

I usually split the criteria into three layers.

1. Core fit: is this actually the kind of person this study needs?

This is the most basic layer.

For example, perhaps you need people who:

  • genuinely booked accommodation in the last three months
  • abandoned payment in the last month
  • still rely on spreadsheet-based workarounds for operational work
  • are actively evaluating alternatives but have not switched yet
  • tried your product for the first time and did not reach activation within a week

The most important principle here is to define fit through recent behaviour and actual experience, not through self-description or vague identity labels.

“Are you someone who travels often?” is usually too soft. “Have you personally completed three or more online accommodation bookings in the last six months?” is much sharper.

2. Variation: which differences need to be preserved in this round?

Not every sample should be uniform.

Depending on the question, you may need meaningful variation across factors such as:

  • novice versus experienced users
  • high-frequency versus low-frequency users
  • business accounts versus individual accounts
  • mobile-first versus desktop-first behaviour
  • decision-makers versus executors
  • urban versus non-urban contexts
  • participants with accessibility needs or assistive-technology use

This is not about ticking a neat diversity box. It is about protecting the differences that genuinely affect friction, choice, and behaviour.

3. Exclusion: who would distort the study if they were included?

This is the layer teams neglect most often.

Common exclusion candidates include:

  • people too close to the product team
  • people who have participated in too much recent research
  • enthusiasts who are interested in the topic but are not target users
  • people whose professional expertise makes them unusually sophisticated or forgiving
  • people standing in for the real role without the same responsibilities or permissions

Not everyone who is willing to join should be recruited. Willingness only tells you they are willing. It tells you nothing about whether they can answer your question.
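The three layers behave like a small filtering pipeline: core fit and exclusion apply to individuals, while variation is a property of the pool. A minimal Python sketch of that idea, where every field name and threshold is hypothetical and stands in for whatever your study actually defines:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Illustrative fields only -- a real screener defines its own.
    booked_in_last_3_months: bool   # layer 1: core fit (recent behaviour)
    on_product_team: bool           # layer 3: exclusion (too close to the team)
    recent_studies: int             # layer 3: exclusion (research fatigue)
    segment: str                    # layer 2: variation, e.g. "novice"

def core_fit(c: Candidate) -> bool:
    # Layer 1: defined by recent behaviour, not self-description.
    return c.booked_in_last_3_months

def excluded(c: Candidate) -> bool:
    # Layer 3: people who would distort the study if included.
    return c.on_product_team or c.recent_studies > 2

def eligible(c: Candidate) -> bool:
    return core_fit(c) and not excluded(c)

def balance(pool: list[Candidate], per_segment: int = 3) -> dict[str, list[Candidate]]:
    # Layer 2: variation is enforced on the pool, not on one person --
    # cap each segment so no single group dominates the round.
    slots: dict[str, list[Candidate]] = {}
    for c in filter(eligible, pool):
        bucket = slots.setdefault(c.segment, [])
        if len(bucket) < per_segment:
            bucket.append(c)
    return slots
```

The ordering matters: fit and exclusion are checked per person before any segment quota is filled, which mirrors the point above that willingness alone never earns a slot.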

Only then should you decide where to find them

There is rarely one perfect recruitment channel. I tend to think about six broad sources, each with different trade-offs.

1. Existing user lists

This is often the fastest route.

It is useful when you need:

  • active users
  • people in a known behavioural segment
  • recent completers of a specific action
  • recent cancellations, refunds, or drop-offs

The advantage is targeting precision. The risk is that you only hear from people already within reach, while missing non-users, lapsed users, switchers, or potential customers.

2. CRM, newsletters, and research opt-in pools

If you already maintain a research opt-in pool, this can save a lot of time.

But these participants are often more feedback-friendly, more research-literate, and sometimes more forgiving than your average user. For some studies that is perfectly workable. For others it softens the evidence.

3. Recruitment agencies or research panels

These become especially useful when you need:

  • members of the general public
  • particular professions or contexts
  • multiple regions
  • participants with accessibility requirements
  • harder-to-reach audiences

An agency can be very effective, but only if your brief and screener are precise. Otherwise they will simply find the wrong people very efficiently.

4. Communities, professional bodies, and third-party organisations

If your users sit within a clear social or professional context, this route can be more accurate than broad outreach. Teachers, carers, freelancers, software engineers, people using certain assistive technologies, patient groups, and specialist communities often fall into this bucket.

Still, access to a community is not the same thing as relevance. Not everyone in the community will fit your study.

5. Pop-up or intercept recruitment

If you need people who have just completed a behaviour or are currently in the relevant setting, intercept recruiting can be powerful.

For example:

  • someone who has just completed a counter-service process
  • someone currently in a library, shop, clinic, school, or office carrying out the activity you care about
  • someone who has just finished an application or purchase flow

The advantage here is freshness of context and memory. The downside is that it is poorly suited to long, sensitive, or highly complex sessions.

6. Internal users

If you are working on internal tooling, your colleagues may indeed be real users. If you are working on public or customer-facing products, however, internal staff are rarely a safe stand-in. Their knowledge, motivations, vocabulary, and tolerance for friction are often too different.

A screener is not an admin form. It is your first methodological filter.

The purpose of screening questions is not to collect tidy paperwork. It is to decide whether this person belongs in the study.

I tend to use a few simple principles.

Principle 1: ask about past behaviour, not future intention

Instead of asking:

  • would you consider using a product like this
  • are you interested in travel planning research

you are usually better off asking:

  • have you arranged accommodation for yourself in the last three months
  • when did you last compare accommodation options
  • what was the last time you abandoned a booking, and why

Past behaviour is usually far more reliable than stated intention.

Principle 2: avoid making the “right” answer obvious

If your screener asks things like:

  • do you often struggle with accommodation comparison
  • do you frequently feel frustrated by poor transparency

you are effectively coaching people on how to qualify.

A good screener sketches someone’s experience profile. It should not teach them the winning lines.

Principle 3: put eligibility first, softer questions later

Confirm that the person is the right fit before gathering richer preference or attitudinal detail. Otherwise you end up gathering rich-looking data from people whose fit you never established.

Principle 4: do not rely on a single item; use a pattern

Very few people should be included or excluded based on one answer alone.

What tends to matter is the combination of:

  • how recently they performed the behaviour
  • how often they do it
  • which role they play in the process
  • which tools or workarounds they use
  • where decision authority sits
  • whether they match the variation your study needs
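Put differently, eligibility is a verdict over a pattern of signals rather than a single gate. A hedged sketch of that idea in Python, with illustrative question keys and thresholds (90 days, three bookings a year, and so on) that any real study would set for itself:

```python
def screener_decision(answers: dict) -> str:
    """Combine several signals; no single answer includes or excludes."""
    signals = 0
    if answers.get("days_since_last_booking", 9999) <= 90:   # recency
        signals += 1
    if answers.get("bookings_per_year", 0) >= 3:             # frequency
        signals += 1
    if answers.get("role") in {"decision-maker", "joint"}:   # role / authority
        signals += 1
    if answers.get("matches_needed_segment", False):         # variation fit
        signals += 1

    if signals >= 3:
        return "include"
    if signals == 2:
        return "borderline"   # worth a follow-up before booking a session
    return "exclude"
```

The "borderline" branch is the useful one in practice: it forces a human look at mixed patterns instead of letting one strong answer carry a weak candidate through.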

The types of participants PMs most often recruit badly

1. Loyal power users

These people are easy to book, articulate, and usually generous with feedback. But if your question is about novices, switchers, people at risk, or high-anxiety decision-making, they are often the wrong sample.

2. People who speak well but lack real experience

Some participants are excellent at conversation and opinion, but have not actually done the thing you are studying. That kind of data feels smooth and polished, but is often misleading.

3. Proxy participants

This happens a lot in B2B and household decision contexts. The true decision-maker is absent and you speak to an executor, or the reverse. Both roles may matter, but they do not produce the same evidence and should not be treated as interchangeable.

4. Professional research participants

People who have done a great deal of research are not automatically unusable, but they may be more adept at interviews, more familiar with common task patterns, and more aware of what researchers are looking for. Depending on the question, that can distort the evidence.

Incentives, consent, and privacy

I do not think of these as legal appendices. I think of them as part of research quality.

Incentives

A sensible incentive improves show-up rates and makes it more feasible to recruit harder-to-reach participants. But the incentive should not be structured in a way that encourages people to game the screener or exaggerate fit.

Consent

Participants should understand:

  • what the research is for
  • whether audio or video will be recorded
  • how the data will be stored
  • whether they can withdraw
  • what may be quoted anonymously afterwards

Privacy

Recruitment often involves personal contact details, work background, accessibility needs, or other sensitive information. That data should not simply be dumped into a spreadsheet and forgotten. Even a lean PM-led process needs a minimum level of data-handling discipline.

A practical recruiting workflow for PMs

If you are a PM trying to run a lean round of user research yourself, I would suggest a flow like this:

  1. write the research questions before writing the invite
  2. define core fit, variation, and exclusion
  3. decide which participant groups the round genuinely needs
  4. write the screener, starting with recent behaviour and role
  5. choose the source: existing users, agencies, communities, intercepts, internal users, or a mix
  6. prepare the invitation, reminders, and a backup list
  7. be explicit about incentive, duration, format, consent, and privacy
  8. confirm eligibility again before the session so you do not discover the mismatch at the last minute

One final reminder for PMs

A lot of PMs find recruitment irritating because it lacks the glamour of insight and the rhythm of experimentation.

But research quality often turns on this exact stage.

You can have a strong guide, thoughtful facilitation, and a neat analysis framework. If you recruited the wrong people, all that competence will merely scale the wrong conclusion more elegantly.

The next piece picks up exactly where this one leaves off: how outreach, screening, incentives, and consent can become a repeatable operating system rather than a frantic last-minute scramble.