Article type
Practical methods piece / Session facilitation guide
The one thing this piece is trying to say
When PMs first start running interviews, the most common mistake is not “having too few questions”.
It is treating the session as a natural conversation.
Natural conversation can feel warm and easy. But a research session is not meant to be merely pleasant. Its job is to collect analysable evidence without distorting the situation you are trying to understand.
That is why the same interview guide can produce very different-quality material depending on who runs it.
The difference usually has less to do with charisma than with very practical craft:
- how you build rapport without becoming overly affirming
- when to probe and when to stay quiet
- when to ask for a demonstration instead of another explanation
- what to notice in the field, not just what to listen for
- how to take notes without turning yourself into a transcription machine
- how observers can be present without changing the temperature of the room
So this piece is not about “becoming a better host”. It is about this:
how PMs can run research sessions as research, rather than as friendly conversations that happen to produce weak evidence.
Start by dropping a common misunderstanding: friendliness matters, but “natural” is not the highest principle
Many people reduce moderation to two goals:
- make the participant comfortable
- let the conversation flow naturally
Neither goal is wrong. But if you stop there, sessions often drift into one of two failure modes.
The first is over-accommodation. The moderator wants to keep things comfortable, so they nod too hard, complete sentences, and continually mirror the participant’s language. Before long, the participant is speaking into the moderator’s frame rather than their own.
The second is over-control. The moderator is so focused on “getting through the guide” that the session becomes a sequence of efficient prompts. The participant cooperates, but the surrounding context never really surfaces.
A steadier research session sits somewhere in the middle:
- relaxed enough for honest disclosure
- structured enough to stay purposeful
- curious without steering
- open to the participant’s language and pace
- disciplined enough not to dissolve into vagueness
A less romantic but more accurate way to describe moderation is this:
it is a structured way of helping someone open up their world without rewriting it for them.
The real job of facilitation is not to talk more, but to distort less
This is the plainest way I know to think about it:
you are not there to demonstrate understanding; you are there to reduce your own contamination of the evidence.
Once that mindset is in place, a lot of in-session decisions become clearer.
Do not summarise too early on the participant’s behalf
PMs often hear something familiar and immediately want to help by saying:
- so what you mean is …
- it sounds like the real issue was …
- so basically the flow felt too complicated, right?
That kind of question can occasionally help confirm understanding. Used too early or too often, it puts your interpretation into the participant’s mouth.
More reliable prompts usually sound like this instead:
- you said you got stuck. Where exactly did that happen?
- what was going through your mind at that point?
- you said it felt risky. What kind of risk did it feel like?
- can you take me back to that moment and walk me through it from the start?
The difference is subtle but important.
The first group translates the experience into your frame. The second asks the participant to complete their own.
Do not fill every gap; protect useful silence
New moderators often panic at silence.
But some of the richest material in research shows up a few seconds into a pause.
A participant who stops is not always empty. Often they are reconstructing the situation properly. If you rush in, you usually get a flattened, tidier, less useful version.
In research, silence is not necessarily awkwardness. Quite often it is part of the work.
The biggest difference between fieldwork and interviews is not the location; it is that context becomes visible
This is where many PMs initially underestimate fieldwork.
They think field research is just an interview moved into a home, workplace, shop floor, or real operating environment.
But the real difference is not the backdrop. It is that context suddenly becomes observable.
In a meeting room or on a video call, what you usually hear is a participant’s remembered and interpreted version of events.
In the field, you often encounter something else:
- they say the process is routine, but the desk is covered in workarounds
- they say the system is straightforward, but a colleague prompts them at every step
- they say the task only takes three minutes, but all the preparation work is scattered across the previous half hour
- they say a certain piece of information is unimportant, but keep checking it throughout the workflow
This does not mean participants are being deceptive. Many behaviours have simply become automatic and invisible to the person doing them.
So in fieldwork, do not think of yourself as “taking the interview guide to another place”.
What you are really collecting is:
- what artefacts are present
- which tools, scraps of paper, spreadsheets, chats, and improvised aids quietly hold the workflow together
- where the handoffs, waits, interruptions, and switches happen
- who patches over which failures
- which workarounds have become so normal that nobody calls them workarounds any more
What should you pay attention to in the field? I usually start with five categories
1. The tools and objects actually being used
Do not only ask which tools people use.
Look at what they truly touch.
That might include:
- browser tabs
- handwritten notes
- spreadsheets
- internal forms
- Slack or WhatsApp threads
- printed procedures
- personal cheat sheets
Real workflows are often less like the tidy arrow in a product diagram and more like a stitched-together patchwork.
2. Switching costs
A lot of friction is not that a single step is hard. It is that the user is constantly switching.
- between phone and desktop
- between system and paper
- between search, comparison, asking, confirming, and waiting
- between front-stage and back-stage tools
That sort of friction is not always obvious in analytics, but it is often painfully visible in the room.
3. How exceptions get absorbed
People are usually happy to describe the normal flow.
The deeper insight often sits in the exceptions:
- what happens when something fails
- how people improvise when information is missing
- who they ask when permissions are insufficient
- how complaints or unusual cases are handled
A product’s maturity often shows up in how exception cases are absorbed.
Does the system absorb them, or do people absorb them through experience, anxiety, and manual effort?
4. The participant’s own language and categories
How users name things matters.
“Leave it there for now” and “save it” may not mean the same thing.
“Compare a few options” may actually contain filtering, sharing, discussion, and waiting for someone else’s response.
If you translate too quickly into the product’s vocabulary, you can easily lose the participant’s mental model.
5. Time
Many problems are not about missing features. They are about fragmented time.
- waiting for someone else to reply
- needing a particular moment before feeling safe to proceed
- tasks squeezed into commuting time or meeting gaps
- steps postponed until the weekend or month-end
This temporal context is often one of the blind spots that analytics cannot show and interviews alone may oversimplify.
Note-taking is not a transcription contest; it is how you preserve the structure needed for analysis
I have seen two common note-taking failures.
The first is over-recording.
The moderator types furiously throughout the session and ends up missing expressions, pauses, gestures, and environmental cues.
The second is under-recording.
Everything feels memorable in the moment, but later all that remains is a blur of impressions.
A more workable approach is usually:
- the moderator notes only key triggers, follow-up points, and salient observations
- if someone else is helping, they can separate notes into observations, quotes, and timestamps
- if recording is possible, get consent and let the recording protect you from turning into a stenographer
- immediately after the session, create a short debrief note while the context is still warm
I find it useful to separate notes into three layers:
- Observation: what the participant did, looked at, switched to, or referenced
- Quote: representative wording in their own language
- Interpretation / open question: your tentative sense-making, clearly marked as yours
The most dangerous thing is to mix those together. Afterwards you can no longer tell what counted as evidence and what was simply your in-the-moment theory.
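If your team captures notes digitally, one lightweight way to keep the layers from blending is to tag every note with its layer the moment you write it, so evidence can later be pulled out cleanly from in-the-moment theorising. A minimal sketch in Python; the field names and layer tags here are my own illustration, not a standard template:

```python
from dataclasses import dataclass

# The three note layers described above.
LAYERS = ("observation", "quote", "interpretation")

@dataclass
class Note:
    layer: str       # one of LAYERS
    timestamp: str   # e.g. "00:04:02" into the session
    text: str

    def __post_init__(self):
        if self.layer not in LAYERS:
            raise ValueError(f"unknown layer: {self.layer}")

def evidence_only(notes):
    """Return only observations and quotes, leaving out
    the moderator's own in-the-moment interpretations."""
    return [n for n in notes if n.layer != "interpretation"]

notes = [
    Note("observation", "00:03:10",
         "Opened a personal spreadsheet before the official form"),
    Note("quote", "00:04:02",
         "I leave it there for now and come back after lunch"),
    Note("interpretation", "00:04:30",
         "Maybe the form feels risky to submit early?"),
]

print(len(evidence_only(notes)))  # 2
```

The point of the tag is not tooling sophistication; it is that the separation happens at capture time, when you still know which layer a line belongs to.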
Observers are not just extra eyes; they are also an extra risk
An observer does not automatically improve research quality.
Sometimes they make it worse.
The common observer problems are predictable:
- visible reactions that influence the room
- interruptions or stakeholder questions during the session
- selective memory afterwards, where only evidence supporting an existing view is retained
So if observers are present, it helps to set the rules before the session starts:
- who is moderating
- who is taking notes
- observers do not intervene
- questions wait until debrief
- notes should focus on observations, not solution ideas
That is how observers become evidence multipliers rather than contamination sources.
A solid session usually does three things before it ends
1. It grounds itself in at least one concrete episode
If a session ends with “overall it was fine” or “most of it worked”, that is rarely enough.
I usually want to leave with one of these:
- the most recent time
- the most difficult time
- the moment they almost gave up
- the last abandoned attempt
That is what makes the later analysis useful.
2. It circles back to clarify what the room revealed
For example:
- you kept checking that sheet. What role does it usually play?
- I noticed you switched into another system there. Was that because you could not find the information here?
- you mentioned waiting for approval. How long does that normally take?
These are often the most valuable follow-ups in fieldwork.
3. It is followed by an immediate debrief
Treat the debrief as part of the research, not as optional admin.
After a field session, details cool down very quickly.
I usually spend 10 to 15 minutes capturing three things:
- what I am most confident I observed
- what I still cannot conclude
- what I need to ask differently next time
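If it helps to make the debrief routine rather than optional, the three prompts can live in a tiny template you fill in straight after each session. A sketch; the structure and function name are mine, not a standard format:

```python
# The three post-session debrief prompts, answered in plain text
# while the context is still warm.
DEBRIEF_PROMPTS = [
    "What am I most confident I observed?",
    "What can I still not conclude?",
    "What do I need to ask differently next time?",
]

def debrief_template(session_id: str) -> str:
    """Render an empty debrief note for one session."""
    lines = [f"Debrief - session {session_id}", ""]
    for prompt in DEBRIEF_PROMPTS:
        lines.append(prompt)
        lines.append("- ")
        lines.append("")
    return "\n".join(lines)

print(debrief_template("P07"))
```

A fixed template has a side benefit: when every session produces the same three headings, gaps in the evidence are visible across sessions, not just within one.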
This is not only about organisation. It is also about slowing down your certainty before it hardens too soon.
When should PMs not run the session themselves?
This is not an argument that PMs should become universal researchers.
There are situations where a PM-led session is especially risky:
- the PM is the strongest internal defender of the concept being discussed
- the topic is sensitive enough that participants may not speak freely
- the method requires stronger moderation craft than the team currently has
- the political pressure in the room is high
- the PM is already too invested in validating a particular hypothesis
In those cases, it is often wiser for a UXR to moderate while the PM participates as an observer.
So the real claim here is not “PMs should do everything themselves”.
It is this:
if PMs are going to enter the research room, they should at least know how not to bend the evidence out of shape.
Closing thought
The most valuable thing in interviews and fieldwork is not the memorable quote.
It is the piece of context that survived the session without being flattened into your script.
That is what good facilitation protects.
In the next piece, I will move one step further down the workflow and talk about the part many teams underestimate even more:
how PMs turn transcripts, notes, recordings, and observations into real analysis instead of a handful of dramatic quotes and a slide deck full of impressions.