Surfacing the Quiet Signals

A lightweight system for regular user insight

Some users give feedback freely.

Others don’t.

Not because they don’t have opinions,

but because the room wasn’t built for them.

How do you design a feedback loop that makes space for them?

Not once. But every sprint.

The Problem

Most feedback comes from the loudest channels, or from PMs collecting secondhand sentiment. But beneath that surface there is always a quieter group: the thoughtful, the hesitant, the too-busy, the too-introverted.

User contact can be scattered and reactive, dependent on feature rollouts or support escalations. Often there is no structure, no rhythm and, crucially, no space to build relationships with users over time.

TL;DR

Goal: Build a lightweight, repeatable discovery rhythm that captures user feedback and feeds directly into sprint decisions.

My role: Set up and ran the system end-to-end (research ops + facilitation), created templates/scripts, coached teammates (incl. devs) to participate and run sessions.

Methods: “User Touchpoints” cadence, Microsoft Bookings for scheduling, defined session formats (feature interview / feedback / open), debrief templates, weekly Hotjar watch parties.

Outcome: A sustainable continuous-discovery loop that made user insight visible, shareable, and actionable across the team.

  • Consistent cadence: regular touchpoints became “how we work,” not an occasional research event.
  • Faster alignment: fewer opinion loops, because the team sees the same evidence together.
  • More actionable delivery: clearer acceptance criteria + immediate sprint actions driven by observed friction.
  • Shared ownership: more non-designers involved in discovery, increasing buy-in and follow-through.

The New System: From Ad-Hoc to Intentional

I introduced a lightweight, repeating structure called User Touchpoints. Inspired by the principles of Continuous Discovery Habits, the goal was to build a sustainable rhythm of small, frequent conversations.

We used Microsoft Bookings to create a low-barrier scheduling system. Users could self-select a session that fit their time zone and calendar. Sessions were listed with clear purposes and open slots, giving users full autonomy over their involvement.

Sessions were short and focused, run directly by designers or developers, often 1:1 or in small pairs.

Short-Term vs. Long-Term Results

Naturally, introducing this kind of system requires some strategy. It takes deliberate effort; you can’t just share a link and expect people to open up and come back. It needs consistent visibility and light facilitation to build trust. But once the team starts acting on what we hear, and users can see their input reflected in the product, momentum builds. Participation shifts from a one-off chat to an ongoing relationship: users return, they bring others, and the feedback becomes richer over time. The long-term value is a steady give-and-take between users and the product team, grounded in repeated conversations and visible follow-through.

Each touchpoint followed one of three session formats, each with its own script:

  • Feature Interviews: early-stage ideas and upcoming tools
  • Feature and Design Feedback: existing features, mid-stage visuals and interaction flows
  • Open Sessions: open-ended questions and observation, often focused on workflows or pain points

Making It a Team Practice

This wasn’t a UX-only effort. As momentum grew, the structure created a safe, guided space for developers and other team members to participate directly as well. Initial sessions were run by design (me), but over time, developers began joining in and eventually running their own. Support was provided in the form of:

  • Scheduling templates
  • Conversation prompts
  • Debrief guides for capturing and sharing learnings


Instead of asking developers to read research summaries, this approach gave them direct exposure to nuance and context. It changed the way features were discussed and implemented.

BUT WAIT — THERE IS MORE!

Weekly Hotjar Watch Parties 🎉

Hotjar watch parties are a simple weekly ritual: we get in a room, put a handful of real user recordings on the screen, eat chippies and watch together. There’s always a theme, usually something from the current sprint or a known pain point, and occasionally it’s sparked by a recording so unexpectedly rough it earns an immediate “we need to see this together.” We opted for Hotjar, but the same approach should work with any behavioral-data tool that shows real user interactions.

Why We Started Doing This

In the same way User Touchpoints helped us move from ad-hoc feedback to intentional conversation, watch parties helped us move from ad-hoc observation to intentional shared understanding. Touchpoints are where users tell us what they’re trying to do, what matters to them, and where they feel stuck. Watch parties are where we see what happens when they actually do it: quiet friction, unspoken mental models, and the “I’ll just… deal with it” moments that never become support tickets.

It closes a common gap: the thoughtful, articulate feedback from Touchpoints is invaluable, but it’s still only part of the picture. Watching behavior adds the missing texture: what people hesitate on, what they misinterpret, and what they work around without ever reporting it.

The Vibe Matters

This isn’t a formal research ceremony. It’s a calendar invite, a room booked on a day we’re all in the office, a bowl of chips, and anyone from the team who’s interested. Not a meeting. The conversation always stays curious.

Because we’re watching as a group, patterns become obvious fast. We often end up discussing things like responsiveness that breaks on smaller screens, inconsistent behavior across similar components, and workflow loopholes. It’s the kind of “it technically works, but it feels bad” UX that users learn to tolerate… until it becomes the reason they stop trusting the product.

Short term, it creates alignment without the debate. Instead of relaying findings secondhand, we’re all looking at the same moment, at the same time. That makes it easier to agree on what’s broken, what’s merely annoying, and what’s worth fixing now. The output tends to be very practical: clearer problem framing, sharper acceptance criteria and a small set of actions we can take straight into the sprint.

Long term, it’s about building a culture of steady user exposure. The hope is fewer “surprise” usability issues, stronger shared mental models across design and engineering, and more confidence in prioritisation because the evidence is visible. Over time, it should make our product instincts sharper and our quality bar higher, simply because we keep returning to the same grounding question: what is it actually like to use this?

Good research is consistent.

We stopped asking: “Do we have time for user research this sprint?”
and started asking: “What kind are we doing this week?”
