The feedback fallacy

Brian Swift

CEO, Twine

After leading product at several high-growth SaaS companies and advising even more, I’ve noticed a pattern that costs companies millions in wasted effort and missed opportunities. I call it the feedback fallacy – the illusion that with just enough customer data, organized just the right way, product decisions will become obvious and irrefutable.

The perfect system I built (and later deleted)

In a VP Product role, I inherited what I thought was a mess. Customer feedback scattered across Slack, Jira, email, and our support platform. No unified tracking system, no clear prioritization framework. “Amateur hour,” I thought.

Within months, I had architected what I considered a significant improvement.

Our unified feedback engine pulled from seven data sources, including several manual forms from our teams. Every item categorized. We weighted feedback by customer segment, ARR, renewal upside, and NPS score. Our scoring algorithm incorporated 10+ factors, including business impact, technical complexity, and strategic alignment.

Every month, this analysis would generate a pristine priority ranking that, in theory, represented objective truth about what we should build next. The product team called it “a game-changer.”
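
To make the mechanics concrete, here is roughly what that monthly ranking boiled down to – a hypothetical sketch in Python, with made-up field names and weights rather than our actual factors:

  # Hypothetical sketch of a weighted feedback-scoring formula.
  # Factor names and weights are illustrative, not the real system.
  def score_feedback_item(item: dict) -> float:
      weights = {
          "segment_fit": 0.20,       # match to target customer segment (0-1)
          "arr_normalized": 0.25,    # account ARR, scaled to 0-1
          "renewal_upside": 0.20,    # expected renewal impact (0-1)
          "nps_normalized": 0.10,    # requester's NPS, scaled to 0-1
          "strategic_alignment": 0.15,
          "ease_of_build": 0.10,     # inverse of technical complexity
      }
      return sum(w * item.get(factor, 0.0) for factor, w in weights.items())

  # The monthly "priority ranking": every request reduced to one number.
  feedback_items = [
      {"title": "SSO", "segment_fit": 0.9, "arr_normalized": 0.8, "renewal_upside": 0.7},
      {"title": "Audit log", "segment_fit": 0.4, "arr_normalized": 0.3, "nps_normalized": 0.9},
  ]
  ranked = sorted(feedback_items, key=score_feedback_item, reverse=True)
  print([item["title"] for item in ranked])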

There was just one problem: it was a colossal waste of time.

Six months later, despite our perfect system, we released features that flopped. How? Our sophisticated system had prioritized one feature because it represented 24% of all feedback. But the scoring algorithm missed crucial context that couldn’t be quantified: those customers were asking because a competitor had it, and none of them actually used it once we shipped it.

This painful, expensive failure forced me to confront an uncomfortable truth. I had spent more time perfecting the feedback system than understanding the underlying customer problems.

The illusion of objectivity

The true cost of my elaborate system wasn’t just my team’s time – it was the false confidence it created.

When our executive team questioned the failed feature, I pointed to the dashboard: “The data told us to build this.” This wasn’t just deflection; I genuinely believed our system was “correct.” The more sophisticated our weighting algorithms became, the more I trusted them over human judgment – and the more wrong I was.

I’ve since recognized this pattern across dozens of product organizations:

  1. Team builds increasingly complex feedback system
  2. System creates illusion of objective, data-driven decisions
  3. Decision-makers stop questioning the outputs
  4. Crucial context and nuance get filtered out
  5. Team builds wrong things with absolute confidence

The biggest irony? Products built to solve actual customer problems often came from insights our system never captured – a passing comment in a customer call, an offhand remark from a churned user, a pattern noticed by an attentive CSM.

What the best teams do differently

After my expensive lesson, I studied how the most successful product organizations I knew approached customer feedback.

One company had no formal feedback tracking system at all. Instead, they required every product team member to speak with at least five customers per week. Their CPO told me: “We value living, ongoing relationships over internal process.”

Another consistently outperforming team used a simple two-page document for each potential initiative, with three sections:

  • Problem statement (in customer’s words)
  • Evidence (direct quotes, not counts)
  • Why now? (market timing, strategic context)

When I asked about prioritization frameworks, their Head of Product said: “We hire for judgment, not spreadsheet skills.”

The five practices that actually work

I’ve rebuilt my approach from the ground up. These five practices have served me well:

  1. Embed in customer reality, don’t abstract it. Replace feedback tracking busywork with direct, frequent customer conversations. Have everyone participate, not just researchers or PMs.
  2. Collect evidence, not statistics. A single customer articulating a problem clearly is worth more than fifty +1s in a feature request column. Gather rich, specific evidence rather than sanitized data points.
  3. Discuss and debate directly, don’t automate judgment. The most valuable insights emerge when teams examine raw feedback together and apply their diverse perspectives. No algorithm can replace thoughtful human discussion.
  4. Use speed as a competitive advantage. While competitors build elaborate feedback systems, you can be shipping solutions. Act decisively on clear patterns without perfect validation.
  5. Judge the outcome, not the process. The only meaningful metric for any feedback system is whether it leads to products customers value. Anything else is vanity.

Unlearning the perfect for the effective

Dismantling my elaborate system wasn’t easy. I’d invested my ego in it. My team had spent hundreds of hours building it. Leaders had praised it.

But the evidence was undeniable: teams moving faster, building better products, and creating more customer value were spending less time on feedback systems, not more.

The first step to avoiding the feedback fallacy is always the same: accepting that customer understanding is a continuous human practice, not a data problem to be solved.

Your customers don’t need you to classify their feedback perfectly. They need you to understand their problems deeply and solve them quickly. Everything else is just busywork.
