After Launching Dozens of Products—Here’s What Most Teams Get Wrong Before They Even Start
Tips for slowing down to speed up!
I’ve led or advised on the launch of over 30 digital products in the past two decades. From early-stage apps scrabbling for product–market fit, to established SaaS platforms plateauing after initial success, there’s one recurring pattern I see time and again:
Teams start too far down the funnel.
They dive straight into execution—sketching interfaces, writing tickets, debating technology choices—before they’ve aligned on a few crucial fundamentals:
• Who are we building for?
• What exactly is the problem we’re solving?
• Why now?
• What outcome do we want to change?
When speed is of the essence and deadlines (self-imposed or otherwise) loom, there’s an awful knee-jerk reaction to just start building. Sometimes you do need to thrash things out, but often I’ve found it’s better to slow down to speed up! Here are some of the most common pitfalls I see in early-stage product planning, based on real audits and client work. I’ll also share practical frameworks to correct course—so you can build something people actually need, not just something that looks good in Figma.
1. Starting with Features, Not the Problem
It sounds obvious. Everyone says they’re customer-centric. But when I begin an audit, the first slide of every pitch deck is usually a list of features—not user pains. The conversation is about what we’re building, not why.
This isn’t just an issue for founders. I’ve seen product managers in £200M revenue businesses kick off roadmapping sessions with Jira tickets titled “AI recommender” and “Premium tier enhancements”, without clarity on what behaviour they’re trying to change.
The result? Teams spend weeks building slick interfaces for problems that don’t matter.
Real-world example:
At one health-tech client, the team had prioritised a daily tracking feature they were convinced would boost retention. It made perfect sense logically—habit formation, daily engagement, etc. But when we interviewed users, we learned they were actively avoiding the app every day. Why? Because using it reminded them of a medical condition they were trying to forget.
We scrapped the feature. Rebuilt the experience to centre around episodic reassurance. Retention improved by 27% in six weeks.
2. The Death of Discovery
Teresa Torres, in Continuous Discovery Habits (2021), outlines a simple principle: “We make better product decisions when we engage with customers every week.” And yet, in over half the companies I work with, there’s no regular discovery practice.
Even worse, when discovery is done, it’s treated as a checkbox exercise: a few user interviews at the start of a project, maybe a survey, then back to business as usual.
This is dangerous.
What I often find is that assumptions baked into the original discovery are still driving decisions months later, even when user behaviour has shifted. I call these “zombie insights”—they’re already dead, but still walking around your roadmap.
What I do in audits:
I ask teams to show me their latest user interview notes, assumptions they’re testing, and current discovery cadence. If they can’t produce them within 10 minutes, we know discovery isn’t operationalised.
Then we build it in—lightweight, fast-turnaround methods:
• 20-minute recurring user interviews
• Assumption mapping workshops
• Simple behavioural data reviews tied to active hypotheses
The goal is not to pause work for discovery. It’s to build discovery into the way work happens. And remember, products and users don’t sit still – what worked at one point in time may well shift months, if not weeks, later. To make the last of those methods concrete, here’s what a behavioural data review tied to an active hypothesis can look like.
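Below is a minimal sketch in Python, assuming a flat event export with user_id, event_name and timestamp columns. The event name, file name and week-4 window are illustrative assumptions, not taken from any client. The hypothesis being checked: users who complete onboarding in their first session retain better at week four.

```python
# Hypothetical behavioural data review: do users who complete onboarding in
# their first session retain better at week four?
# Assumed columns (illustrative): user_id, event_name, timestamp.
import pandas as pd

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

# First-seen timestamp per user, then days since signup for every event.
first_seen = events.groupby("user_id")["timestamp"].min().rename("first_seen")
df = events.join(first_seen, on="user_id")
df["days_since_signup"] = (df["timestamp"] - df["first_seen"]).dt.days

# Cohort flag: fired the (assumed) "onboarding_complete" event on day 0.
onboarded_day_one = df.loc[
    (df["event_name"] == "onboarding_complete") & (df["days_since_signup"] == 0),
    "user_id",
].unique()

users = pd.DataFrame({"user_id": df["user_id"].unique()})
users["onboarded_day_one"] = users["user_id"].isin(onboarded_day_one)

# Week-4 retention: any activity between day 22 and day 28 (inclusive).
active_week_4 = df.loc[df["days_since_signup"].between(22, 28), "user_id"].unique()
users["retained_week_4"] = users["user_id"].isin(active_week_4)

# Compare retention between the two cohorts.
print(users.groupby("onboarded_day_one")["retained_week_4"].mean())
```
The point isn’t the pandas; it’s that the review is scoped to a named hypothesis rather than a general wander through dashboards.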
3. No Real Product Strategy
A product strategy isn’t a Gantt chart. It’s a set of prioritised, evidence-based bets that link user value to business outcomes.
And yet, many teams I work with confuse strategy with roadmaps. They’ll proudly share a 6-month feature plan, but when I ask:
• Why is this sequence the right one?
• How does this solve the core business problem right now?
• What does success look like, and how will we measure it?
They can’t say.
This isn’t their fault. In a lot of companies, strategy becomes a top-down mandate rather than a co-created lens. But the result is the same—busy teams, building without traction.
Strategy resets I’ve run during ProductMagic Growth Audits typically include:
• Revisiting the product’s core value proposition (based on real user insights)
• Mapping initiatives to actual growth levers (retention, conversion, expansion)
• Using OKRs or North Star metrics to focus execution
Case in point:
At one VC-backed SaaS company, we cut the roadmap by 50%, focused on just two growth loops (activation and referrals), and increased signups by 63% in eight weeks. The difference wasn’t better UX—it was better focus.
4. Misunderstanding the Customer
I’ll put it plainly: most companies are not clear on who their product is for.
They may have a marketing persona. They may have done a jobs-to-be-done workshop. But in practice, product decisions are often based on the founder’s gut or a composite “user” who doesn’t exist. Trust the process!
One of the first things I ask during an audit is:
“If I gave you £100 to acquire 10 new users today, who would you target?”
If the answer is vague or over-broad (“someone who likes fitness”), we have a positioning problem.
Practical fix:
We use tools like:
• JTBD interviews (see Anthony Ulwick’s Jobs to be Done: Theory to Practice)
• Customer journey mapping (with real drop-off points annotated)
• Segment-specific funnels in tools like Mixpanel or Amplitude (the underlying idea is sketched below)
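I can’t reproduce a Mixpanel or Amplitude report here, but the logic behind a segment-specific funnel is simple enough to sketch in plain Python. Everything in the snippet is an assumption for illustration: the event names, the segment column, the file name and the step order all stand in for whatever your own tracking plan defines.

```python
# Illustrative segment-specific funnel from a flat events export.
# Assumed columns (hypothetical): user_id, segment, event_name, timestamp.
# Simplification: a user counts for a step if the event exists at all,
# regardless of ordering in time.
import pandas as pd

FUNNEL_STEPS = ["signed_up", "activated", "first_purchase"]  # assumed event names

events = pd.read_csv("events.csv", parse_dates=["timestamp"])

rows = []
for segment, seg_events in events.groupby("segment"):
    remaining = set(seg_events["user_id"].unique())
    step_counts = {}
    for step in FUNNEL_STEPS:
        # Only users who reached every previous step count towards this one.
        reached = set(seg_events.loc[seg_events["event_name"] == step, "user_id"]) & remaining
        step_counts[step] = len(reached)
        remaining = reached
    rows.append({"segment": segment, **step_counts})

funnel = pd.DataFrame(rows).set_index("segment")
print(funnel)                                   # absolute counts per step
print(funnel.div(funnel["signed_up"], axis=0))  # conversion relative to signup
```
The analytics tools do this for you, of course; the value is in deciding which segments and which steps matter before you open them.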
At Healthily, where I was CPO, we had to separate three overlapping audiences: consumers, clinicians, and enterprise buyers. Each one needed a distinct message, product experience, and success metric. Lumping them together would have tanked the whole platform.
5. Avoiding Hard Conversations
Let’s be honest: a lot of product decisions are political.
Stakeholders want their pet feature. Sales wants “just one more” enterprise customisation. A senior leader makes a comment in Slack, and suddenly the roadmap is derailed. This drives me nuts. So does reacting ad hoc to daily data points rather than trends – I worked inside a place last year where every day was an insane knee-jerk flurry of noise.
These moments create unactionable work—features no one truly owns, initiatives with no clear success criteria, and sprints full of hedging.
The most powerful thing an external audit brings is objectivity. I’m not there to win the next promotion. I’m not defending a past roadmap. My job is to hold up the mirror and say:
“Here’s what’s blocking growth. And here’s what we can do about it.”
And more often than not, what’s blocking growth is a lack of hard conversations about priorities, risks, and trade-offs.
So What Should You Do Before You Start Building?
Here’s a simple checklist I give clients before any new product or feature development begins:
| ✅ | Question |
|---|---|
| 🔍 | What specific user behaviour are we trying to change? |
| 📈 | What business outcome is this linked to? |
| 👥 | Who exactly is this for—and who is it not for? |
| 🧪 | What assumptions are we making, and how will we test them? |
| 🎯 | What does success look like in measurable terms? |
| 💬 | What have users told us—directly—that supports this? |
If you can’t answer these questions clearly and confidently, stop building.
Closing Thoughts: Clarity Over Complexity
I created the ProductMagic Growth Audit to help companies slow down just enough to move fast again—but in the right direction. Not with generic “best practices”, but with real-world insight, hands-on investigation, and sharp prioritisation.
If you’re feeling stuck, or sensing that your team is sprinting without traction, it’s not about motivation. It’s probably about clarity. And that can be fixed.