Escaping the MVP Trap: How Value Testing Saves Product Teams

1 May 2026 · 14 min read
A few reflections from this week in product work.
I realised that “Minimum Viable Product” might be one of the most misunderstood phrases in our industry. Somewhere along the way, MVP stopped being about testing ideas and started sounding like a lighter version of your final product. A stripped-down v1, scoped just enough to ship.
What we’re really trying to do is validate whether the idea we have is worth pursuing. That’s it. Not to build faster. Not to cut corners. But to answer the question:
Does this solve a real problem in a way people actually want?
If that’s the goal, then maybe we need a better term. Something that reflects the intent, not just the output.
That’s where Value Hypothesis Testing comes in.
The Problem with "Minimum Viable Product"
The phrase “Minimum Viable Product” has been around for over a decade. Most of us first heard it through The Lean Startup and adopted it as a standard part of how we build. But even now, it’s easy to see how often teams still misinterpret it.
Instead of testing whether an idea has any real value, we jump straight into building and get caught up in scoping, then design, then polishing. Before we know it, the MVP is starting to look a whole lot like a soft launch.
The thinking behind MVP isn't the real problem. Our language around it is.
Just saying "Minimum Viable Product" makes it sound like we're supposed to build something. But that's not the point. The real question is whether we're right about the value we think this thing will deliver.
By focusing on value, we force ourselves to articulate the core assumption we’re testing before writing a single line of code. By calling it a hypothesis, we remind ourselves that our ideas are unproven. And by emphasising testing, we prioritise speed and customer feedback over premature execution.
What This Article Covers
Why the term MVP often misleads teams into overbuilding.
How Value Hypothesis Testing keeps the focus on learning.
Practical steps to escape the build trap and validate ideas faster.
Misinterpretation of "Minimum"
In theory, "minimum" means the smallest possible effort to validate a hypothesis. But in practice, teams treat it as the first release of a product loading it with "just one more feature" to make it feel complete.
Example: A team building a food delivery app spends months perfecting a multi-restaurant interface before verifying if users even want the service.
Reality: The true "minimum" might have been a manual concierge test (e.g., taking orders via WhatsApp) to prove demand.
As Marty Cagan points out in Inspired, product teams often mistake “minimum” for “first draft.” That’s how you end up sinking time into ideas that haven’t earned it.
"Viable" Misleads Teams
The word "viable" suggests something fully functional. A product that "works." This pushes teams toward engineering completeness rather than learning speed.
Scenario: A SaaS team delays testing their core value proposition because the onboarding flow "isn’t polished enough."
Reality: "Viable" should mean viable for testing, not viable for scaling.

The words we use shape how we think and how we build. When we say "Minimum Viable Product," we think about building small features the user needs. But if we say "Value Hypothesis Testing" instead, the focus moves to what actually matters: learning whether our idea delivers any real value.
It's a small shift in language. But it changes three big things.
1. You get clearer about what you're really testing
Most of the time, MVPs turn into feature checklists. Teams start asking, "What's the smallest thing we can ship?" And pretty soon, everyone is back in build mode. Just with a tighter budget and a shorter timeline.
Value Hypothesis Testing turns that around. You start by asking, "What's the assumption we're trying to prove?"
Something like this:
"We believe that [our solution] will solve [this problem] for [these people], and we'll see [this metric improve]."
Here's an example.
An MVP mindset says, "Let's build a basic fitness app with a few tracking features."
A VHT mindset says, "We believe busy professionals will pay for five minute home workouts. Let's test that with a simple landing page and a waitlist."
When you frame the work as a hypothesis, you keep your options open, because you're trying to figure out what works and what doesn't.

2. It leaves room for not knowing and takes the pressure off
When we call something an MVP, it starts to feel like the real product. Even if it's early and messy, we built it. So we get attached. And that attachment makes it harder to change course, even when the signs are telling us we should.
Value Hypothesis Testing comes from a different place. We're not sure this is valuable yet. We're testing a belief, not defending a thing we made.
This gives the team permission to be wrong. And that permission creates a safe space for real learning. When you start by saying "we might be wrong," it's a lot easier to eventually get to what's right.

The Atlantic’s Paywall Gamble: How a Simple Test Revealed a Bigger Truth (2018)
By 2018, The Atlantic faced a crisis every publisher feared: ad revenue was crumbling, and the pressure to monetise was mounting. The obvious solution? A paywall. After all, they had a devoted audience—surely their readers would pay for premium journalism.
The team spent months designing the perfect subscription model: pricing tiers, metered access, seamless onboarding. Everything was locked in. But at the last second, doubt crept in.
What if this doesn’t work?
Instead of going all-in, they ran a quiet experiment, exposing only a fraction of readers to the paywall.
The data told a surprising story.
Yes, the die-hards subscribed—but most users hit the paywall and bounced. The real revelation? Readers weren’t resisting the price; they were paralysed by choice. The Atlantic had too much content, and no one knew what was worth paying for.
So they did something radical: they stopped.
Scrapping the full rollout, they shifted focus from access to clarity—sharpening recommendations, refining newsletters, and helping readers cut through the noise. When the paywall finally relaunched, it wasn’t just a gate—it was a guide. Conversions climbed. Engagement deepened.
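Mechanically, a quiet experiment like that is just a percentage rollout: bucket each reader deterministically and expose only a small slice to the paywall. The Atlantic's actual setup isn't public, so what follows is only a generic sketch in Python; the salt string and the 5% figure are illustrative.

```python
import hashlib

def in_test_group(user_id: str, rollout_pct: float, salt: str = "paywall-test") -> bool:
    """Deterministically assign a user to the test group.

    Hashing (salt + user_id) yields a stable bucket in [0, 100),
    so the same reader always sees the same experience.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10000 / 100  # bucket in [0.0, 100.0)
    return bucket < rollout_pct

# Expose the paywall to ~5% of readers; everyone else is the control group.
if in_test_group("reader-8841", rollout_pct=5.0):
    print("show paywall")
else:
    print("show free article")
```

Because the bucketing is salted and deterministic, a reader never flips between experiences mid-experiment, and a new salt gives you a fresh sample for the next test.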
3. It shifts the goal from polish to pace
The word testing keeps us honest. It reminds the team that we’re not just launching—we’re learning. So instead of asking, “What can we build?” we ask:
What’s the fastest way to know if this matters?
What’s the least we can do to get a real signal?
That shift in thinking unlocks faster, scrappier ways to validate.
Instead of building a whole marketplace, throw up a “coming soon” page and measure interest—just like Zappos did when they started by posting photos of shoes they didn’t even stock yet.
Don't code a new SaaS feature just yet; run a concierge test instead. Manually deliver the outcome to 10 users. If they love it, you've earned the right to automate it.
Speed beats polish when the goal is to learn what’s worth building in the first place.

The Psychological Shift
MVP thinking asks: “Did we ship it?”
VHT thinking asks: “Did we learn?”
This reframing reduces sunk-cost bias, encourages experimentation, and—most importantly—keeps the team’s eyes on the customer problem, not the product output.
How PMs Fall into the "Build Trap" (And How to Climb Out)
Even with the best intentions, product teams often find themselves stuck in the build trap—shipping features instead of discovering value.
Here’s why it happens and how the Value Hypothesis Testing mindset helps escape it.
1. The Seduction of "Shipping" Over Learning
Why it happens:
Cultural pressure: Organisations reward output (features shipped) over outcomes (problems solved).
False progress: Building feels productive, while testing can seem like "just talking and wasting time."
Twitter’s “Fleets” Feature (2020-2021)
Late 2020: Twitter launches Fleets. Their take on Stories—disappearing posts borrowed from Instagram and Snapchat. The idea was simple: if posts vanish in 24 hours, people might feel less self-conscious and share more. A quick fix for flat engagement. Engineers shipped it fast. Leadership clapped. Competitors? Noted.
But users didn’t care.
The team moved quickly, mistaking speed for insight. No one stopped to ask: Do Twitter users even want this? The assumption was clear—copying a popular format would drive results. But it didn’t. It was activity, not progress.
The data told the story. Most users skipped Fleets entirely. The few who used it? Already tweeting regularly. Not the new voices Twitter hoped to activate.
Instead of sparking conversation, Fleets added noise. Another feature that looked good on a roadmap but meant nothing to the people using the product.
To their credit, Twitter didn’t drag it out. Eight months later, Fleets was gone. VP of Product Ilya Brown summed it up: “This didn’t work. Back to the drawing board.”
Here’s the real lesson: If Twitter had started by asking, “Do people even want more casual ways to share?”—and tested that with small experiments or rough prototypes—they could’ve saved time, money, and a lot of engineering hours.
Instead, they went all-in on a solution before proving the problem. Expensive way to learn, but they took the hit and came back with sharper focus on what users actually cared about.
The VHT approach:
Start with a problem interview. (See The Mom Test by Rob Fitzpatrick.) Focus on validating pain, not pitching ideas or obsessing over your competitors.
Measure progress by what you’ve learned—not what you’ve launched.
Instead of:
“We shipped our MVP this sprint.”
Try:
“We invalidated our pricing assumption in two weeks.”
2. Confusing "More Features" with "More Value"
Why it happens:
Internal bias: Teams assume their preferences match customer needs.
Fear of simplicity: Stakeholders worry a barebones test will disappoint users.
The VHT approach:
Start by separating what’s essential from what’s just noise. Tools like the Kano Model can help you tell the difference between must-haves and nice-to-haves.
Then pretotype it. (Hat tip to The Right It by Alberto Savoia.) Don't build the full thing (say, a medical-records uploader); ask about past behaviour or current pain points instead (e.g., "How do you share medical records today?").
3. Mistaking Early Adopters for Market Fit
Why it happens:
Pilot users usually tolerate incomplete or rough features, creating false confidence.
Early traction can look like product-market fit—but without real usage data, it’s just wishful thinking.
Example: A B2B startup felt confident after 10 pilot customers signed up. But once real pricing kicked in, they all churned.
The VHT mindset: Start by mapping your riskiest assumptions—just like in The Lean Startup.
1. Value Risk
Question: Do users actually care? Test: Create a landing page to validate interest.
2. Feasibility Risk
Question: Can we build it? Test: Run a quick technical spike or prototype.
3. Usability Risk
Question: Can users use it? Test: Test a Figma flow or clickable prototype.
Validate each risk sequentially. If users don’t care, it doesn’t matter if it’s usable or technically feasible.
4. The Roadmap Mirage
Why it happens:
Annual plans reward certainty, not discovery.
And let’s be honest—HIPPOs (highest paid person’s opinions) often set the course.
Example: One team built out a year of features from the exec wishlist. After launch? Crickets. Low usage across the board.
The VHT mindset: Ditch rigid roadmaps. Use an opportunity tree (from Continuous Discovery Habits by Teresa Torres) to stay grounded in real problems:
Start with a business goal: “Increase conversions”
Break it down into customer problems: “Users abandon at checkout”
Test ideas surgically: “Would a 1-click upsell change that?”
Escaping the Trap: A VHT Checklist
For every idea, ask:
What exactly are we assuming? (Write the hypothesis.)
What’s the fastest way to prove/disprove it? (Design the test.)
What will we do if we’re wrong? (Define pivot/kill criteria.)
Practical Steps to Shift from MVP to Value Hypothesis Testing
The shift from MVP thinking to Value Hypothesis Testing (VHT) requires more than a terminology change—it demands a new workflow.
Here’s how to put this into practice, step by step.
Step 1: Define Your Core Value Hypothesis
Stop asking: "What’s the minimum product we can build?" Start asking: "What’s the riskiest assumption we need to validate?"
Use this template:
"We believe [target users] will [expected behavior] because [value proposition], resulting in [outcome]. Our team can deliver this because [capabilities/resources], and we'll know it's feasible when [technical/ops milestone]."
Example:
"We believe busy parents will pay £20/month for pre-packed healthy meal kits because they struggle with meal planning, resulting in 10% conversion from our landing page. Our team can deliver this because we have culinary expertise and local supplier relationships, and we'll know it's feasible when we successfully fulfill 50 test orders manually."
Step 2: Choose the Fastest Path to Validation
Tactics by Hypothesis Type:
1. Problem Existence
Tactic: Customer interviews Example Prompt: "Walk me through how you handle X."
2. Willingness to Pay
Tactic: Fake door test
Example: Show a “Get early access” button and measure clicks or signups.
3. Usability
Tactic: Rough prototype
Example: Test with hand-drawn UI flows, Figma mockups, or interactive prototypes.
4. Behaviour Change
Tactic: Concierge test
Example: Manually deliver the service as if the product exists.
Key: Always test with actual users in real-world scenarios (no proxy users!)
Example:
Instead of coding an onboarding flow for your SaaS, try sketching it out on paper or using no-code tools, then record a Loom demo.
Publish it, then link to a Typeform asking users to “complete setup.” Track the drop-off. You’ll learn more in a day than you would in a sprint.
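Measuring that drop-off doesn't need an analytics suite. Here's a rough sketch of the arithmetic, assuming you can export page views and Typeform submissions as a flat event log; the event names are made up for illustration.

```python
from collections import Counter

# Hypothetical event log exported from your landing page and Typeform.
events = [
    {"user": "u1", "event": "viewed_demo"},
    {"user": "u1", "event": "clicked_setup"},
    {"user": "u2", "event": "viewed_demo"},
    {"user": "u3", "event": "viewed_demo"},
    {"user": "u3", "event": "clicked_setup"},
    {"user": "u3", "event": "completed_setup"},
]

counts = Counter(e["event"] for e in events)
funnel = ["viewed_demo", "clicked_setup", "completed_setup"]

# Report conversion from each step of the funnel to the next.
for step, next_step in zip(funnel, funnel[1:]):
    rate = counts[next_step] / counts[step] if counts[step] else 0.0
    print(f"{step} -> {next_step}: {rate:.0%} ({counts[next_step]}/{counts[step]})")
```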
Pro tip: Ash Maurya’s Experiment Map (from Running Lean) is a gem for matching the right test to the risk you’re tackling.
Step 3: Define Clear Success Metrics (Before Testing)
If you don’t set metrics upfront, it’s easy to slip into wishful thinking—“Three users loved it, so we’re onto something!” or “It just needs more features.” That’s not learning, that’s storytelling.
Value Hypothesis Testing forces a clearer standard: define what success looks like ahead of time, so you can cut through the noise and make confident decisions.
How to Set Useful Metrics (Without Fooling Yourself)
Tie your metric to the hypothesis you're testing—not just activity.
If you're testing problem importance, look for:
% of users who bring it up unprompted
Emotional weight (e.g., “How painful is this?” rated 1–10)
If you're testing willingness to pay, track:
Conversion on a pricing page
Real intent: credit card entry via Stripe mock checkout
Ditch the vanity metrics.
❌ “1,000 visitors” means nothing without action.
✅ “10% clicked ‘Buy Now’ at £50/month” tells you something worth acting on.
Set Thresholds in Advance
Example: "If 15% of users sign up for our waitlist, we’ll build the beta. If <5%, we’ll pivot."
Use ranges for uncertainty: "5-10% = ambiguous; run another test. >10% = proceed."
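Because the thresholds are agreed before the test, the decision itself can be mechanical. A sketch using the waitlist numbers above; the cut-offs are that example's, not a universal standard.

```python
def decide(signup_rate: float) -> str:
    """Map a measured signup rate onto the pre-agreed thresholds."""
    if signup_rate >= 0.15:
        return "build the beta"
    if signup_rate > 0.10:
        return "proceed"
    if signup_rate >= 0.05:
        return "ambiguous: run another test"
    return "pivot"

print(decide(0.04))  # -> "pivot": we said <5% means pivot, and we got 4%
```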
Step 4: Run the Test — Then Stop
Here’s where teams get stuck: the endless loop of tweaking, retesting, and hoping numbers go up. Don’t fall for it.
Enforce a Clear Finish Line
Timebox the test. Set a hard deadline: "We’ll run this fake door test for 7 days—no extensions." Book the decision meeting in advance. Don’t leave the ending open-ended.
Stick to the Success Criteria
Use your pre-defined metrics to cut through the debate: "We said <5% conversion = pivot. We got 4%. We’re done."
Document Learnings
Use this simple format to document outcomes:
Experiment Summary
What we tested
We wanted to know if people would pay for AI-generated workout plans.
What happened
Only 2% of users converted. Our target was 10%.
Why it didn’t work
Most people didn’t trust a fully automated plan. They wanted advice that felt personal and tailored.
Next step
We’re pivoting to a hybrid model. The core plan will still use AI, but with input or review from a human coach.
One Mental Hack
Celebrate the kill. Frame it as a win: "We just saved 3 months of dev time and £200K. That’s a successful test."
That’s the discipline VHT demands—and the clarity it delivers.
Institutionalise Learning
Most teams still get rewarded for shipping features. But if we want to build things that matter, we need to reward learning—especially when it leads to not building.
Start with a Hypothesis Backlog
Instead of saying, “Here’s our Q3 roadmap,” shift to: “Here are the top 3 riskiest assumptions we’re testing next.”
Example:
“Freemium users will upgrade for advanced analytics.”
“SMBs want API access, not just a UI.”
“Users will complete a 2-step onboarding flow.”
Prioritise based on two things (a quick scoring sketch follows the list):
Risk — what breaks the business if we’re wrong?
Leverage — what unlocks the most value if we’re right?
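Even a crude score keeps that prioritisation honest. Here's a sketch that ranks the backlog above by risk times leverage; the 1-5 scores are entirely illustrative.

```python
# Hypothetical backlog; risk and leverage scored 1-5 by the team.
backlog = [
    {"hypothesis": "Freemium users will upgrade for advanced analytics", "risk": 5, "leverage": 4},
    {"hypothesis": "SMBs want API access, not just a UI", "risk": 3, "leverage": 5},
    {"hypothesis": "Users will complete a 2-step onboarding flow", "risk": 2, "leverage": 3},
]

# Test first whatever hurts most if we're wrong AND unlocks most if we're right.
for item in sorted(backlog, key=lambda h: h["risk"] * h["leverage"], reverse=True):
    print(f'{item["risk"] * item["leverage"]:>2}  {item["hypothesis"]}')
```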
Make Learning a Weekly Habit
Build a lightweight ritual into your team’s rhythm: a “What did we learn?” sync. Every test gets reported in a simple format:
What we tested: “Pricing page A/B – £50 vs. £99”
What we saw: “£50 had 2x conversions, but lower LTV”
What we’re doing next: “Proceed with £50, test add-on upsells”
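That £50 vs. £99 result is worth running the arithmetic on, because double the conversions doesn't automatically mean more revenue. A quick check with made-up numbers (4% vs. 2% conversion, illustrative LTVs):

```python
# Illustrative numbers only: 2x the conversions at £50, but a lower LTV.
variants = {
    "£50/month": {"conversion": 0.04, "ltv": 300},
    "£99/month": {"conversion": 0.02, "ltv": 550},
}

for name, v in variants.items():
    # Expected lifetime revenue per landing-page visitor.
    print(f"{name}: £{v['conversion'] * v['ltv']:.2f} per visitor")
# £50/month: £12.00 per visitor
# £99/month: £11.00 per visitor
```

On these numbers £50 wins, but only narrowly, which is exactly why the team pairs it with an upsell test rather than calling the question settled.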
Every quarter, run a “Killed Ideas” retro. Celebrate the ones that saved you time, money, and focus—like, “We saved £1M by not building X.”
Reward Learning, Not Just Launching
Track the right metrics for product teams:
How many risky assumptions did we test this quarter?
What % of features were validated before we built them?
Are we increasing our learning velocity over time?