Entrepreneurship · 8 min read

The Lean Startup's MVP: What It Actually Means and How to Build One This Week

"MVP" is one of the most misused terms in business. Eric Ries defined it precisely. Here's what it actually means, the mistakes that make MVPs useless, and how to design a real one.

BookSkills Team · April 24, 2026

Somewhere along the way, "MVP" got redefined. In its current popular usage, it means "a rough version of your product" or "the first thing you ship." Teams build MVPs that take six months and cost a quarter million dollars. Others call their polished v1 release an MVP because it's not quite as good as they wanted it to be.

Eric Ries, who introduced the term in The Lean Startup, meant something much more specific — and the difference matters enormously.

The Actual Definition

An MVP is not a product. It's an experiment.

Ries's definition: "the version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort."

The operative word is learning. The question an MVP answers is not "can we build this?" or "does this look good?" — it's "does our core business hypothesis hold up when exposed to reality?"

Every startup is built on a set of assumptions. Usually: that a certain type of customer has a certain problem, that they would find a certain solution valuable, and that they would pay a certain amount for it. The function of an MVP is to test the most dangerous of those assumptions as quickly and cheaply as possible — before you've invested in building the full solution.

This changes what counts as an MVP. A landing page is an MVP if it tests whether people have enough interest to click "buy." A manual service is an MVP if it tests whether customers find the solution valuable before you've automated anything. A single feature is an MVP if it's the feature that validates the core value hypothesis.

The criterion isn't "is this minimal?" — it's "does this test the riskiest assumption?"

The Three MVP Archetypes

The Concierge MVP. You deliver the product experience manually, without the technology. Airbnb's founders did this early on: they took professional photos of hosts' apartments in New York themselves, manually managed bookings, and personally handled every transaction. No platform. No automated pricing. No payment system. Just the core experience: a stranger's home, better and cheaper than a hotel. The concierge MVP tests whether the experience is valuable without building any infrastructure.

When to use it: when the riskiest assumption is about whether customers actually want the solution (not whether you can automate it).

The Wizard of Oz MVP. The customer interface looks real, but the backend is manual. A chatbot that looks automated but is actually answered by a human. A recommendation engine that looks algorithmic but is actually curated by hand. The Wizard of Oz MVP tests the customer-facing hypothesis (Is the interface compelling? Does the experience create value?) before the backend technology exists.

When to use it: when you need a realistic customer experience to test the right behavior, but building the backend would take too long.

The Landing Page MVP. You build a page describing a product and a sign-up or purchase button. You drive traffic to it. You measure what percentage of visitors take the action. You learn whether there's demand before there's a product.

Dropbox's early explainer video, demonstrating a product that didn't fully exist yet, grew the beta waitlist from 5,000 to 75,000 signups overnight. That told the founders something important: the demand was there. They then built the product.

When to use it: when the riskiest assumption is about whether demand exists, not whether the solution works.
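At its core, a landing-page test reduces to counting: visitors in, actions out. As a sketch, here is how you might read a small sample honestly, using a Wilson score interval rather than the raw percentage (all numbers are hypothetical):

```python
import math

def conversion_interval(visitors, signups, z=1.96):
    """Wilson score interval for a landing-page conversion rate (95% by default)."""
    p = signups / visitors
    denom = 1 + z**2 / visitors
    center = (p + z**2 / (2 * visitors)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / visitors + z**2 / (4 * visitors**2))
    return center - margin, center + margin

# Hypothetical example: 1,000 visitors, 38 clicked "buy"
low, high = conversion_interval(1000, 38)
print(f"conversion: 3.8%, 95% CI: {low:.1%} - {high:.1%}")
# prints: conversion: 3.8%, 95% CI: 2.8% - 5.2%
```

The interval matters because early samples are small: 38 of 1,000 visitors could plausibly mean anything from roughly 3% to 5% true demand, and the decision to invest should account for that range, not the point estimate.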

The Mistake Everyone Makes

The most common MVP mistake: building before identifying the riskiest assumption.

Here's the typical pattern. A founder has an idea. They build it. They launch it. Nobody uses it. Post-mortem: the core assumption — that people would pay for this — was never tested. They spent six months and a significant investment to learn something that a landing page would have revealed in two weeks.

The riskiest assumption is the one that, if wrong, makes the entire idea unworkable. Identifying it requires asking: what would have to be true for this to work? List every assumption. Then ask: which of these, if false, would kill the business?

That's the assumption your MVP should test.
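One lightweight way to make that ranking explicit is to score each assumption by how likely it is to be wrong and how fatal it would be if it were. The beliefs and scores below are invented for illustration, not a formal method:

```python
# Hypothetical assumption map for an imaginary invoicing product.
# "confidence" = how sure you are it's true (0-1); "impact" = how fatal if false (0-1).
assumptions = [
    {"belief": "Freelancers struggle to track invoices", "confidence": 0.8, "impact": 1.0},
    {"belief": "They would pay $15/month to automate it", "confidence": 0.3, "impact": 1.0},
    {"belief": "We can build the automation in 3 months", "confidence": 0.9, "impact": 0.5},
]

for a in assumptions:
    a["risk"] = (1 - a["confidence"]) * a["impact"]  # likely-wrong x would-kill-the-business

riskiest = max(assumptions, key=lambda a: a["risk"])
print(f'Test first: {riskiest["belief"]} (risk {riskiest["risk"]:.2f})')
```

In this invented example, willingness to pay scores highest, which is typical: founders are usually most confident about the problem and least confident about the price.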

How to Identify Your Riskiest Assumption

For most startups, the riskiest assumptions cluster in one of three areas:

Desirability: Do customers actually want this? This is tested by demand — signups, clickthroughs, pre-orders, waitlists, conversations where people ask when it will be available.

Viability: Can you make money from this? This is tested by willingness to pay — not "would you use this if it were free?" but "would you pay $X for this?" The gap between those two questions destroys more startups than any technical problem.

Feasibility: Can you actually build this? For most software startups, feasibility is not the riskiest assumption — modern tools make most software buildable. But for hardware, biotech, or regulation-dependent businesses, feasibility is often the riskiest assumption, and the right one to test first.

The concierge MVP is especially powerful because it often simultaneously tests desirability and viability. If someone pays you to manually do the thing your product will automate, you've validated both.

What Good Validated Learning Looks Like

Validated learning is not: "we got positive feedback in user interviews." Verbal feedback is systematically unreliable — people say they'd use something when asked, then don't use it when it's available. (This is sometimes called "the mom test problem" — people who like you will give you encouraging but useless feedback.)

Validated learning is: a behavior you observed. Purchases. Signups with email. Return usage. Referrals. Data that doesn't depend on someone's stated preference.

Ries is explicit about this. The output of an MVP isn't a report or a set of interview notes — it's a behavioral data point that confirms or disconfirms a specific hypothesis.

The Build-Measure-Learn Loop

The Lean Startup's core framework is a loop: Build → Measure → Learn → (repeat or pivot).

Build: the minimum experiment that will test your assumption. Measure: capture the behavioral data. Learn: update your belief about the assumption based on the data.

If the data confirms the assumption, you invest more. If it disconfirms, you face the pivot-or-persevere decision: change something fundamental about the hypothesis, or stay the course armed with better information. The goal is to run this loop as quickly as possible, which is why the MVP should be minimum — speed through the loop matters more than the polish of any individual experiment.

The startup that runs 20 Build-Measure-Learn cycles before running out of runway has a higher chance of finding product-market fit than the startup that builds one perfect product.
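The loop above can be sketched as a simple decision procedure. Everything here is illustrative: the threshold, the fake experiment, and the idea that a pivot is just "try again" are stand-ins for judgment calls the framework leaves to you:

```python
# Illustrative sketch of Build-Measure-Learn as a loop (hypothetical numbers throughout).

def build_measure_learn(run_experiment, success_threshold, max_cycles=20):
    """Run experiments until the hypothesis is confirmed or runway (cycles) runs out."""
    for cycle in range(1, max_cycles + 1):
        result = run_experiment(cycle)       # Build + Measure: behavioral data only
        if result >= success_threshold:      # Learn: assumption confirmed
            return f"persevere after cycle {cycle} (result {result:.1%})"
        # Learn: disconfirmed — change something significant and run again
        print(f"cycle {cycle}: {result:.1%} < {success_threshold:.1%}, pivoting")
    return "out of runway: rethink the core hypothesis"

# Fake experiment whose conversion improves as pivots accumulate
print(build_measure_learn(lambda c: 0.02 * c, success_threshold=0.05))
```

The point of the sketch is the structure, not the numbers: each cycle produces one behavioral data point, and the only decision it feeds is persevere, pivot, or stop.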

Running the Lean Startup Framework with AI

The hardest part of the Lean Startup method isn't understanding it — it's the structured thinking required to identify your riskiest assumption before you start building, and to design an experiment that genuinely tests it (rather than confirming what you want to believe).

The Lean Startup BookSkill has an /mvp-designer command that walks you through assumption mapping — what you believe, which beliefs are riskiest, and what kind of MVP tests which assumption. The /experiment-plan command helps you design the specific experiment: what you'll build, what you'll measure, and what result would count as confirmation vs. disconfirmation.

You have an idea. The question is what you need to learn to know whether it's worth building. That's the question an MVP is designed to answer.


Ready to design your MVP? The Lean Startup BookSkill walks you through assumption mapping and experiment design so you learn before you build.