A founder at Startup Burgenland asked me how to price her online course. She had surveyed her email list. “Most people said they’d pay EUR 30-50,” she told me, looking at her spreadsheet with quiet confidence.
I asked her to run a different test. She put up two landing pages with identical copy. One priced the course at EUR 49. The other at EUR 97. She split her traffic 50/50.
The EUR 49 page converted at 3.1%. The EUR 97 page converted at 2.8%.
Nearly the same conversion rate — at double the price. She had been about to leave half her revenue on the table because of a survey.
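The arithmetic behind that conclusion is worth making explicit. A quick sketch — the prices and conversion rates are the ones from the story; the variable names are mine:

```python
# Revenue per visitor for the two test pages (numbers from the story above)
price_a, conv_a = 49, 0.031   # EUR 49 page, 3.1% conversion
price_b, conv_b = 97, 0.028   # EUR 97 page, 2.8% conversion

rpv_a = price_a * conv_a      # EUR 1.519 per visitor
rpv_b = price_b * conv_b      # EUR 2.716 per visitor

uplift = rpv_b / rpv_a - 1    # roughly 0.79, i.e. ~79% more revenue per visitor
```

Revenue per visitor, not conversion rate, is the number that settles a price test: the higher-priced page earned almost 80% more from the same traffic.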
This is the core problem with pricing research: people are terrible at predicting what they will pay. They anchor to the lowest reasonable number, tell you that number, and then behave completely differently when faced with a real purchase decision. Testing price sensitivity requires putting real prices in front of real people and measuring real behavior.
Why Asking “What Would You Pay?” Fails
The question “What would you pay for this?” triggers a specific cognitive process that has nothing to do with how purchasing actually works.
When someone considers a hypothetical purchase, they scan for the lowest price they have seen for anything similar. They do not evaluate the value to them. They do not weigh the problem’s severity. They find an anchor — usually the cheapest comparable thing they can think of — and adjust slightly upward.
If your product is an online course, they anchor to the last online course they saw advertised, which was probably on sale for EUR 19.99. If your product is a SaaS tool, they anchor to the freemium tools they currently use. Their answer reflects the market’s pricing floor, not your product’s actual value.
Surveys lie about this more than almost anything else. In a survey context, there is zero consequence to stating a low number. No pain, no commitment, no trade-off. The response is fiction.
In a real purchase, the calculus is entirely different. The buyer weighs the price against the pain of the problem, the perceived quality of the solution, the trust they have in the seller, and the opportunity cost of spending that money elsewhere. None of these factors exist in a survey.
The Van Westendorp Method (and When to Ignore It)
The Van Westendorp Price Sensitivity Meter is a structured approach to pricing research that asks four questions:
- At what price would this be so cheap you would doubt its quality?
- At what price is this a bargain — a great buy for the money?
- At what price is this starting to get expensive but you would still consider it?
- At what price is this too expensive to consider?
The intersection points of these curves give you an “acceptable price range.” Marketing researchers love this method because it produces clean charts and precise-looking numbers.
For established products with known competitors and stable markets, Van Westendorp can be useful. For new products, especially from new brands, it is misleading. People cannot calibrate “too cheap” or “too expensive” for something they have never seen before. They anchor to whatever comes to mind, which is usually wrong.
If you use Van Westendorp at all, use it as a starting point — a rough range. Then validate that range with actual purchase behavior. The chart is a hypothesis. Revenue is proof.
Five Methods That Actually Work
Here are the pricing tests I use with the founders I work with. The first requires the most traffic but produces the most reliable data; the later ones work even before you have an audience.
Method 1: The Split-Page Test
Create two or three versions of your offer page, identical in every way except the price. Split your traffic between them. Measure conversion rates.
This is what my founder did with her EUR 49 and EUR 97 pages. The result was clear: the market would bear the higher price with minimal conversion loss. Her revenue per visitor rose by almost 80%.
You need enough traffic to make this meaningful — at least a few hundred visitors per page version, and what actually matters is the number of conversions, not raw visits, so low-conversion pages need more traffic. Below that, the results are noise. If you do not have that kind of traffic yet, use the methods below.
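If you want a rough check on whether a split-test difference is real or noise, a standard two-proportion z-test needs only the standard library. A minimal sketch — the function name is mine, and the 1,000-visitors-per-page sample size is assumed for illustration:

```python
import math

def two_proportion_z_test(buys_a, visitors_a, buys_b, visitors_b):
    """Two-sided z-test for the difference between two conversion rates.

    Returns (z, p_value). A p_value well above 0.05 means the observed
    difference in conversion could easily be noise at this sample size."""
    p_a = buys_a / visitors_a
    p_b = buys_b / visitors_b
    pooled = (buys_a + buys_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# At an assumed 1,000 visitors per page, 3.1% vs 2.8% conversion is
# statistically indistinguishable -- which is exactly the point:
# conversion held steady while the price doubled.
z, p = two_proportion_z_test(31, 1000, 28, 1000)
```

When the conversion difference is noise but the price difference is large, revenue per visitor decides the question, not the conversion rate.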
Method 2: The Conversation Close
During customer interviews, after you have explored the problem thoroughly, say: “I’m building something that would [specific outcome]. If I had it ready next week, would you pay EUR [price] for it?”
Then be silent. Watch their face. Listen to their response.
The words matter less than the reaction speed. An immediate “yes” with no hesitation is a green light. A pause followed by “yeah, probably” is a yellow. A long pause followed by “it depends on…” is a red.
Do this with ten people at three different price points. The price where you get the most instant-yes responses is your starting point.
Method 3: The Tiered Pre-Sale
Create three versions of your offer at three price points. Basic, Standard, Premium. Each adds something — more content, faster access, personal support, whatever makes sense for your product.
Run a smoke test with all three tiers visible. Measure which tier gets the most purchases.
The distribution tells you about your market’s price sensitivity. If 80% choose Basic, you are in a price-sensitive market. If the majority choose Standard or Premium, your audience values quality over savings. If nobody buys at all, you have a positioning problem, not a pricing problem.
Method 4: The Anchor Test
Show different groups of people the same offer with different context. Group A sees your product positioned against premium competitors: “Unlike [expensive competitor] at EUR 299/month, our solution is EUR 79/month.” Group B sees it positioned against budget options: “Our premium solution is EUR 79/month — because quality matters.”
Same price. Different framing. The conversion difference tells you how your market perceives value. If the “budget compared to premium” framing works better, your buyers are comparison shoppers. If the “premium because quality” framing works, your buyers are value-driven.
Method 5: The Price Ladder
Start high and work down. This is counterintuitive but powerful.
Launch at the highest price you think the market might bear. If people buy, you found your price — or possibly your floor, since you might be able to go higher. If people do not buy, lower the price by 20% and test again.
Starting high and reducing is better than starting low and raising, for two reasons. First, lowering a price feels like a promotion, while raising one feels like a punishment. Second, revenue at high margins gives you more room to invest in growth, support, and product quality.
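The ladder itself is a simple procedure. A sketch — the 20% step is from the text above; the starting price, floor, and function name are illustrative:

```python
def price_ladder(start, floor, step=0.20):
    """Descending test ladder: start high, drop the price by `step`
    (20% by default) after each round with no buyers, never going
    below `floor`, the minimum price the economics allow."""
    prices = [start]
    while prices[-1] * (1 - step) >= floor:
        prices.append(round(prices[-1] * (1 - step), 2))
    return prices

rungs = price_ladder(197, 49)   # e.g. 197, 157.60, 126.08, ...
```

You stop descending the moment people buy; that rung is your working price, and possibly just your floor.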
The Three Numbers You Need
Price testing produces one number: what people will pay. But you need three numbers to make a pricing decision.
What the market will bear. This is what your tests reveal. The price at which enough people buy to sustain the business.
What your economics require. This is your cost structure. If your product costs EUR 15 to deliver, you cannot sustain a EUR 19 price point. The margin is too thin for marketing, support, refunds, and your own salary. Calculate your minimum viable price — the lowest price at which the business model works.
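One way to make that calculation concrete is to treat overhead and profit as shares of the price. A sketch of a minimum-viable-price check — the 40% overhead share and 20% target margin are assumed illustrative numbers, not universal ones:

```python
def minimum_viable_price(unit_cost, overhead_rate=0.40, target_margin=0.20):
    """Lowest price at which the unit economics work.

    overhead_rate: assumed share of the price eaten by marketing,
    support, refunds, and payment fees (illustrative, not universal).
    target_margin: profit you want left over, as a share of the price.

    The price must cover unit_cost + overhead_rate*price + target_margin*price,
    so: price * (1 - overhead_rate - target_margin) >= unit_cost."""
    room = 1 - overhead_rate - target_margin
    if room <= 0:
        raise ValueError("overhead plus margin leave no room for cost")
    return unit_cost / room

mvp = minimum_viable_price(15)   # about EUR 37.50 under these assumptions
```

Under these assumed shares, a EUR 15 delivery cost needs a price around EUR 37.50 — well above the EUR 19 that looks tempting, which is the point of the paragraph above.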
What your positioning demands. If you are building a premium brand, a low price undermines your positioning. If you are building an accessible product for beginners, a high price creates a barrier. Your price communicates something about who you are and who you serve.
The right price sits at the intersection of all three: what the market will pay, what your economics need, and what your brand promises.
Common Pricing Mistakes First-Time Founders Make
Pricing based on cost. “My materials cost EUR 8, so I’ll charge EUR 20.” This is how you end up working for less than minimum wage. Price based on value to the customer, not cost to you. If your product saves someone 10 hours per month and their time is worth EUR 50 per hour, the value is EUR 500 per month. Your cost of production is irrelevant to them.
Pricing based on competitors. “My competitor charges EUR 29, so I’ll charge EUR 25.” You just told the market you are the cheap version. Every time your competitor lowers their price, you have to lower yours. You are in a race to the bottom, and the bottom has no profit.
Offering only one price. A single price forces a yes/no decision. Three tiers — a decoy low tier, a designed-to-sell middle tier, and a premium tier — convert better because they shift the question from “should I buy?” to “which should I buy?”
Discounting too early. Discounts train your market to wait for sales. Once you start, stopping is nearly impossible. Use discounts sparingly and only for specific, time-limited purposes — early bird pricing for a launch, not “because we need more sales.”
When to Test Again
Your first price is not your final price. It is a starting point.
Test again when:
- You have significantly improved the product
- You are entering a new customer segment
- Your costs have changed meaningfully
- Competitors have shifted market expectations
- You have been at the same price for more than six months without evaluating
The iteration cycle applies to pricing just as it does to product development. Ship a price. Measure the result. Learn from the data. Adjust. Repeat.
The founder who started at EUR 97 eventually raised to EUR 147 after adding a community component. Her conversion rate dipped slightly. Her revenue per sale increased 51%. She tested. She learned. She acted.
That is the whole system. Stop asking people what they would pay. Start showing them prices and measuring what they do.
The market does not care about your pricing research. It cares about your price. Test that.