
Concept Testing on a Startup Budget

You do not need an agency-sized research budget to test your product concept. Modern synthetic panels make rigorous concept testing accessible to any team.

Most startups skip concept testing entirely. Not because they think it is unnecessary, but because the traditional version of it costs £15,000–£40,000 and takes six weeks. When you are pre-revenue with eighteen months of runway, spending a quarter of your marketing budget on research that delivers results after your launch window has closed is not a realistic option. So the concept goes untested, and the team learns what the market thinks by launching and watching what happens. This is the expensive way to do research.

What Traditional Concept Testing Actually Costs

A standard agency-run concept test involves questionnaire design, panel recruitment, fieldwork, data cleaning, analysis, and a final report. Each stage has costs and timelines. Recruitment alone can take two to three weeks for niche audiences. Analysis adds another week. The total is rarely under £12,000, and for multi-cell designs testing multiple concepts or price points, costs climb quickly toward £30,000 or more.

The hidden cost is rigidity. Once the questionnaire is fielded, you cannot change it. If your first concept scores poorly, you cannot adjust the description and retest immediately. You commission a new wave of fieldwork, adding weeks and thousands of pounds. This makes traditional concept testing a one-shot exercise for most startups, which is precisely the opposite of how early-stage product development works.

Writing a Concept That Can Actually Be Tested

Before you test anything, you need a concept description that a stranger can evaluate. This is harder than it sounds. Most concept descriptions written by founders assume knowledge the reader does not have, use jargon the audience does not recognise, or describe features rather than outcomes.

A testable concept description has four elements:

  • The problem you are solving, stated in the customer’s language, not yours.
  • The solution, described as a benefit rather than a feature. “Saves you three hours per week on meal planning” rather than “AI-powered recipe recommendation engine.”
  • The price, because purchase intent without a price is meaningless. People will say they want almost anything if it is free.
  • The context, meaning how the product fits into what the customer already does. Is it replacing something? Adding to an existing routine? Creating a new behaviour?

If you cannot write this in one clear paragraph, the concept is not ready to test. Ambiguity in the description will produce ambiguity in the results.
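If you manage concepts programmatically, the four elements above map naturally onto a small record type. This is a sketch, not a prescribed schema; the class and field names are my own, and the rendering is deliberately naive:

```python
from dataclasses import dataclass

@dataclass
class ConceptDescription:
    """One testable concept: all four elements, none optional."""
    problem: str   # stated in the customer's language, not yours
    benefit: str   # an outcome, not a feature ("saves three hours per week...")
    price: str     # purchase intent without a price is meaningless
    context: str   # replacing something? adding to a routine? new behaviour?

    def as_stimulus(self) -> str:
        """Render the single clear paragraph shown to respondents."""
        return f"{self.problem} {self.benefit} {self.context} Price: {self.price}."
```

Making all four fields required is the point: if you cannot fill one in, the concept is not ready to test.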

Choosing the Right Audience

The most common mistake in concept testing is testing against the wrong audience. “Adults aged 25–45” is not a useful target for most products. What matters is purchase behaviour: do these people currently buy in your category? Do they spend at the level your pricing assumes? Are they active in the channels where you plan to sell?

Behavioural targeting produces dramatically different results from demographic targeting. A meal kit concept tested against “health-conscious millennials” will give you one set of numbers. The same concept tested against “people who currently spend £50–£80 per week on groceries and have purchased a meal subscription in the past year” will give you a very different, and far more actionable, set of numbers.

Synthetic panels allow you to target by purchase behaviour rather than just demographics. This is one of their most significant advantages for startups, because it means your results reflect people who actually buy in your category, not people who merely fit a demographic profile.

Interpreting Purchase Intent Data

Purchase intent scores are the backbone of concept testing, but they are routinely misinterpreted. A common trap is treating the percentage who say “definitely would buy” as your expected market penetration. It is not. Stated intent always overstates actual behaviour. The industry standard is to apply a discount factor, typically counting only “definitely would buy” responses and a fraction of “probably would buy” responses.

What matters more than the absolute score is the relative performance. If Concept A scores 32% top-two-box and Concept B scores 21%, Concept A is meaningfully stronger regardless of whether either number maps directly to real conversion. Use the scores to compare options, not to forecast revenue.
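The discounting and comparison described above can be sketched in a few lines of Python. The weights here are illustrative assumptions, not fixed industry values; calibrate them against your own conversion data. The top-two-box splits echo the 32% and 21% example:

```python
def adjusted_intent(definitely: float, probably: float,
                    w_definitely: float = 0.8, w_probably: float = 0.3) -> float:
    """Discount stated purchase intent.

    Stated intent overstates behaviour, so "definitely would buy" counts
    at a high fraction and "probably would buy" at a much lower one.
    The default weights are illustrative assumptions only.
    """
    return w_definitely * definitely + w_probably * probably

# Hypothetical top-box splits behind the 32% and 21% top-two-box scores:
concept_a = {"definitely": 0.14, "probably": 0.18}
concept_b = {"definitely": 0.08, "probably": 0.13}

score_a = adjusted_intent(**concept_a)
score_b = adjusted_intent(**concept_b)

# Rank concepts by adjusted score; do not read either number as a revenue forecast.
print(f"Concept A adjusted: {score_a:.1%}")
print(f"Concept B adjusted: {score_b:.1%}")
print("Stronger concept:", "A" if score_a > score_b else "B")
```

Note that the ranking survives almost any reasonable choice of weights, which is exactly why relative comparison is safer than absolute forecasting.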

Equally important are the reasons behind the scores. Why do people say they would not buy? The objection patterns tell you what to fix. If the primary objection is price, you have a pricing problem. If it is confusion about what the product does, you have a communication problem. If it is “I already have something that does this,” you have a differentiation problem. Each requires a different response.

Iterating Based on Results

This is where the economics of synthetic research change the game for startups. With traditional methods, iteration means new fieldwork, new costs, new delays. With synthetic panels, you can rewrite your concept description, adjust the price, narrow the audience, and retest within the same session.

A productive iteration cycle looks like this: test your initial concept, read the objections, revise the description to address the most common one, and test again. If purchase intent improves, the revision worked. If it does not, the objection is about the product itself, not how you described it.

You can also iterate on audience. If your broad panel shows 22% purchase intent but one segment shows 41%, test the concept against that segment specifically. Understand what makes them different. This can reshape your entire go-to-market strategy, from a broad launch to a focused beachhead, which is often the smarter play for a startup with limited resources.
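To make the segment comparison concrete, here is a minimal sketch. The segment names and scores are hypothetical, echoing the 22%/41% split above, and the "lift over the broad panel" framing is my own shorthand:

```python
# Hypothetical per-segment top-two-box purchase intent from one test wave.
segment_intent = {
    "all respondents": 0.22,
    "bought a meal subscription in the past year": 0.41,
    "spends under £50/week on groceries": 0.17,
}

# The beachhead candidate is the segment that most outperforms the broad panel.
base = segment_intent["all respondents"]
beachhead = max(segment_intent, key=segment_intent.get)
lift = segment_intent[beachhead] / base

print(f"Beachhead: {beachhead} "
      f"({segment_intent[beachhead]:.0%}, {lift:.1f}x the broad panel)")
```

A large lift in one behavioural segment is the signal to retest against that segment specifically before reshaping the go-to-market plan around it.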

The Real Cost of Not Testing

The objection to concept testing is always cost. But the calculation is backwards. The cost of testing is a few hours and a subscription. The cost of not testing is building and launching a product that misses on price, audience, or positioning. That mistake consumes months of engineering, design, and marketing effort. For a startup, that is not a recoverable error; it is a runway problem. The cheapest research you will ever do is the research that prevents you from building something nobody wants to buy at the price you need to charge.