Rensis

Why 100 Survey Responses Beat 10,000 — If You Ask the Right People

The obsession with large sample sizes misses the point. Precision targeting based on real purchase behaviour produces better insights from 100 respondents than 10,000 random ones.

If 1,000 survey responses are good, 10,000 must be better. This assumption drives companies to spend months and tens of thousands of pounds chasing massive sample sizes. It is also mostly wrong, at least for the kind of research product teams actually need.

One hundred well-targeted responses from people with verified purchase history in your category will outperform 10,000 responses from a generic panel for almost any product decision. The difference is not volume. It is whether your respondents have any relationship to the thing you are asking about.

The Myth of the Large Sample

The obsession with large samples comes from a reasonable place: sampling theory. Larger samples reduce sampling error. A survey of 10,000 people has a margin of error of roughly ±1%, compared to ±10% for 100 respondents, assuming simple random sampling from a well-defined population.
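Those margin-of-error figures come from the standard formula for a proportion at 95% confidence, taking the worst case p = 0.5. A quick sketch:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"n=10,000: ±{margin_of_error(10_000):.1%}")  # ±1.0%
print(f"n=100:    ±{margin_of_error(100):.1%}")     # ±9.8%
```

Note that precision improves with the square root of n: a 100× larger sample buys only a 10× tighter interval.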

But margin of error only measures precision: how consistently you would get the same answer if you ran the survey again. It says nothing about accuracy: whether you are measuring the right thing in the first place. A survey with ±1% error that asks the wrong people is precisely wrong. A survey with ±10% error that asks the right people is approximately right.

If you are launching a premium dog food subscription, responses from 10,000 random adults are nearly worthless. Most do not own dogs. Of those who do, most do not buy premium. Of those who do, most have never considered a subscription. You are drawing conclusions from people who have no relationship to your actual customer.

Past Behaviour Predicts Future Behaviour

The strongest predictor of what someone will buy next is what they have already bought, in your category, at your price point. Not demographics alone. Not stated intent. Not attitudes. This is one of the most replicated findings in consumer psychology, and it is the reason targeting matters more than volume.

A panel of 100 respondents with verified purchase history in your product category will give you dramatically more predictive data than 10,000 respondents selected at random. The 100 are reasoning from experience. The 10,000 are speculating about a category they may never have spent money in.

Consider two surveys for a new £40/month meal kit service:

  • Survey A: 10,000 general population respondents. 42% say they would “probably” or “definitely” subscribe. This number is nearly useless; stated intent from people outside the category routinely overstates actual conversion by a large margin.
  • Survey B: 100 respondents who currently spend £150+/month on groceries, have tried at least one food subscription, and have household income above £60k. 23% say they would subscribe. This number is grounded in actual category behaviour and will track much closer to real conversion.

Survey B is not just smaller. It is asking a fundamentally different, and more useful, question: not “would the general public buy this?” but “would people who already spend in this category switch to this?”
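Survey B's screen is just a conjunction of behavioural criteria. A minimal sketch of how such a screen might be applied to a panel (the field names and example respondents are hypothetical, not from any real dataset):

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    monthly_grocery_spend: float   # £ per month
    food_subscriptions_tried: int  # count of food subscriptions ever tried
    household_income: float        # £ per year

def qualifies_for_survey_b(r: Respondent) -> bool:
    """Screening rule mirroring Survey B's criteria from the example above."""
    return (r.monthly_grocery_spend >= 150
            and r.food_subscriptions_tried >= 1
            and r.household_income > 60_000)

panel = [
    Respondent(180, 2, 72_000),  # qualifies
    Respondent(90, 0, 55_000),   # spends too little, no subscription history
    Respondent(200, 1, 58_000),  # income below threshold
]
targeted = [r for r in panel if qualifies_for_survey_b(r)]
print(len(targeted))  # 1
```

In practice the screen would run against verified purchase records rather than self-reported fields, but the logic is the same: every criterion is a behavioural fact, not a demographic proxy.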

What Precision Targeting Actually Changes

Signal Without the Noise

In a general population survey about a niche product, perhaps 5% of respondents are genuinely in your target market. The other 95% dilute your signal. With precision targeting, every response carries real information. You spend your analysis time on insight, not on filtering out irrelevant data.
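The dilution is easy to quantify. Assuming the 5% incidence above, and illustrative response rates for in-target and out-of-target respondents (both assumed, not measured), the headline figure from a general population survey is dominated by people outside the market:

```python
# Illustrative only: the two response rates below are assumptions.
incidence = 0.05   # share of the general population in the target market
p_target = 0.60    # "would buy" among genuine category buyers (assumed)
p_rest = 0.40      # "would buy" among everyone else (assumed)

blended = incidence * p_target + (1 - incidence) * p_rest
print(f"blended headline figure: {blended:.0%}")             # 41%

# How much of that headline is driven by out-of-target respondents?
share_from_rest = (1 - incidence) * p_rest / blended
print(f"share driven by non-buyers: {share_from_rest:.0%}")  # 93%
```

Under these assumptions, over nine tenths of the "yes" answers come from people who will never buy, which is why the blended number tells you almost nothing about your actual market.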

Price Sensitivity That Reflects the Market

Willingness-to-pay data from people who do not buy in your category is fiction. They have no calibrated sense of what things cost. Respondents who regularly spend in your category have an intuitive reference point for value. Their price sensitivity curves reflect actual market dynamics, not uninformed guesses.

Objections You Can Act On

When a targeted respondent says “I would not buy this because I already get something similar from Brand X,” that is competitive intelligence you can use. When a random respondent says “not interested,” you have learned nothing you did not already know.

When Bigger Samples Do Matter

Large samples have legitimate uses. Market sizing across a general population requires broad coverage. Subgroup analysis across many demographic segments (age × income × region) needs enough respondents in each cell, though the solution is targeted oversampling, not blanket scale. Brand tracking for unaided awareness is inherently a population-level metric.

But these are not the questions most product teams are trying to answer most of the time. For concept validation, pricing, positioning, and competitive analysis (the daily work of product development), a smaller, precisely targeted panel will give you better data than a massive generic one.

What This Means in Practice

A synthetic panel built on verified purchase data lets you do something that traditional research makes prohibitively expensive: run every survey against respondents who actually buy in your category. Not a convenience sample that happens to be large, but a small sample selected because every respondent has a documented relationship with the products you compete against.

The next time someone tells you that you need 10,000 responses to make a confident product decision, ask a better question: responses from whom? If they cannot answer that with specificity, the number does not matter. One hundred responses from people who buy in your category will tell you more than ten thousand from people who do not.