Seven Survey Design Mistakes That Corrupt Your Data
Leading questions, double-barreled items, and poor scales produce data that feels rigorous but leads you astray. Here are seven mistakes to avoid.
Surveys feel straightforward. Write some questions, send them out, count the answers. But survey design is full of traps that produce data which looks clean and meaningful while being systematically wrong. The errors are not random noise. They are directional biases that push results toward specific, incorrect conclusions. Some of these mistakes are well-known. Others are so embedded in standard practice that teams make them deliberately.
Asking About the Future Instead of the Past
“Would you buy this product?” is the most frequently asked and least useful question in consumer research. People are poor predictors of their own future behavior. They overestimate their willingness to buy novel products, underestimate the friction of switching from what they use now, and ignore the budget constraints they will face when the moment of purchase arrives. Hypothetical purchase intent routinely overstates real demand.
The fix is to anchor questions in concrete, recent behavior. Instead of “would you buy this,” ask about current spending patterns, current dissatisfactions, and past switching behavior. These are factual questions with verifiable answers, and they predict future behavior far more reliably than stated intentions. When you must ask about future intent (and sometimes you must), use calibrated scales and never take the raw numbers at face value.
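To make that calibration concrete, here is a minimal sketch in Python. The response labels and weights are hypothetical placeholders, not published calibration figures; real discounts should come from validating your own past stated-intent data against observed purchases.

```python
# Hypothetical calibration weights for a 5-point purchase-intent scale.
# These numbers are illustrative placeholders -- derive real weights by
# validating past stated intent against observed purchase behavior.
INTENT_WEIGHTS = {
    "definitely would buy": 0.40,
    "probably would buy": 0.15,
    "might or might not buy": 0.05,
    "probably would not buy": 0.00,
    "definitely would not buy": 0.00,
}

def calibrated_demand(responses: list[str]) -> float:
    """Discount raw stated intent into an adjusted demand estimate."""
    if not responses:
        return 0.0
    return sum(INTENT_WEIGHTS.get(r, 0.0) for r in responses) / len(responses)

responses = (
    ["definitely would buy"] * 20
    + ["probably would buy"] * 30
    + ["might or might not buy"] * 25
    + ["probably would not buy"] * 15
    + ["definitely would not buy"] * 10
)
top_two = sum(r in ("definitely would buy", "probably would buy") for r in responses)
print(f"raw top-two-box intent: {top_two / len(responses):.0%}")      # 50%
print(f"calibrated estimate:    {calibrated_demand(responses):.0%}")  # 14%
```

The gap between the two printed numbers is the point: the raw top-two-box figure is the one that ends up in slide decks, and it is the one that overstates demand.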
This is the single most consequential survey design decision: whether your questions elicit stated preferences or behavioral facts. Understanding the gap between stated intent and real behavior is fundamental. A survey built around “would you” and “how likely are you to” will produce inflated, unreliable data regardless of how well the rest of the survey is designed. A survey built around “what do you currently spend” and “when did you last switch” produces data you can act on.
Surveying Your Customers and Calling It Market Research
If you survey current customers about what they value in your product, you get an accurate picture of what retained customers value. You learn nothing about why people left, why prospects chose a competitor, or why some segments never considered you. This is survivorship bias: your sample contains only the winners, and their feedback reinforces your current strategy regardless of whether that strategy is working for the broader market.
The fix is to deliberately include non-customers and former customers. Survey people who evaluated your product and chose a competitor. Survey people who canceled. Survey people in your target category who have never heard of you. Their feedback is less comfortable and far more useful for identifying growth opportunities.
Purchase behavior data is especially useful here. You can identify respondents who buy in your category but not from you, giving you a panel of relevant non-customers whose perspective would otherwise be invisible. These are not random people who happen not to be customers. They are people who spend in your category, at your price tier, and chose something else. Their objections are the ones that matter.
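As a sketch of what that filtering looks like in practice, the pandas snippet below reduces a purchase-level table to category buyers who never bought from you. The column names, category label, brand, and price band are all hypothetical stand-ins for whatever your purchase data actually provides.

```python
import pandas as pd

# Hypothetical purchase-level table with columns:
# panelist_id, category, brand, price
purchases = pd.read_csv("purchases.csv")

# Panelists active in the category at a comparable price tier.
in_category = purchases[purchases["category"] == "meal_kits"]
tier_buyers = in_category[in_category["price"].between(15, 35)]

# Panelists who have ever bought from us, to exclude.
our_buyers = set(in_category.loc[in_category["brand"] == "OurBrand", "panelist_id"])

# Category buyers at our price tier who chose someone else.
relevant_non_customers = tier_buyers.loc[
    ~tier_buyers["panelist_id"].isin(our_buyers), "panelist_id"
].unique()
print(f"{len(relevant_non_customers)} relevant non-customers identified")
```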
Testing the Concept Without a Price
Concept tests that omit the price produce meaningless purchase intent data. People will express interest in almost anything if it appears to be free. The moment you attach a price, intent drops, and it drops unevenly across segments. A concept that scores 60% intent without a price might score 25% at $29/month and 40% at $19/month. Without the price in the stimulus, you have no idea which of those realities you are looking at.
Always include the price in concept tests. If you are unsure of the price, test multiple price points as separate cells. The price is not a detail to be filled in later. It is one of the primary variables the respondent is evaluating, and removing it invalidates the intent measurement.
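Here is a sketch of the cell structure, using the illustrative figures from above. Respondents are split across monadic cells so each person evaluates the concept at exactly one price, and the cells are compared afterward; the prices and intent shares are hypothetical.

```python
# Monadic price cells: each respondent sees the concept at one price.
PRICE_CELLS = [19, 29]

def assign_cell(respondent_index: int) -> int:
    """Deterministic round-robin assignment keeps cells balanced."""
    return PRICE_CELLS[respondent_index % len(PRICE_CELLS)]

# Hypothetical cell results, echoing the example figures above:
# share of respondents expressing purchase intent at each price.
cell_intent = {19: 0.40, 29: 0.25}

# A crude revenue proxy per cell: price x intent share.
for price, intent in sorted(cell_intent.items()):
    print(f"${price}/mo: intent {intent:.0%}, revenue proxy {price * intent:.2f}")
```

On these illustrative numbers the lower price narrowly wins the crude revenue proxy, but the larger point stands: no such comparison is possible if the price was never in the stimulus.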
Leading the Respondent to Your Preferred Answer
A leading question contains information that pushes toward a particular answer. “How much did you enjoy our new, improved checkout experience?” presupposes that the experience was enjoyable and improved. The respondent has to actively resist the framing to give an honest negative answer, and most will not bother.
The fix: use neutral language that does not signal the desired response. “How would you rate your checkout experience?” Read every question from the perspective of a respondent who had a terrible experience and check whether the question makes it easy for them to say so. This applies to concept descriptions too, not just questions. An oversold product description in a concept test will inflate intent scores in ways that are indistinguishable from genuine demand.
Asking Two Questions Disguised as One
“How satisfied are you with the price and quality of our product?” A respondent happy with quality but unhappy with price cannot answer accurately. They will either average their feelings (giving you a middling score that reflects neither dimension) or answer about whichever dimension feels more salient. Either way, you have data that measures something you cannot identify.
One question per concept. Scan every question for the word “and.” If removing either half would change the meaning, split it into two questions.
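That scan is easy to automate as a first pass. The sketch below flags draft questions containing “and” or “or” for manual review; a hit is a prompt to inspect, not proof of a double-barreled item, since a phrase like “terms and conditions” names a single concept.

```python
import re

# Conjunctions worth a second look in draft survey questions.
DOUBLE_BARREL = re.compile(r"\b(and|or)\b", re.IGNORECASE)

def flag_for_review(questions: list[str]) -> list[str]:
    """Return draft questions containing a conjunction worth inspecting."""
    return [q for q in questions if DOUBLE_BARREL.search(q)]

draft = [
    "How satisfied are you with the price and quality of our product?",
    "How would you rate your checkout experience?",
]
for q in flag_for_review(draft):
    print("REVIEW:", q)  # flags only the first question
```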
Using Agree/Disagree for Everything
Likert scales (strongly agree to strongly disagree) are the survey default, and they introduce acquiescence bias: people tend to agree. “This product meets my needs: agree/disagree” will skew positive regardless of actual experience. The effect is small per question but compounds across a survey, inflating positive results systematically.
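A toy simulation makes the compounding visible. The 0.3-point per-item shift below is an assumed illustration of the agreement pull, not an empirical estimate of the bias.

```python
import random

random.seed(1)
N_ITEMS, N_RESPONDENTS = 20, 500
TRUE_MEAN, AGREEMENT_PULL = 3.0, 0.3  # 5-point scale; pull is hypothetical

def mean_index(bias: float) -> float:
    """Average 20-item index score across simulated respondents."""
    total = 0.0
    for _ in range(N_RESPONDENTS):
        total += sum(
            min(5.0, max(1.0, random.gauss(TRUE_MEAN + bias, 1.0)))
            for _ in range(N_ITEMS)
        )
    return total / N_RESPONDENTS

print(f"index without bias: {mean_index(0.0):5.1f} / {N_ITEMS * 5}")
print(f"index with pull:    {mean_index(AGREEMENT_PULL):5.1f} / {N_ITEMS * 5}")
# A 0.3-point nudge per item shifts a 20-item index by roughly 6 points.
```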
Use item-specific scales instead. “How well does this product meet your needs: not at all to completely” measures the same concept without the agreement pull. For critical decisions, forced-choice formats (“which of these two options would you prefer?”) produce more realistic data because they require trade-offs rather than blanket agreement.
Ignoring Who Is Answering
Most survey design advice focuses on the questions. The more consequential design decision is who answers them. A perfectly designed survey sent to the wrong audience produces precisely wrong results. “Adults aged 25 to 45” is not a meaningful sample for most product decisions. The respondents who matter are the ones whose purchase behavior makes them plausible customers: people who buy in your category, at your price tier, with sufficient frequency to represent real demand.
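A minimal sketch of what that behavioral screen looks like, with hypothetical thresholds throughout; the point is that qualification rests on observed purchase behavior, not demographics.

```python
from dataclasses import dataclass

@dataclass
class Respondent:
    category_purchases_90d: int  # category purchases in the last 90 days
    median_price_paid: float     # typical price paid in the category

def qualifies(r: Respondent) -> bool:
    """Screen on behavior; the frequency floor and price tier are assumed values."""
    buys_often_enough = r.category_purchases_90d >= 2
    right_price_tier = 15.0 <= r.median_price_paid <= 35.0
    return buys_often_enough and right_price_tier

print(qualifies(Respondent(category_purchases_90d=3, median_price_paid=24.0)))  # True
print(qualifies(Respondent(category_purchases_90d=0, median_price_paid=24.0)))  # False
```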
This is the mistake that survey design guides rarely cover, because it sits outside the scope of questionnaire construction. But a well-designed survey sent to a behaviorally irrelevant audience will mislead you more reliably than a mediocre survey sent to the right people. That is why audience precision matters more than sample size. The panel is not a detail. It is the single most important design decision, and it should be the first one you make — which means starting with a clear research brief that defines your audience before anything else.