
Seven Survey Design Mistakes That Corrupt Your Data

Leading questions, double-barrelled items, and poor scales produce data that feels rigorous but leads you astray. Here are seven mistakes to avoid.

Surveys feel straightforward. Write some questions, send them out, count the answers. But survey design is full of traps that produce data which looks clean and meaningful while being systematically wrong. The errors are not random noise; they are directional biases that push results toward specific, incorrect conclusions. Here are seven of the most common mistakes, what goes wrong when you make them, and how to fix each one.

1. Leading Questions

A leading question contains information that pushes the respondent toward a particular answer. “How much did you enjoy our new, improved checkout experience?” presupposes that the experience was enjoyable and improved. The respondent has to actively resist the framing to give an honest negative answer, and most people will not bother.

The fix is to use neutral language that does not signal the desired response. Instead of “how much did you enjoy,” ask “how would you rate your checkout experience?” Remove adjectives that carry a positive or negative valence. Read every question from the perspective of a respondent who had a terrible experience and check whether the question makes it easy for them to say so.

2. Double-Barrelled Questions

A double-barrelled question asks about two things at once. “How satisfied are you with the price and quality of our product?” A respondent who is happy with the quality but unhappy with the price cannot answer accurately. They will either average their feelings (giving you a middling response that reflects neither dimension) or answer about whichever dimension feels more salient, leaving you with data that measures something you cannot identify.

The fix is simple: ask one question per concept. If you want to know about price and quality, those are two separate questions. Scan every question for the word “and.” If removing either half would change the meaning, you have a double-barrelled question.
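
If your question bank lives in a spreadsheet or a survey-platform export, a crude automated pass can surface candidates for review. The sketch below is a minimal heuristic, not a detector: it simply flags questions containing conjunctions, which you then check by hand.

```python
import re

# Conjunctions that often signal a double-barrelled question. This is a
# rough heuristic for flagging items to review manually, not a detector:
# plenty of questions contain a harmless "and".
SUSPECT = re.compile(r"\b(and|or|as well as)\b", re.IGNORECASE)

def flag_double_barrelled(questions):
    """Return the questions that contain a suspect conjunction."""
    return [q for q in questions if SUSPECT.search(q)]

questions = [
    "How satisfied are you with the price and quality of our product?",
    "How would you rate your checkout experience?",
]

for q in flag_double_barrelled(questions):
    print("Review:", q)
```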

3. Acquiescence Bias from Agree/Disagree Scales

Likert scales (strongly agree to strongly disagree) are the default in most surveys, and they introduce a well-documented bias: people tend to agree. This is called acquiescence bias. When presented with a statement like “this product meets my needs,” respondents are more likely to agree than disagree regardless of their actual experience. The effect is small per question but compounds across a survey, inflating positive results across the board.

The fix is to use item-specific scales instead of agree/disagree. Rather than “I am satisfied with this product: agree/disagree,” ask “how satisfied are you with this product: very dissatisfied to very satisfied.” The response options directly match the concept being measured, which reduces the pull toward agreement. For critical metrics, consider using forced-choice formats where respondents must pick between two concrete options.

4. Order Effects

The sequence in which you present questions and answer options affects responses. If you ask about a product’s strengths before asking about overall satisfaction, satisfaction scores will be higher than if you asked about weaknesses first. This is called a priming effect. Similarly, in a list of answer options, items presented first and last receive disproportionately more selections than those in the middle.

The fix is to randomise both question order and answer option order wherever the sequence is not logically dependent. Questions that must follow a specific flow (such as screening questions before detailed ones) are exempt, but any set of independent questions should be presented in random order. For answer options, randomise the list for each respondent. This does not eliminate order effects, but it distributes them randomly rather than letting them bias results in one direction.
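
Most survey platforms have built-in randomisation. If you are assembling the questionnaire yourself, a minimal per-respondent shuffle might look like the sketch below; the question set and field names are hypothetical, and note that ordinal scales keep their natural order rather than being shuffled.

```python
import random

# Hypothetical block of independent questions. The order of the questions
# themselves can be shuffled; answer options are only shuffled when they are
# an unordered list (ordinal scales such as satisfaction ratings keep their order).
independent_block = [
    {"text": "Which of these features have you used in the last month?",
     "options": ["Saved carts", "Wish lists", "Price alerts", "Order tracking"],
     "shuffle_options": True},
    {"text": "How satisfied are you with delivery speed?",
     "options": ["Very dissatisfied", "Dissatisfied", "Neutral", "Satisfied", "Very satisfied"],
     "shuffle_options": False},
]

def render_for_respondent(block, respondent_id):
    """Return a per-respondent copy with question (and, where safe, option) order shuffled."""
    rng = random.Random(respondent_id)   # seed per respondent so each sees a stable random order
    questions = [dict(q) for q in block]
    rng.shuffle(questions)               # randomise question order
    for q in questions:
        if q["shuffle_options"]:
            opts = list(q["options"])
            rng.shuffle(opts)            # randomise answer option order
            q["options"] = opts
    return questions

print(render_for_respondent(independent_block, respondent_id=42))
```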

5. The Problem with Hypothetical Questions

“Would you buy this product?” is perhaps the most asked and least useful question in consumer research. People are poor predictors of their own future behaviour, especially for purchases. They overestimate their willingness to buy novel products, underestimate the friction of switching from their current solution, and ignore the budget constraints they will face when the moment of purchase actually arrives. Hypothetical purchase intent typically overstates real demand by a factor of three to five.

The fix is to anchor questions in concrete, recent behaviour rather than hypothetical futures. Instead of “would you buy this,” ask about current spending patterns, current dissatisfactions, and past switching behaviour. These are factual questions with verifiable answers, and they predict future behaviour far more reliably than stated intentions. When you must ask about future intent, use calibrated scales and apply known correction factors to the results.
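
As a rough illustration of applying correction factors, the sketch below deflates stated intent by assumed conversion rates. The rates are placeholders, not published constants; in practice you would derive them by comparing your own past intent surveys against observed purchases.

```python
# Hypothetical calibration: convert stated purchase intent into a deflated
# demand estimate. These conversion rates are illustrative placeholders.
STATED_TO_ACTUAL = {
    "definitely would buy": 0.30,   # only a fraction of "definitely" converts
    "probably would buy": 0.10,
    "might or might not buy": 0.03,
    "probably would not buy": 0.01,
    "definitely would not buy": 0.0,
}

def estimated_demand(response_counts, total_respondents):
    """Weight each intent level by its assumed conversion rate."""
    expected_buyers = sum(
        count * STATED_TO_ACTUAL[level] for level, count in response_counts.items()
    )
    return expected_buyers / total_respondents

responses = {"definitely would buy": 120, "probably would buy": 300,
             "might or might not buy": 250, "probably would not buy": 200,
             "definitely would not buy": 130}
print(f"Estimated real demand: {estimated_demand(responses, 1000):.1%}")
# Raw "top two box" intent would read as 42%; the deflated estimate is far lower.
```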

6. Poor Answer Scales

Scales with too few options compress meaningful variation. A yes/no question about satisfaction forces people who are mildly satisfied and people who are enthusiastic into the same category. You lose the ability to distinguish between customers who will stay and customers who will advocate. Conversely, scales with too many options (such as a 1–100 slider) introduce meaningless precision. The difference between a 72 and a 74 satisfaction rating is noise, but the data will tempt you to treat it as signal.

The fix is to use five- to seven-point scales for most attitudinal questions. This range captures meaningful variation without false precision. Label every point on the scale, not just the endpoints; respondents interpret unlabelled midpoints inconsistently. For behavioural frequency questions, use concrete ranges (once a week, two to three times a month) rather than vague labels (frequently, sometimes, rarely), which mean different things to different people.
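
As a concrete illustration, here is how a fully labelled five-point scale and a set of concrete frequency ranges might read. The exact wording is illustrative, not prescriptive.

```python
# A fully labelled five-point satisfaction scale: every point has a label,
# not just the endpoints, so respondents interpret the midpoints the same way.
SATISFACTION_SCALE = [
    (1, "Very dissatisfied"),
    (2, "Somewhat dissatisfied"),
    (3, "Neither satisfied nor dissatisfied"),
    (4, "Somewhat satisfied"),
    (5, "Very satisfied"),
]

# Behavioural frequency options use concrete ranges rather than vague words
# like "frequently" or "sometimes", which respondents interpret inconsistently.
FREQUENCY_OPTIONS = [
    "Never",
    "Less than once a month",
    "One to three times a month",
    "Once a week",
    "Several times a week",
    "Daily",
]
```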

7. Survivorship Bias in Sample Selection

If you survey your current customers about what they value in your product, you will get an accurate picture of what retained customers value. You will learn nothing about why people left, why prospects chose a competitor, or why some segments never considered you at all. This is survivorship bias: your sample contains only the winners, and their feedback will reinforce your current strategy regardless of whether that strategy is working for the broader market.

The fix is to deliberately include non-customers and former customers in your research. Survey people who evaluated your product and chose a competitor. Survey people who cancelled. Survey people in your target category who have never heard of you. Their feedback is less comfortable but far more useful for identifying growth opportunities. Purchase behaviour data can help here by identifying respondents who buy in your category but not from you, giving you a panel of relevant non-customers whose perspective would otherwise be invisible.
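
As a sketch of that last step, assuming a purchase panel with panellist, category, and brand fields (the column names and data here are hypothetical), you can isolate people who buy in your category but never from you:

```python
import pandas as pd

# Hypothetical purchase-panel extract: one row per panellist transaction.
# Adapt the column names to your panel's actual schema.
transactions = pd.DataFrame({
    "panellist_id": [1, 1, 2, 3, 3, 4],
    "category": ["coffee", "coffee", "coffee", "coffee", "coffee", "tea"],
    "brand": ["Acme", "Rival Co", "Rival Co", "Acme", "Acme", "Rival Co"],
})

category_buyers = set(
    transactions.loc[transactions["category"] == "coffee", "panellist_id"]
)
our_buyers = set(
    transactions.loc[transactions["brand"] == "Acme", "panellist_id"]
)

# Panellists who buy in the category but never from us: the relevant
# non-customers whose perspective a customer-only survey would miss.
non_customers = category_buyers - our_buyers
print(sorted(non_customers))   # -> [2]
```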