Rensis

Synthetic Research Isn’t a Shortcut — It’s a Different Kind of Evidence

Treating synthetic research as a cheap substitute for surveys misses the point. It is a different kind of evidence with its own strengths, limitations, and best-use cases.

The tempting pitch for synthetic consumer research is that it is faster, cheaper, and just as good as traditional methods. That framing sells, but it also sets up the wrong comparison. Treating synthetic research as a cheaper substitute for surveys is the fastest way to misuse it, and to be disappointed by the results.

Synthetic research produces a different kind of evidence. It answers different questions well, fails at different things, and belongs at a different point in the product development process. The teams getting real value from it are the ones who understand what it actually is, not the ones who treat it as a traditional survey with the cost removed.

Where Synthetic Research Is Strongest

Speed of Iteration, Not Just Speed of Delivery

The biggest advantage is not that you get results faster. It is that faster results let you test more hypotheses. Traditional research operates on a weeks-to-months cadence; by the time results come back, the product has often moved on. Synthetic panels let you explore a dozen variations of positioning, pricing, or audience targeting and converge on the strongest option before committing resources. The value is not in the individual result. It is in the ability to iterate.

Questions About Category Behaviour

Because synthetic personas are grounded in real purchase data, they are particularly good at answering questions about existing markets: who buys products like this, at what price points, what drives switching between brands. These are questions about patterns in documented behaviour, and the underlying data is rich enough to produce useful answers.

Relative Comparisons Over Absolute Predictions

Synthetic panels produce their most reliable signal when comparing options. “Would consumers prefer Option A or Option B?” generates better data than “Would consumers buy Option A?” Forced trade-offs constrain the response space and reduce the inflation that plagues absolute purchase intent measures, in both synthetic and traditional research.

Filtering Before Expensive Fieldwork

The highest-ROI use case may be the simplest: screening concepts before committing to a fielded study. If a concept scores poorly with a synthetic panel grounded in category purchase data, it is unlikely to perform well with recruited respondents either. Using synthetic research to eliminate weak options before spending £15,000 on a traditional study is not cutting corners. It is spending the research budget on concepts that have already cleared a behavioural bar.

Where It Falls Short

Genuinely Novel Categories

Synthetic personas reason by analogy to existing purchase behaviour. When a product category has no close analogues in the data, the personas have less behavioural ground truth to work from. They will still generate responses (language models always do), but the further you get from documented purchase patterns, the wider the uncertainty around those responses. For genuinely unprecedented products, qualitative research with real people remains essential.

Sensory and Emotional Products

Products whose value depends on physical experience (fragrance, food, texture, the feel of a well-made object) have an obvious ceiling for synthetic evaluation. A persona can reason about price sensitivity and category affinity, but it cannot taste a recipe or feel a fabric. Synthetic research can inform pricing and targeting for these products, but should not be the sole input on whether the product itself works.

Absolute Intent Numbers

While synthetic panels are strong on relative rankings (“Concept A outperforms Concept B”), the absolute figures (“35% purchase intent”) should be read directionally, not literally. The calibration between synthetic intent and real-world conversion is not yet precise enough to anchor a financial model on a single number. This will improve as validation data accumulates, but today, treat absolute synthetic intent as a signal, not a forecast.

Social and Cultural Dynamics

Purchase behaviour is shaped by peer effects, cultural trends, and viral moments that historical transaction data cannot capture. A synthetic panel can tell you that your target audience has the income and category affinity for your product. It cannot tell you whether your product will catch a cultural tailwind or face a backlash. Neither can a traditional survey, for what it is worth, but it is important not to mistake behavioural grounding for omniscience.

The Validation Question

The most common challenge to synthetic research is: “How do you know the results are valid?” This is the right question, but it contains an ambiguity worth unpacking.

If “valid” means reliable (would you get similar results if you ran the same test again), synthetic panels score well. The same persona evaluating the same concept produces highly consistent responses. If “valid” means predictive (do the results forecast real-world outcomes), the honest answer is that the evidence is early. The methodology is grounded in behavioural data that has strong theoretical reasons to be predictive, and the relative rankings (which concept performs best) appear more robust than the absolute numbers (what percentage will convert). But synthetic consumer research is a young field, and the validation corpus is still being built.

This is not a disqualifying weakness. Every research methodology starts with theoretical plausibility and builds empirical validation over time. Traditional survey-based purchase intent took decades to develop the conversion norms that researchers now rely on. The question is whether the underlying logic is sound (grounding AI responses in documented purchase behaviour rather than general knowledge) and whether early results support it. On both counts, the case is strong enough to act on, while remaining honest about what has and has not been proven.

Synthetic and Traditional Together

The strongest research programmes will not choose between synthetic and traditional methods. The natural division of labour is straightforward: synthetic research for exploration, iteration, and pre-screening; traditional research for validation of high-stakes decisions. Use synthetic panels to narrow the option space and identify the strongest concepts, then confirm the finalists with recruited respondents where the cost is justified. After traditional research identifies a winning direction, synthetic panels can rapidly optimise messaging and targeting without additional fieldwork.

This is not a compromise. It is a research strategy that spends money where it produces the most information. The total cost is often lower than relying on traditional research alone, because pre-screening reduces the number of expensive fielded studies needed.

The Right Mental Model

Synthetic research is to traditional consumer research what a flight simulator is to flying. A simulator grounded in real physics data teaches you an enormous amount about how an aircraft behaves. Pilots train on simulators extensively because they are fast, repeatable, and safe to fail in. But no one certifies a new aircraft on simulator data alone, and no one confuses the simulator for the sky.

The useful framing is not “synthetic research is cheaper traditional research.” It is: synthetic research is a different instrument that measures different things, grounded in behavioural data that traditional surveys do not use. The teams that will get the most from it are the ones who understand what it measures well, what it does not, and where it fits alongside the research tools they already have.