Rensis

What NPS Doesn’t Tell You

Net Promoter Score is simple and popular. It is also a poor predictor of growth, churn, or what to do next. Here is what it misses.

Net Promoter Score has become the default metric for customer satisfaction. It is tracked on dashboards, reported to boards, and used to benchmark against competitors. Its popularity is deserved in one narrow sense: it is simple. One question, one number, easy to trend over time. But simplicity is also its fundamental limitation. NPS tells you far less than most teams believe, and the gap between what it measures and what people think it measures leads to consistently poor decisions.

Why NPS Became the Default

NPS succeeded because it solved an operational problem, not an analytical one. Before NPS, customer satisfaction required long surveys with dozens of questions, producing reports that took weeks and were difficult to act on. Fred Reichheld’s proposition was appealing: one question (“How likely are you to recommend this product to a friend or colleague?”) could predict growth. Companies adopted it because it was easy to implement, easy to track, and easy to communicate to leadership.

The original research linking NPS to revenue growth has not held up well under scrutiny. Subsequent studies have found that NPS is not a reliable predictor of company growth, and that other satisfaction methodologies predict revenue and retention at least as well. But by the time this evidence accumulated, NPS was entrenched. Organizations had built entire programs around it, and switching costs were high.

What NPS Actually Measures

NPS measures stated likelihood to recommend. It does not measure satisfaction, loyalty, or likelihood to repurchase. These are related but distinct. A customer might be satisfied with your product but never recommend it because the category is not something people discuss. Accounting software generates low NPS not because users dislike it but because nobody recommends accounting software at dinner.

The scoring methodology introduces its own distortions, one of the more common survey design mistakes in customer research. NPS groups respondents into Promoters (9-10), Passives (7-8), and Detractors (0-6). A score of 6 is treated identically to a score of 0, despite representing very different levels of dissatisfaction. A score of 7 is ignored entirely. The collapse of an 11-point scale into three buckets discards most of the information the question collects. Two companies can have identical NPS with completely different underlying distributions, and those distributions matter far more than the headline number.
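The bucketing problem is easy to demonstrate. A minimal sketch (the scoring rule is the standard one; the two response distributions are invented for illustration): a polarized customer base and a uniformly lukewarm one can produce the exact same headline score.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on a 0-10 response scale. Passives (7-8) are ignored by the formula."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Two very different response distributions, same headline number.
polarized = [10] * 50 + [0] * 50   # half love it, half hate it
lukewarm  = [7] * 100              # everyone is a passive

assert nps(polarized) == nps(lukewarm) == 0
```

Both populations score 0, yet they call for opposite responses: the first needs triage of a badly failing segment, the second needs a reason for anyone to care. The headline number cannot distinguish them; the distribution can.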

The Recommendation-Behavior Gap

The deepest problem with NPS is the assumption that recommendation intent predicts recommendation behavior, and that recommendation behavior predicts growth. Both links are weak.

People who say they would recommend a product often do not. The gap between stated and revealed preference is well documented across consumer research. Saying you would recommend something costs nothing. Actually doing it requires a specific conversational context, a friend with a relevant need, and enough conviction to stake your social capital on the suggestion. Most Promoters never promote anything.

Even when recommendations happen, they are one of many growth drivers. Paid acquisition, organic search, content marketing, and distribution all contribute independently of word-of-mouth. A company with low NPS can grow rapidly through strong distribution. A company with high NPS can stagnate if its category has low word-of-mouth dynamics. The score and the growth rate are connected loosely at best.

Cultural Bias Makes Benchmarking Unreliable

NPS scores are not comparable across cultures. Japanese respondents consistently score lower than American respondents across all categories, not because Japanese products are worse but because the cultural norm around top-box scoring differs. In many East Asian cultures, giving a 10 out of 10 is considered excessive. In the United States, satisfied customers routinely give 9s and 10s.

A Japanese company with an NPS of 20 might have genuinely more satisfied customers than an American company with an NPS of 50. The metric is not calibrated for this, and most organizations do not adjust. Teams comparing NPS across regions are comparing numbers that do not mean the same thing.
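One partial workaround is to stop comparing raw scores across regions and instead compare each region's movement against its own baseline. A minimal sketch, with invented baseline numbers (real programs would derive these from their own historical data):

```python
# Hypothetical regional baselines -- illustrative values, not real benchmarks.
baseline = {"US": 45, "JP": 12}

def nps_delta(region, current_score):
    """Compare a score to its own region's baseline, rather than
    comparing raw scores across regions with different norms."""
    return current_score - baseline[region]

# A raw comparison says the US team is "ahead" (50 vs 20); the
# within-region view says the Japanese score improved more.
assert nps_delta("US", 50) == 5
assert nps_delta("JP", 20) == 8
```

This does not make the metric culturally neutral, but it at least measures each market against itself instead of against a scale it does not use the same way.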

Behavior Tells You What NPS Cannot

The question NPS is trying to answer, “are our customers healthy and will they drive growth?”, is a behavioral question. It is better answered by behavioral evidence than by a single stated-intent metric.

Repeat purchase rate, feature adoption, engagement frequency, and churn timing tell you what customers actually do, not what they say they might do. These metrics are harder to collect but far more predictive. A customer who repurchases monthly is demonstrating loyalty through action. A customer who gives you a 9 on NPS but has not logged in for six weeks is telling you something the score obscures.
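The "9 but hasn't logged in" case can be caught mechanically if you join survey responses to usage data. A minimal sketch, assuming hypothetical record fields (`nps`, `last_login`) and an arbitrary six-week inactivity threshold:

```python
from datetime import date, timedelta

# Hypothetical customer records: stated NPS response plus last-login date.
customers = [
    {"id": "a", "nps": 9,  "last_login": date.today() - timedelta(days=45)},
    {"id": "b", "nps": 6,  "last_login": date.today() - timedelta(days=2)},
    {"id": "c", "nps": 10, "last_login": date.today() - timedelta(days=1)},
]

def at_risk_promoters(customers, inactive_days=42):
    """Promoters (score 9-10) whose behavior contradicts their stated score."""
    cutoff = date.today() - timedelta(days=inactive_days)
    return [c["id"] for c in customers
            if c["nps"] >= 9 and c["last_login"] < cutoff]

assert at_risk_promoters(customers) == ["a"]
```

Customer "a" scores a 9 but has been inactive for 45 days; the score alone would file them under healthy, while the joined view flags them for follow-up.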

The same logic extends to prospective customers, not just existing ones. If you want to understand whether consumers in your target segment are likely to buy, stay, and expand, their documented spending patterns in your category are a stronger signal than any stated-intent metric. What they have bought, how often they switch, and what price points they accept are behavioral facts. NPS is an opinion collected after the purchase. The behavior that preceded it is more informative than the score that followed.

Using NPS Without Being Misled

None of this means you should stop collecting NPS. It means you should stop treating it as a reliable predictor of growth, a valid cross-cultural benchmark, or a sufficient measure of customer health. NPS is useful as a rough directional signal tracked over time within a single market. If your score drops 15 points in a quarter, something has gone wrong and you should investigate. But the score will not tell you what went wrong or what to do about it.

The danger is not in collecting NPS. It is in building strategy around it. Teams that optimize for NPS tend to optimize for the score rather than the underlying experience. They chase detractors with apology emails rather than fixing the structural problems that created dissatisfaction. They celebrate high scores without investigating whether those scores translate into retention or revenue. The metric becomes the goal, and the goal stops reflecting reality.

Customer understanding requires more than a single number. It requires understanding behavior, context, and motivation. NPS captures none of those. Treat it as one input among many, weighted for its limitations, and it serves a purpose. Treat it as the answer, and it will quietly mislead you.
