What NPS Doesn’t Tell You
Net Promoter Score is simple and popular. It is also a poor predictor of growth, churn, or what to do next. Here is what it misses.
Net Promoter Score has become the default metric for customer satisfaction. It is tracked on dashboards, reported to boards, and used to benchmark against competitors. Its popularity is deserved in one narrow sense: it is simple. One question, one number, easy to trend over time. But simplicity is also its fundamental limitation. NPS tells you far less than most teams believe, and the gap between what it measures and what people think it measures leads to consistently poor decisions.
Why NPS Became the Default
NPS succeeded because it solved an operational problem, not an analytical one. Before NPS, customer satisfaction was measured through long surveys with dozens of questions, producing reports that took weeks to compile and were difficult to act on. Fred Reichheld’s original proposition was appealing: one question (“How likely are you to recommend this product to a friend or colleague?”) could predict growth. The claim was bold and the methodology was simple. Companies adopted it because it was easy to implement, easy to track, and easy to communicate to leadership.
The original research linking NPS to revenue growth was persuasive but has not held up well under scrutiny. Subsequent studies have found that NPS is not a reliable predictor of company growth, and that customer satisfaction scores from other methodologies predict revenue and retention at least as well, and often better. But by the time this evidence accumulated, NPS was already entrenched. It had become the metric, and organisations built entire programmes around it.
What NPS Actually Measures
NPS measures one thing: stated likelihood to recommend. It does not measure satisfaction, loyalty, or likelihood to repurchase. These are related but distinct concepts. A customer might be satisfied with your product but never recommend it because the category is not something people discuss. Accounting software, for example, generates low NPS scores not because users dislike it but because nobody recommends accounting software at dinner parties.
The scoring methodology introduces its own distortions. NPS groups respondents into Promoters (9–10), Passives (7–8), and Detractors (0–6). A score of 6 is treated identically to a score of 0, despite representing very different levels of dissatisfaction. A score of 7 is ignored entirely. The collapse of an 11-point scale into three buckets discards most of the information the question collects. Two companies can have identical NPS scores with completely different underlying distributions, and those distributions matter far more than the headline number.
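The bucketing problem is easy to see in miniature. Here is a minimal sketch, using made-up response distributions, of two customer bases that collapse to the same headline NPS despite looking nothing alike underneath:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 responses:
    % Promoters (9-10) minus % Detractors (0-6). Passives (7-8) are ignored."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Company A: a polarised base -- many enthusiasts, and every detractor
# at the very bottom of the scale.
a = [10] * 60 + [7] * 20 + [0] * 20

# Company B: a mild base -- fewer enthusiasts, and almost nobody
# deeply unhappy (the few detractors all scored 6).
b = [9] * 45 + [8] * 50 + [6] * 5

print(nps(a), nps(b))  # both 40, despite very different distributions
```

Both companies report an NPS of 40, yet Company A has a fifth of its customers at zero while Company B's worst respondents sit just below the Passive cutoff. The headline number cannot distinguish the two.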
The Recommendation-Behaviour Gap
The deepest problem with NPS is the assumption that recommendation intent predicts recommendation behaviour, and that recommendation behaviour predicts growth. Both links are weak.
People who say they would recommend a product often do not. The gap between stated intent and actual behaviour is well documented across consumer research. Saying you would recommend something costs nothing. Actually recommending it requires a specific conversational context, a friend with a relevant need, and enough conviction to stake your social capital on the suggestion. Most “Promoters” never promote anything.
Even when recommendations do happen, they are only one of many growth drivers. Paid acquisition, organic search, content marketing, partnerships, and distribution deals all contribute to growth independently of word-of-mouth. A company with a low NPS can grow rapidly through strong distribution. A company with a high NPS can stagnate if its category has low word-of-mouth dynamics.
Cultural Bias in Scoring
NPS scores are not comparable across cultures, which makes international benchmarking unreliable. Japanese respondents consistently score lower than American respondents across all categories, not because Japanese products are worse but because the cultural norm around top-box scoring differs. In many East Asian cultures, giving a 10 out of 10 is considered excessive or inappropriate. In the United States, satisfied customers routinely give 9s and 10s.
This means a Japanese company with an NPS of 20 might have genuinely more satisfied customers than an American company with an NPS of 50. The metric is not calibrated for this, and most organisations do not adjust for it. Teams operating across multiple markets and comparing NPS across regions are comparing numbers that do not mean the same thing.
What Provides More Actionable Insight
The problem NPS tries to solve (understanding customer sentiment and predicting commercial outcomes) is real. The metric is just too blunt to solve it well. Several alternatives and complements provide richer signal.
- Customer satisfaction (CSAT) at specific touchpoints measures sentiment where it matters: after support interactions, after onboarding, after key feature usage. This tells you where to improve, not just whether customers are vaguely happy.
- Customer Effort Score (CES) measures how easy it was to accomplish a specific task. High effort predicts churn more reliably than low satisfaction does. People leave products that are hard to use before they leave products that are merely imperfect.
- Behavioural retention metrics like repeat purchase rate, feature adoption, and engagement frequency tell you what customers actually do, not what they say they might do. These are harder to collect but far more predictive.
- Open-ended qualitative research surfaces the “why” behind the score. A number without context is noise. Understanding why a customer is dissatisfied (what specific problem they encountered, what alternative they are considering) is what makes the data actionable.
Using NPS Without Being Misled by It
None of this means you should stop collecting NPS. It means you should stop treating it as a reliable predictor of growth, a valid cross-cultural benchmark, or a sufficient measure of customer health. NPS is useful as a rough directional signal when tracked over time within a single market. If your NPS drops 15 points in a quarter, something has gone wrong, and you should investigate. But the NPS score itself will not tell you what went wrong or what to do about it.
The danger is not in collecting NPS; it is in building strategy around it. Teams that optimise for NPS tend to optimise for the score rather than the underlying experience. They chase detractors with apology emails rather than fixing the structural problems that created dissatisfaction. They celebrate high scores without investigating whether those scores translate into retention or revenue. The metric becomes the goal, and the goal stops reflecting reality.
Customer understanding requires more than a single number. It requires understanding behaviour, context, and motivation. NPS captures none of those. Treat it as one input among many, weighted appropriately for its limitations, and it serves a purpose. Treat it as the answer, and it will quietly mislead you.