I’m not a fan of NPS, for reasons I’ll explain later in this article. It probably suffers from its own fame and the common pitfall that follows: teams rush into using it without thinking it through.
That being said, you can end up having to collect that metric for a legitimate reason, or because of a dangerous animal.
The first part of this article isn’t news – you’ll find such definitions all over the web. The second part is about how I came to implement an NPS scoring solution in a previous role.
What is NPS?
The Net Promoter Score is a high-level metric representing how satisfied your users are with your product. It’s that prompt (often a pop-up) you see on many sites and apps asking you to give a grade from 0 to 10.
When a user gives a grade, they’re categorized as a “Detractor” (0–6), a “Passive” (7–8), or a “Promoter” (9–10). Any grade below 7 means the user doesn’t enjoy your product enough, and only grades of 9 or above mean they do. That sets the bar quite high.
Once grades are collected, they are compiled into the actual NPS: the percentage of Promoters, minus the percentage of Detractors.
The value is usually written as an integer between -100 and +100.
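In code, the calculation is a one-liner. A minimal sketch, using the standard NPS bands (0–6 Detractor, 7–8 Passive, 9–10 Promoter):

```python
def nps(grades):
    """Compute the Net Promoter Score from a list of 0-10 grades.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) count toward
    the total but cancel out. Returns an integer between -100 and +100.
    """
    if not grades:
        raise ValueError("need at least one grade")
    promoters = sum(1 for g in grades if g >= 9)
    detractors = sum(1 for g in grades if g <= 6)
    return round(100 * (promoters - detractors) / len(grades))

# Example: 5 Promoters, 3 Passives, 2 Detractors out of 10 responses
print(nps([10, 9, 9, 10, 9, 7, 8, 7, 3, 5]))  # → 30
```

Note how the Passives pull the score down without being counted as negative: 5 Promoters out of 10 responses is only 50%, minus 20% Detractors.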
One key thing to note, which I’ve seen ignored too many times: an NPS grade should be requested from users on a regular basis. It’s key to understanding whether users change their minds.
What are its flaws?
First of all, I’d ask the standard product manager question: what problem are you trying to solve? I’m not so much interested in the idea you had, namely that NPS would be the solution.
Is it for reporting to boards and higher-ups? As I’ll describe later, that will be a poor metric.
Is it to actually understand what your users think? It will never replace a good one-to-one conversation.
If the world has shown us one thing in the past two years, it’s that you can make metrics and values say anything you want. The same metric and value can be used to support totally opposite arguments. And metrics and values can be faked, and biased.
Like any metric, NPS values should be looked at cautiously. What does a negative value mean? What conclusion could you draw from it? None: you just have one number as a piece of information. Even if you look at the NPS trend rather than the absolute value: is user satisfaction decreasing because your product is getting worse, or because you suddenly reached the demographic of users who were more inclined to give lower grades?
Actually, do you want to target all of your users, or segment them?
That last question is a good introduction to this section. Here’s a list of things that can completely bias your results, rendering them inaccurate and unusable:
- Free users (if you have any) don’t perceive your product the same way paid users do. They most likely don’t use the same features and don’t feel as invested in your product.
- Your mom, best friend, or existing champion customer, should they give a grade? Ethically, I’d tend to say no for obvious reasons. And still…
- Collecting an NPS grade from users via the UI is highly damaging to your user experience. It doesn’t answer any of your users’ problems, and doesn’t belong in the UI. Worse, the popularity of NPS contributes to its doom: it increases the number of users frustrated by this interruption, as they keep being asked the same question in every product they use. They might give a 0 just to click the bugging thing away.
- Not collecting the NPS grade from the UI carries the risk that the user isn’t mentally focused on your product at that moment, and responds inaccurately.
- As much as you want users to update their grade at a regular interval, you can end up with some “spam” from users inadvertently scoring via multiple channels or multiple user accounts.
- The previous point would encourage you to “clean up” your responses database, but this will inevitably introduce further bias and decrease the trust in that metric.
- Most importantly: how many users gave you a grade? Is your sample size statistically relevant? And how do you even calculate what a relevant sample size would be? Should it be an absolute, arbitrary value, or a percentage of your users?
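There’s no NPS-specific answer to the sample-size question, but classic survey statistics offer a usable rule of thumb: the minimum number of responses for a given margin of error, with a finite-population correction. A sketch (the formula, the 95% confidence z-score, and the ±5% margin are standard survey-statistics conventions, not something NPS itself prescribes):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Minimum responses needed for a given margin of error.

    Uses the standard survey formula n0 = z^2 * p(1-p) / e^2, then
    applies a finite-population correction. p=0.5 is the most
    conservative assumption (maximizes the required sample).
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# At 95% confidence and a ±5% margin of error:
print(sample_size(1_000))    # → 278
print(sample_size(100_000))  # → 383
```

The takeaway: the required sample grows much slower than the user base, so “a percentage of your users” is the wrong framing, especially for large products.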
How to set this up
So, if you’ve read this far, you’re probably still interested in collecting that score. Let’s assume you have an actual purpose and can overcome the bias, and let’s see what’s key:
- You want to reach out to a maximum amount of your users;
- You want to give them multiple chances to answer (sometimes, it’s just not the right moment);
- You want to primarily have them answer via your UI (despite the pitfall described earlier);
- You want to offer them the possibility to answer via an email reminder;
- You want them to be able to justify their answer with a free text;
- You want to be able to ask them to refresh their grade after a while;
- You want to be able to get back to them and engage in a conversation;
- You likely want to watch out for confidentiality, GDPR, CCPA, and equivalents;
- You want to be able to segment the results.
In practice, that translates into the following requirements:
1. A tool that has access to all of your user base, with a list of user attributes that will help you segment;
2. A tool that enables you to set up custom communication workflows, so that you can configure reminders, follow-ups, etc., ideally sending messages in-app and via email;
3. A web UI where users are pointed from email reminders, so they can give a grade and a free-text justification;
4. A tool that displays the results and lets you browse through them.
A “no-code” solution
I’ve surveyed the market; there are countless tools that let you collect NPS from your users. In practice, it depends on whether that’s your only need, or whether you’re interested in all the other bells and whistles vendors can sell.
In my specific situation, I had additional constraints. I wanted to set it all up on my own. No bugging the dev team, far too busy with our roadmap. Low budget. No bugging other teams (support, customer success).
We were already using Intercom. It is brilliant at covering points 1 and 2 above.
After benchmarking many of the NPS integrations offered via their marketplace, I selected Survicate. It did almost exactly what I needed, essentially covering points 3 and 4. Back then, the only things I was missing were more flexibility in response segmentation, and a single database collecting both in-app and email responses. I really appreciated that their team took the time to hop on a call with me so I could describe my “problem”. I’m confident they can work on that, if they haven’t yet.
Note that this isn’t entirely “no-code”, as there was already some code in place integrating our product with Intercom.
“Detractor” = opportunity!
As you’ll have gathered, where I find great value in NPS is not in the metric and score itself.
While Promoters should become actual promoters, in the sense that you should encourage them to share their love for your product in any meaningful way (ratings on analysts’ sites, social media, word of mouth,…), I believe that Detractors are an amazing opportunity.
If they legitimately give you a “bad” grade, there’s probably a lesson in it for you. Make sure to personally reach out to all of them, and try to get on a call with as many as possible. Do you have a usability problem? A market-fit problem? Anything else?
The solution “delivered”. Most importantly, it triggered interesting conversations with customers, and we did get promoters to speak out.
Having spent quite a while on the topic, here’s what I can recommend. Don’t reinvent the wheel. Don’t try to do this by developing an in-house tool. Don’t hope you can sort it out with an email campaign and an Excel sheet.
In the build-or-buy question, the answer is obvious. Focus on building your own value.