NPS is like the SATs of the business world: get a high score, and you’re bound for a successful future; get a low score, and no one will give you a chance. And much as with the SATs, you have to be well prepared with your NPS survey to get the best results. That’s why knowing NPS survey best practices is essential to getting the most out of your NPS efforts.

An NPS, or Net Promoter Score, is a gauge of how likely your customers are to recommend you to others. It’s become something of a crystal ball in customer service, regarded as one of the best ways to predict a company’s future success. By asking customers a simple “how likely are you to recommend us on a scale of 0 to 10?” survey question, you can get insight into how well your company is doing and how likely it is to succeed in the future.
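For reference, the standard scoring groups responses into promoters (9–10), passives (7–8), and detractors (0–6), then subtracts the percentage of detractors from the percentage of promoters. Here’s a minimal sketch of that calculation (the sample scores are made up):

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Example: 5 promoters, 3 passives, 2 detractors out of 10 responses -> NPS of 30
print(nps([10, 9, 9, 10, 9, 8, 7, 8, 5, 3]))
```

The result ranges from -100 (everyone is a detractor) to +100 (everyone is a promoter).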

It sounds simple, and it is, but it’s also pretty easy to mess up. You need to make sure that you’re asking the right customers at the right time, or your NPS score will have about as much predictive power as a fortune cookie.

I spoke with two companies whose experiences with NPS started off a bit rocky. Below, I’ll go through each case, what they learned, and then outline some NPS survey best practices so that you don’t have the same NPS struggles.

Case #1: Planning Pod

Jeff Kear, co-founder of event management solution Planning Pod, recalls his company’s experience with NPS and how they weren’t surveying some of their most loyal clients.

The story

“When we first started measuring NPS, we sent emails to our users asking them to click on a link to provide their responses to a few survey questions, the first of which was the NPS question.

“We were a bit surprised to find this response to be lower than we expected, especially when measuring it against benchmarks for other software and tech providers. We were especially surprised to find lower NPS ratings from long-time customers, many of whom are still using our product to this day.

“After some deeper digging and phone interviews, we found that our email request was self-selecting people who were not happy with our product (0-6 rating) or mostly happy with our product (7-8 rating) but had a bunch of feedback or suggestions, and that people who loved our product (9-10 rating) were declining to start the survey. We weren’t getting an equitable cross-representation of all our customers, and the ones who were responding didn’t seem to be self-reporting their satisfaction level consistently.”

The challenge

The biggest challenge for Planning Pod was actually getting their most loyal customers to respond. Because their less satisfied customers were more eager to share their opinions than their happiest ones, the results were skewed and not representative of actual customer opinion.

Lessons learned

“What we did was add another question to our survey that we hoped would provide more clarity from these customers. This question is one that companies use to determine product/market fit, and it asks ‘How would you feel if you could no longer use Product Z?’ The answer options are very disappointed, somewhat disappointed, not disappointed, or not applicable. If 40 percent or more of respondents select ‘very disappointed’, then you have a good market fit.

“When asking this question alongside the NPS score, we started getting a better sense of whether our product matched our users’ needs/expectations and what we were getting wrong so that we could address it.

“My takeaway from this was twofold. First, you should present your NPS score in tandem with another metric that gives you a more rounded sense of customer satisfaction and market fit, and second, you should provide a way for customers to respond to NPS in a variety of ways – in app, via email or even over the phone or live.”
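The product/market-fit check Kear describes comes down to a simple share calculation: see whether “very disappointed” accounts for at least 40 percent of responses. A rough sketch of that tally (the response labels and sample data are illustrative, and I’m assuming “not applicable” answers are excluded from the base):

```python
from collections import Counter

def product_market_fit(responses):
    """Return the 'very disappointed' share and whether it clears the 40% bar."""
    counts = Counter(responses)
    # Assumption: "not applicable" answers don't count toward the denominator.
    applicable = sum(n for answer, n in counts.items() if answer != "not applicable")
    share = counts["very disappointed"] / applicable if applicable else 0.0
    return share, share >= 0.40

# Example: 5 of 11 applicable respondents would be very disappointed -> ~45%, a good fit
responses = (["very disappointed"] * 5 + ["somewhat disappointed"] * 4
             + ["not disappointed"] * 2 + ["not applicable"])
print(product_market_fit(responses))
```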

Case #2: Veriquest Partners

Jody Schrant, a former customer experience executive in telecommunications and currently a strategy consultant and executive coach with Veriquest Partners, told us about leadership’s demands around NPS scores and the disparity between NPS and the team’s other customer experience metrics.

The story

“At my company, we conducted monthly randomized NPS surveys in our six primary markets, and we tracked the NPS scores monthly for the enterprise. In any one given month, the NPS might shift upward or downward by anywhere from 1-5 points. If it went down, the senior leadership would demand research into what led to the drop. If it went up, leadership celebrated.

“These same surveys inevitably had substantial differences by our markets, even though our other customer experience metrics like CSAT didn’t follow. In some cases, the market differences varied by 20 points. However, we didn’t note any substantial differences in other operational metrics, or meaningful differences in how the market performed financially.”

The challenge

Veriquest Partners’ challenge was fluctuating NPS scores, both within a single market over time and across different markets, that didn’t align with the other customer experience metrics they were tracking.

Lessons learned

“It took me years to be able to ‘train’ my leaders that a) there was a margin of error and b) that the unusual statistical model of NPS left the margin of error a little worse off. Ultimately, I got them to agree that if month to month, we saw a 3 point or less variance, that it was effectively statistically insignificant. If we saw sustained trend lines showing 3 point variance over time, we had something we needed to research.

“In the end, we were left to speculate that the nature of the NPS question, which predicts future behavior versus grading past experience, had region-specific understanding. In essence, some regions were ‘more friendly’ than others.

“Despite the shortcomings with NPS, with the right guardrails and understanding of its boundaries, the metric can be a very effective measuring stick for a company, particularly when it is used in annualized comparisons against competitors with benchmarking groups.”
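To put rough numbers behind the “margin of error” point Schrant raises: even a few hundred responses leave a month’s NPS with a surprisingly wide confidence interval, which is why a small month-to-month swing is usually noise. A sketch using a simple bootstrap (the simulated sample and response distribution are made up):

```python
import random

def nps(scores):
    """NPS of a sample of 0-10 responses (promoters minus detractors, in points)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def nps_confidence_interval(scores, n_boot=10_000, alpha=0.05):
    """Bootstrap a (1 - alpha) confidence interval for a sample's NPS."""
    boots = sorted(nps(random.choices(scores, k=len(scores))) for _ in range(n_boot))
    return boots[int(n_boot * alpha / 2)], boots[int(n_boot * (1 - alpha / 2)) - 1]

# A simulated month of 400 responses: the 95% interval typically spans roughly
# +/- 8 points around the point estimate, so a 3-point swing is well within noise.
sample = random.choices(range(11), weights=[1] * 6 + [2, 3, 4, 5, 5], k=400)
print(round(nps(sample), 1), nps_confidence_interval(sample))
```

Larger monthly samples tighten the interval, but a rule of thumb like the 3-point threshold Schrant’s team settled on is a practical way to keep leadership from reacting to random variation.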

NPS survey best practices

Planning Pod and Veriquest Partners had different experiences with NPS, but each company realized where it was going wrong and was able to adjust its strategy to get more accurate NPS survey results.

What can we learn about NPS survey best practices from Planning Pod and Veriquest Partners?

1. Measure often

How often you survey your customers has an impact on your NPS scores. As Veriquest Partners realized, monthly surveys may have been too frequent given how much the scores fluctuated, which put greater demands on their reporting.

Measuring often is important because scores can change as your customers move through the buyer journey, but over-measuring can be detrimental to your NPS strategy if you’re trying to find consistency. A regular cadence, such as quarterly, will show more natural change in scores.

2. Timing is everything

When you send your NPS survey can make all the difference. Send it too soon and your customers haven’t had a chance to experience your product; send it too late and the experience will no longer be fresh in their minds. Finding the right moment in your buyer journey to send your NPS survey, based on triggers like purchase time or frequency, will help you get a more accurate NPS score.

3. Run the survey in multiple places

As Planning Pod realized, offering your NPS survey in more than one place also makes a difference. The two most common channels for NPS surveys are in-app and email. Depending on where your customers are in the buyer journey, one or the other may be the better place to ask. Multiple channels also make it easier to follow up with customers who haven’t answered in one of them, covering more of your bases by reaching customers in their preferred setting.

4. Account for cultural differences

Cultural differences don’t stop at food or fashion choices; they extend to the NPS survey too. As Veriquest Partners noted, scores varied across markets even though other indicators, including financial performance, showed no meaningful differences.

In some cultures, a score of 7 might represent the same sentiment as a score of 9 in another. Knowing your audience and how they interpret the NPS scale will help you make better judgments about your customers’ level of satisfaction.

5. Set your own benchmarks

There’s no such thing as one “good” NPS score. Industry benchmarks are a useful starting point, but take them with a grain of salt, since you don’t know the survey conditions behind another company’s NPS scores. The best NPS benchmark you can set is against yourself.

Look at your historical NPS scores to see how they change over time; that will give you the best indicator of how well your company is performing. As both Planning Pod and Veriquest Partners learned, it’s also good to look at NPS scores in context alongside other metrics and results, rather than as a standalone indicator of performance.

Nail your NPS

NPS may not be a crystal ball for your business, but it does give you a good idea of how well you’re doing now and how well you’re likely to do in the future. By following a few best practices, you can get an accurate NPS score that reflects your company’s current and future performance. These include:

  • Finding the right cadence for sending your NPS survey.
  • Sending your NPS survey at the right time during the purchasing journey.
  • Distributing your survey in multiple places, especially via email and in-app.
  • Accounting for cultural differences when comparing NPS scores across regions or markets.
  • Setting benchmarks against your own company to get the best idea of how you are improving over time.

If you’re ready to get started with NPS: