DYK # 24: Did you know that tNPS is harming the quality of your insights?!
In this month’s “Did you know…” (DYK) we explore a slightly controversial, but nonetheless important, topic: tNPS.
Let’s start at the beginning: what is NPS?
NPS (Net Promoter Score) is one of the top 3 metrics used in CX, alongside CSAT (Customer Satisfaction) and CES (Customer Effort Score).
You’re probably familiar with the well-known question that makes up NPS: “How likely are you to recommend XXX to friends or family?”, answered on a 0-10 scale.
NPS is a holistic, long-term brand health metric that measures loyalty, and determines which customers are brand ambassadors.
It is valuable at a strategic level because it measures brand strength and loyalty, and thanks to its standardized, rigid nature and its wide adoption, it’s a great benchmark metric for companies to compare themselves against the competition.
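For reference, NPS is calculated by bucketing the 0-10 responses into promoters (9-10), passives (7-8) and detractors (0-6), then subtracting the percentage of detractors from the percentage of promoters. Here’s a minimal sketch of that arithmetic in Python (the function name and sample scores are purely illustrative):

```python
def nps(scores):
    """Calculate a Net Promoter Score from a list of 0-10 responses.

    Promoters score 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, giving a value between -100 and +100.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round((promoters - detractors) / len(scores) * 100)

# A mid-range answer, like the "5" in the banking example later on,
# counts as a detractor and drags the overall score down.
print(nps([9, 10, 9, 8, 5]))  # 3 promoters, 1 detractor -> NPS of 40
```

Keep that scoring in mind for the banking example further down: an unsure customer who answers “5” doesn’t register as neutral, they register as a detractor.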
So what is tNPS then?
The “t” in front of the NPS stands for “transactional” (sometimes referred to as touchpoint or post-interaction NPS). It refers to an NPS score collected in a transactional / post-interaction context.
In short, we have different kinds of surveys we can use to gain insights from customers on various aspects of their relationship and interactions with our brand / company.
From brand (market) surveys, to relationship surveys, to journey surveys, all the way to the very specific post-interaction surveys.
Transactional / post-interaction surveys are typically the surveys you receive:
After a call to a call centre
After a store visit / purchase
While browsing online (digital intercepts)
And this is where it becomes interesting.
Problem # 1: The likelihood to recommend a brand isn’t based on a single interaction.
Many organisations use tNPS to track and measure customer feedback after a specific interaction. In effect, we’re asking customers to recommend a brand based on a single interaction.
While this provides us with customer feedback and a number to measure and track (tNPS), it is a somewhat flawed approach, as the likelihood to recommend a brand isn’t based on a single interaction.
The likelihood to recommend a brand is based on a history of interactions, touchpoints, word of mouth, product usage, potentially brand reputation, etc. Together, these form a brand perception and a likelihood to recommend that brand in customers’ minds.
When customers think about recommending a brand, it’s typically the “big ticket items” that drive that recommendation: things like price, quality, convenience, brand reputation, etc. While customer service can be one of those “big ticket items”, a single interaction doesn’t typically determine the likelihood to recommend the brand. It might make it more or less likely though, meaning every interaction has an impact on the overall likelihood to recommend a brand (which is not the same as NPS).
In short: a single interaction can influence, but rarely determines, whether a customer would recommend your brand.
Problem # 2: tNPS is business centric, not customer centric.
Secondly, asking the NPS question in an interaction environment often simply doesn't make sense. It feels out of place and irrelevant to the customer at that point in time. That’s because it’s a business-centric question, not a customer-centric question (more on that in the example below).
The reason many organisations still use tNPS is typically due to a directive from the top to “measure” NPS across the board; unfortunately that includes situations where it doesn't make sense to use NPS.
Problem # 3: tNPS asks 2 things in 1 question, making it hard to provide a score.
In effect, we’re asking two things in one question, which makes it hard to answer. We’re asking:
How did the interaction go?
How likely are you to recommend the brand?
Those two aren't necessarily related, which makes the question difficult to answer.
What we often see happening these days is customers interpreting these silly tNPS questions as “rate your experience”. In that case you get a rating, but that’s not what NPS is.
Many years ago I came across a quote from Fred Reichheld in a Wall Street Journal article that sums it up nicely:
“I had no idea how many people would mess with the score to bend it, to make it serve their selfish objectives”.
We’ve got some examples for you below to illustrate the point.
Example:
I recently did some online banking and, surprise, was presented with a tNPS survey. Curious as I am, I checked it out, which means we can now use it as a post-mortem example.
I actually wasn’t sure how to answer the question, as this particular experience really had nothing to do with me recommending my bank.
Why would I recommend my bank based on a single online transaction?
As mentioned, I think about “big ticket items” when I think about recommending a bank: things such as rates, services, reputation, and perhaps the quality of their app / digital offering in general. Not a single interaction.
The bank would have been much better off by asking me:
How easy it was to complete my online transaction (CES)
How satisfied I was with my online banking experience today (CSAT)
These questions / metrics are relevant to the experience and will provide you with meaningful and actionable insights.
Instead of getting a “5” rating for NPS (as I wasn’t sure how to answer that question), the bank would have learned:
CES - it was very easy for me to complete my online transaction
CSAT - I was happy with my online banking experience
Instead, the bank now thinks I’m a “detractor”, simply because they asked the wrong question.
CX Metrics and KPIs
While NPS is often used as an overall brand health measure, CSAT and CES are more specific measures that can be adapted to a given situation (product reviews, service interactions with call centre agents, website interactions, store visits, etc.), leaving you with more valuable feedback.
The same goes for the contact centre and the post-call survey, with questions such as “Based on the call you had with <insert agent name>, how likely are you to recommend us to friends and family?”. It becomes particularly problematic if you KPI agents on tNPS: they’re not the sole driver of NPS, and their ability to impact a holistic, long-term brand health loyalty metric is limited. And since NPS isn’t measuring the impact that particular conversation with the agent had on your likelihood to recommend the brand, it’s a KPI that’s not suited for contact centre agents.
For another example of the use of tNPS in the call centre environment, check out one of our very first articles from 2020 “VoC – The journey of customer feedback and the origin of the touchpoint NPS oxymoron”.
So here’s a tip for you: Choosing the right metric is important to produce high quality insights.
With that being said, I’d like to ask you:
“Which metrics are you currently using in your organisation, and do the questions you ask actually make sense?”
Why does it matter?
The result: Garbage in - garbage out.
The quality of your customer insights depends on the quality of the data you analyse. Asking the wrong questions and using the wrong metrics will impact your insights quality - even if you have the best technology in place!
A word on unsolicited feedback and the role of metrics.
Especially in the case of the post-call survey, we can now supplement, or even replace, surveys with unsolicited feedback, i.e. analysing the contact centre conversation itself. Instead of sending a post-call survey, we analyse the call itself to gain deeper insights. But what does that mean for your measurement program if you rely on metrics to measure CX performance?
Well, as so often these days, technology assists us in predicting metrics based on the interaction that took place. What’s equally important here is that you choose the right metric to predict! While you can predict tNPS, just as you can ask tNPS in a survey, that doesn’t mean it’s the right metric to use.
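To make that a little more concrete, here’s a minimal sketch of what metric prediction from a call transcript could look like. It’s illustrative only: the off-the-shelf sentiment model and the crude mapping onto a 1-5 CSAT-style scale are assumptions for demonstration, not a production approach.

```python
# Illustrative sketch: predicting a CSAT-style score from a call transcript
# instead of sending a post-call survey. The model and the sentiment-to-score
# mapping below are assumptions, chosen purely for demonstration.
from transformers import pipeline  # pip install transformers

# A generic off-the-shelf sentiment classifier stands in for a purpose-built CX model.
classifier = pipeline("sentiment-analysis")

def predict_csat(transcript: str) -> int:
    """Map classifier sentiment and confidence onto a rough 1-5 CSAT scale."""
    result = classifier(transcript[:512])[0]  # crude truncation of long transcripts
    if result["label"] == "POSITIVE":
        return 5 if result["score"] > 0.9 else 4
    return 1 if result["score"] > 0.9 else 2

call = "The agent sorted out my billing issue in minutes, really friendly and helpful."
print(predict_csat(call))  # e.g. 5 - a satisfied customer, no survey needed
```

The point isn’t the model, it’s the choice of target: predicting CSAT or CES for an interaction keeps the metric aligned with what actually happened on the call, whereas predicting tNPS reproduces the same mismatch we’ve described above.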
Wrap up
A somewhat controversial topic, but hopefully we did a good job explaining how tNPS works, and why you really should stay away from using it!
To be clear, we're not saying don't use NPS. It's a valid metric in our CX measurement toolbox. What we're saying is: we wouldn't recommend using tNPS (😉).
If you find yourself stuck with tNPS because “you’ve always done it this way” and “it’s too ingrained in the business and too hard to change”, come and speak to us. Nothing is worse than continuing with something that doesn’t make sense, and even harms your insights, simply because you’re afraid of a “break in tracking”. We’ve amended two enterprise-level VoC programs to move away from tNPS, even though it was a key KPI across the business. There’s always a way!
As always, for any questions don’t be shy to get in touch with us, we’d love to hear from you and support you on your CX journey 🙏
#CX #VoC #customerfeedback #surveys #metrics #NPS #tNPS #KPI #CXmeasurement