Recently, we at Hubert.ai came in contact with a large tour operator who regularly asked their customers to rate them in three areas on an NPS scale, followed by an open-text question asking them to motivate the score.
After several years of this practice, they had collected a huge dataset of responses. The problem they had started to see was that a large chunk of the responses (31%, in fact) were non-informative, and they asked us to look at how they could get something more useful from their hard-earned survey participants using our chatbot Hubert.
“I mean, if we are already bugging them to get their feedback, we should really give it our best efforts to extract something useful out of it.”
For example, this is a set of characteristic motivations from promoters who responded 9 or 10 in the NPS:
“Everything was OK”
“Exceeded my expectations”
“Everything was good”
The same pattern, but reversed, appears in the detractors' (0–6) motivations:
“All the hassle”
“The arrival sucked”
“Lousy customer focus”
“Nothing was good”
Similarly, many comments combine something useful with something pretty vague:
“The hotel was incredible, but the food was substandard. Weird-tasting and poorly cooked. I heard about multiple cases of food poisoning while I was visiting”
“Overall my experience was bad but the included trips were really interesting. we got to know our destination in a whole new way”
In between the extremes, some comments were plain confusing when correlated with the NPS score:
“Good hotel” (NPS = 4)
“Something that really sucked was that you couldn’t charge your phone during the flight” (NPS = 3)
“Very nice personnel” (NPS = 6)
These kinds of responses are common among companies that collect customer feedback with traditional surveys. Thousands upon thousands of interactions produce nothing actionable, usable or helpful. And that’s a shame.
How we cut those non-informative responses from 31% to 4%
At Hubert.ai, our approach when it comes to feedback collection is that it should be effective, engaging and relevant – but above all smart.
We’ve integrated a real-time analysis engine into Hubert that bridges the gap between quantitative and qualitative data. Just like the human interview facilitator that he tries to mimic, Hubert continuously interprets incoming responses and adapts the conversation accordingly.
Our client was thrilled when we showed how Hubert handles the kind of non-informative data that they had become accustomed to.
To start with, it’s important to know what you are doing well in order to steer resources to the right area. We’ve trained Hubert meticulously to dig out the specific areas that respondents found especially good.
In this example, Hubert identified that the initial “Everything was OK” comment didn’t contain much valuable information and dynamically inserted a relevant follow-up question to dig deeper.
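To picture this step, here is a toy sketch of such a check. This is not Hubert’s actual engine (which is a trained model, not a rule list); the generic phrases, topic keywords and follow-up wording below are invented for illustration only.

```python
# Toy sketch: flag a comment as non-informative when it matches a
# generic phrase or names no concrete topic, then pick a follow-up
# question. Phrase list and keywords are invented for illustration.
import re

GENERIC_PATTERNS = [
    r"^everything was (ok|okay|good|great|fine)\W*$",
    r"^nothing was good\W*$",
    r"^(exceeded|met) my expectations\W*$",
]

TOPIC_KEYWORDS = {"hotel", "food", "flight", "staff", "trip", "arrival", "booking"}

def is_non_informative(comment: str) -> bool:
    text = comment.strip().lower()
    if any(re.match(p, text) for p in GENERIC_PATTERNS):
        return True
    # No recognizable topic mentioned -> nothing to act on.
    words = set(re.findall(r"[a-z]+", text))
    return not (words & TOPIC_KEYWORDS)

def follow_up(comment: str, score: int) -> "str | None":
    """Return a follow-up question, or None if the comment is usable."""
    if not is_non_informative(comment):
        return None
    if score >= 9:
        return "Great to hear! What did we do especially well?"
    return "Sorry to hear that. What specifically could we improve?"
```

With rules like these, “Everything was OK” (NPS 9) would get the promoter follow-up, while a comment that already names a topic, such as “The food was substandard”, would pass through untouched.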
Among our initial examples, we had some comments containing both vague statements and something valuable. This is how Hubert can handle such events:
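A rough illustration of that splitting step (again, a stand-in for Hubert’s trained classifier, with an invented keyword list): break the comment into sentences, keep the concrete parts, and mark the vague ones for a follow-up.

```python
# Toy sketch: split a mixed comment into sentences; a sentence is
# treated as "concrete" if it names a topic keyword, "vague" otherwise.
# The keyword set is an invented stand-in for a trained classifier.
import re

TOPIC_KEYWORDS = {"hotel", "food", "flight", "staff", "trips", "arrival"}

def split_feedback(comment: str) -> "tuple[list[str], list[str]]":
    sentences = [s.strip() for s in re.split(r"[.!?]+", comment) if s.strip()]
    concrete, vague = [], []
    for s in sentences:
        words = set(re.findall(r"[a-z]+", s.lower()))
        (concrete if words & TOPIC_KEYWORDS else vague).append(s)
    return concrete, vague
```

Applied to the second example above, the sentence mentioning the trips would be kept as concrete feedback, while “we got to know our destination in a whole new way” would be flagged as vague and worth a follow-up.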
Hubert’s real-time analysis is also used to weigh open-ended responses against NPS scores.
Motivating a very weak NPS score with something trivial, as in the case above, triggers Hubert to ask for additional reasons behind the low score.
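In the simplest terms, this mismatch check compares the tone of the comment with the score and asks for more detail when they disagree. The sketch below uses tiny hand-made word lists in place of real sentiment analysis, and the follow-up wording is invented:

```python
# Toy sketch: compare comment tone with the NPS score and follow up
# when they disagree. The word lists stand in for real sentiment
# analysis and are invented for illustration.
POSITIVE = {"good", "nice", "great", "incredible", "interesting"}
NEGATIVE = {"bad", "sucked", "lousy", "hassle", "substandard"}

def tone(comment: str) -> int:
    words = [w.strip(".,!?") for w in comment.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def mismatch_follow_up(comment: str, score: int) -> "str | None":
    t = tone(comment)
    if score <= 6 and t > 0:  # detractor score, positive-sounding comment
        return "Thanks! Given your score, was there something that let you down?"
    if score >= 9 and t < 0:  # promoter score, negative-sounding comment
        return "Glad you'd recommend us - what made up for the issues you mention?"
    return None  # score and tone agree; no extra question needed
```

Under these rules, “Very nice personnel” with an NPS of 6 or “Good hotel” with an NPS of 4 would each trigger a request for the real reason behind the low score.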