The co-existence of chatbots and live agents has long been a debated topic. To settle the question, we recently interviewed 13 chatbot experts, and the findings were clearly in favor of a bot+human hybrid model.
Experts believe that despite advances in AI and NLP, chatbots are still quite primitive. They are only as intelligent as they are programmed to be, so there will be moments when a bot stumbles. This is where a human agent’s empathy and judgment are required.
So in a bot+human hybrid solution, chatbots are your first line of defense, and human agents come into play when that line is breached. For a hybrid solution like ours, it becomes critical to ensure that the user experiences a smooth transition from the bot to the human agent.
Not getting the right balance will frustrate your users and when the stakes are high it might just spell doom.
To safeguard your users from such a scenario, the system should know when to hand over a conversation to an agent.
As mentioned earlier chatbots are only as smart as they are programmed to be. So there will be cases where the user queries will be out of a chatbot’s scope.
In such cases, the conversation should be handed over to a human agent. For that, the bot must be smart enough to recognize the limits of its capabilities and suggest an alternative. This is where the majority of chatbots fail. Often you will find bots asking the same question again and again, even after repeatedly failing to understand the intent.
Which of these bots would you prefer to chat with? The second one, I presume, as it gives the option to switch to a human agent whenever it fails.
Initiating chatbot human handoff from a user-driven menu is perhaps the simplest and safest technique. For this, you can program your bot to provide the user with a menu of predefined options after every message.
Every menu typically has a set of tasks that the bot can handle independently and an additional option to “Chat with an agent”. This kind of hand-off doesn’t require AI or NLP and the bot simply transfers control to a human agent, whenever the option to “Chat with an agent” is selected.
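Because this style of handoff needs no AI at all, it can be sketched as a simple lookup. The option labels and return values below are hypothetical:

```python
# Minimal sketch of a menu-driven handoff; all labels are hypothetical.
MENU_OPTIONS = [
    "Track my order",
    "Reset my password",
    "Chat with an agent",   # the escape hatch to a human
]

def handle_menu_selection(selection: str) -> str:
    """Bot handles its predefined tasks; the agent option transfers control."""
    if selection == "Chat with an agent":
        return "handoff_to_agent"
    return "bot_handles"
```

Because the options are fixed, the bot never has to guess the user’s intent; it only ever routes a known selection.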
The drawback here is that there will be times when the user wants to chat with an agent for queries the bot could have handled. Consequently, the basic purpose of having a bot, freeing up an agent’s time for complex queries, is defeated.
Natural language understanding and sentiment analysis have made chatbots smart enough to infer the mood of the user. This can be helpful in understanding whether the conversation is going the right way.
So whenever the chatbot senses that the user is edging toward frustration, it can simply slip in “Chat with a human agent” as an option. The user can then select it if they believe the chatbot is incapable of solving their problem.
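As a rough sketch of this idea, the scorer below is a toy keyword counter standing in for a real sentiment-analysis service, and the threshold and names are hypothetical:

```python
# Hypothetical sketch: offer a handoff option once sentiment drops
# below a threshold. score_sentiment is a toy stand-in for an NLP API.
FRUSTRATION_THRESHOLD = -0.4

def score_sentiment(message: str) -> float:
    """Toy scorer: counts negative keywords; a real bot would call an NLP service."""
    negative = {"useless", "frustrated", "wrong", "again", "terrible"}
    hits = sum(word.strip(".,!?") in negative for word in message.lower().split())
    return -0.3 * hits

def reply_options(message: str) -> list[str]:
    """Add the human-agent option only when the user sounds frustrated."""
    options = ["Rephrase my question"]
    if score_sentiment(message) <= FRUSTRATION_THRESHOLD:
        options.append("Chat with a human agent")
    return options
```

The key design point is that the handoff option appears only when needed, so calm users keep interacting with the bot alone.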
Not all support issues are of the same importance, and they should be handled accordingly. The best way to facilitate this is to train the bot to understand the criticality of the situation based on business jargon in the conversation.
For example, look at the illustration below:
In both cases, the triggering keyword is “server failure”. In the first case, the user simply wants information about the precautionary measures in case of a server failure, while in the second case the user wants to report the occurrence of a server failure.
It makes sense for a chatbot to handle the first case by providing contextual answers, but in the second case, it’s best if it is handed over to a human agent. An intelligent bot will be able to differentiate between the two scenarios and act accordingly.
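One minimal way to sketch this differentiation is to pair the triggering keyword with phrases that signal a live incident. The marker phrases and return values below are illustrative assumptions, not a real intent model:

```python
# Sketch (hypothetical names): telling an informational query apart from
# an incident report when both mention the keyword "server failure".
REPORT_MARKERS = {"is down", "has failed", "we are facing", "happening now"}

def route(message: str) -> str:
    """Escalate live incidents; let the bot answer informational queries."""
    text = message.lower()
    if "server failure" in text and any(m in text for m in REPORT_MARKERS):
        return "handoff_to_agent"   # an incident is being reported right now
    return "bot_answers"            # informational: bot shares precautions
```

A production bot would use trained intents rather than substring matching, but the routing decision has the same two-way shape.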
Even the most experienced support agent often requires a second opinion and your chatbot is no different. So it makes sense to allow human agents to monitor chatbot conversations esp. for complex issues. This collaboration can be most useful for technical troubleshooting.
For instance, consider a scenario where a user has contacted the support desk of an electronics company to report a malfunctioning device. Advanced machine learning models help the bot determine the likely cause and suggest the best possible resolution. However, it makes sense to cross-check this with a human agent before suggesting it to the customer.
An internal monitoring system where a chatbot can seek guidance and authorization from a human agent whenever the confidence level is low can be very useful in such cases. The agent simply needs to review the proposed solution and give a go-ahead by clicking a button. So the bot is still doing most of the task but the final authority remains with the human agent.
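The review loop above can be sketched as a confidence gate. The threshold is an arbitrary assumption, and `agent_approves` is a hypothetical callback standing in for the agent’s go-ahead click:

```python
# Sketch of an internal review loop: the bot proposes a fix and a human
# approves it when model confidence is low. All names are hypothetical.
CONFIDENCE_THRESHOLD = 0.85   # assumed cut-off, tuned per deployment

def deliver_resolution(proposed_fix: str, confidence: float, agent_approves) -> str:
    """Send the fix directly when confident; otherwise ask an agent first."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return proposed_fix                  # bot is sure enough to answer alone
    if agent_approves(proposed_fix):         # agent reviews and clicks "go ahead"
        return proposed_fix
    return "Connecting you to a support agent..."
```

The bot still does most of the work, but the final authority on low-confidence answers stays with the human.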
The preceding section covered some of the scenarios where helpdesk bots need to hand off support tickets to human agents. This brings us to the next question: “How should a bot handle this transition?”
It is important for the user’s overall experience that the handoff is seamless. Remember, the user is probably already edging toward frustration simply by having to contact your support, and you wouldn’t want to add to that.
For a better understanding of how to nail the chatbot to human handoff process let’s break it down into 3 phases and discuss the best practices of each:
This is the first phase, where any one of the above scenarios has arisen and the bot needs to transition control to the human agent.
This phase involves two elements:
Handoff trigger: As discussed briefly in the first section, the bot needs to understand the limitations of its abilities. Whenever the user query is out of scope for a bot, the handoff trigger should be activated, and the bot should provide the user an option to “Chat with a human agent”.
The best practice of presenting this option is by means of an actionable button, which the user can simply click and get connected to a human agent.
Another approach is for the bot to ask the user whether they would like to be connected to a human agent, and then act based on their response.
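A common way to activate the trigger described above is a fallback counter: after a few consecutive failures to understand the user, the bot offers the agent button instead of asking the same question again. The limit and labels below are hypothetical:

```python
# Sketch of a fallback-based handoff trigger. After MAX_FALLBACKS
# consecutive misses, offer the "Chat with a human agent" button.
MAX_FALLBACKS = 2   # assumed limit; tune per bot

def next_reply(understood: bool, fallback_count: int) -> tuple[str, int]:
    """Return the bot's next action and the updated fallback counter."""
    if understood:
        return ("bot_answer", 0)             # reset the counter on success
    fallback_count += 1
    if fallback_count >= MAX_FALLBACKS:
        return ("offer_agent_button", fallback_count)
    return ("ask_to_rephrase", fallback_count)
```

This avoids the failure mode noted earlier, where a bot keeps repeating the same question after failing to grasp the intent.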
Acknowledgment: It is also important to let the user know about the transition while it is happening. This will set the expectation that while the handoff is in progress the ticket is unassigned and questions sent during that time will only be answered when an agent is assigned.
This is the intermediary phase where the support ticket is put in a queue. During this phase, it is very important to manage user expectations, as the wait time might be long.
A good practice is to display the user’s position in the queue and to answer all incoming questions during that time with a default response like “waiting in queue”. Another good-to-have feature is presenting the user with an option to either keep waiting or raise the issue over email when the wait takes longer than expected.
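Both practices can be sketched with a simple queue wrapper. The class, threshold, and messages are hypothetical:

```python
# Sketch of the wait phase: show queue position, and offer an email
# fallback once the wait exceeds a threshold. Names are hypothetical.
from collections import deque

MAX_WAIT_SECONDS = 300   # assumed cut-off before offering email

class SupportQueue:
    def __init__(self):
        self._tickets = deque()

    def enqueue(self, ticket_id: str) -> int:
        """Add a ticket and return its 1-based position in the queue."""
        self._tickets.append(ticket_id)
        return len(self._tickets)

    def status_message(self, ticket_id: str, waited_seconds: int) -> str:
        """Default reply sent while the user is waiting for an agent."""
        if waited_seconds > MAX_WAIT_SECONDS:
            return "Still waiting. Would you like to keep waiting or email us instead?"
        position = list(self._tickets).index(ticket_id) + 1
        return f"You are number {position} in the queue."
```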
The other aspect to consider during the wait phase is specifying how agents will be assigned to the waiting users. Typically there are two ways to handle this:
First In First Out (Round Robin): This is a rather simple model where the first ticket in the queue is assigned to the next available agent; that is, tickets are assigned in the order in which they are filed.
Contextual Assignment: This kind of assignment involves more complex logic and assigns agents based on contextual factors. The factors can be anything, e.g. the geography or the language of the user. At Applozic we use the nature of the query as a factor, i.e. based on the platform type (Android, iOS, web, etc.) we assign the ticket to the right support engineer.
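The two models can be contrasted in a few lines. The platform-to-engineer map mirrors the Applozic example, but every name below is hypothetical:

```python
# Sketch contrasting the two assignment models; all names are hypothetical.
from collections import deque

def assign_fifo(queue: deque, available_agent: str) -> tuple[str, str]:
    """First In First Out: the oldest ticket goes to the next free agent."""
    ticket = queue.popleft()
    return (ticket["id"], available_agent)

PLATFORM_EXPERTS = {"android": "eng_a", "ios": "eng_b", "web": "eng_c"}

def assign_contextual(ticket: dict) -> str:
    """Contextual assignment: route by a factor such as platform type."""
    return PLATFORM_EXPERTS.get(ticket["platform"], "general_support")
```

FIFO is trivial to operate but ignores expertise; contextual assignment costs more logic but lands the ticket with the right specialist.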
The final phase in the chatbot human handoff process is when a human agent finally takes over the conversation from the bot. This phase is the most neglected of the three and is often overlooked when designing the handoff flow.
It is important to understand that while the human agent has just joined the conversation, the user has been in it for a while and has already shared a lot of information. The user, therefore, won’t be willing to go through the same process again and answer repetitive questions.
A well-designed handoff flow ensures that the bot shares the entire conversation transcript with the agent. This can be done in the form of text messages in the agent’s chat window or can also be sent as an attachment via email. We recommend the former as it is more real-time.
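Sharing the transcript can be as simple as flattening the message history into the agent’s chat window on takeover. The structures below are hypothetical:

```python
# Sketch of handing the full transcript to the agent on takeover so the
# user never has to repeat themselves. Message shape is hypothetical.
def build_handoff_context(transcript: list[dict]) -> str:
    """Flatten the bot conversation into text the agent sees on takeover."""
    lines = [f"[{m['sender']}] {m['text']}" for m in transcript]
    return "--- Conversation so far ---\n" + "\n".join(lines)
```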
It’s not enough to just have a process in place to handle the chatbot human handoff effectively. It is equally important to have a feedback loop in place to identify scope for improvement and keep human involvement to a bare minimum.
The first step towards analyzing any problem is to trace it back to its origin. In the case of bot-human handoff, that is the point where the user requested a human agent. So being able to trace each call for human help back to the conversation step in which it occurred can be incredibly helpful.
It will help you look for trends in order to understand the likely cause. Perhaps there are questions that the bot is not able to respond to or an answer which is confusing for the user to comprehend. Another likely reason can be the user might have to go through too many conversation points before reaching the desired solution. An in-depth analysis can help you uncover many such reasons.
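A minimal version of this feedback loop just records the conversation step at which each handoff occurred and counts the hotspots. The data shapes and names are hypothetical:

```python
# Sketch of a handoff-analytics feedback loop; names are hypothetical.
from collections import Counter

handoff_log: list[dict] = []

def record_handoff(conversation_id: str, step: str) -> None:
    """Log the conversation step at which a user asked for a human."""
    handoff_log.append({"conversation": conversation_id, "step": step})

def handoff_hotspots() -> list[tuple[str, int]]:
    """Steps that most often precede a request for a human, worst first."""
    return Counter(entry["step"] for entry in handoff_log).most_common()
```

Sorting steps by handoff frequency points you straight at the questions or answers in the flow that most need a conversational redesign.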
The findings will help you improve the conversational design of the chatbot. This will be a step towards reducing the frequency at which your users require a human agent. After all, isn’t that the reason for building a bot?
There are two assignment methods, as described in the Wait Phase best practices section above. You can enable both using Kommunicate.
There are two methods to enable the bot to human handoff in Kommunicate. You can either do it from code or enable it while creating a bot from the UI itself. While creating your bot in Kommunicate, you will be asked to enable the bot to human handoff in the last step.
The second way is to add a small code snippet. The sample below shows the chatbot human handoff method for Dialogflow bots integrated with Kommunicate.
- Set the action input.unknown in Dialogflow. Dialogflow automatically adds this action to the response if the default fallback intent is enabled. This means that whenever the fallback intent is triggered, the conversation will be assigned to an agent, based on your preferred conversation routing settings.
- Assign the conversation to an agent when a specific intent is detected. Set the JSON below as the custom payload in Dialogflow, specifying the agent’s email ID in the KM_ASSIGN_TO parameter. If KM_ASSIGN_TO is left empty, the conversation routing rules will be applied.
"message": "Our agents will get back to you", //optional
"KM_ASSIGN_TO": "agent's userId" // pass empty string to use conversation routing rules.