In the first part of the series we showed how to define intents and capture dialog attributes using AWS Lex and AWS Lambda functions (https://smartlake.ch/onboarding-virtual-assistant-for-banking-behind-the-scene-part-i/). We will now dive a little deeper into the mechanism we used to recommend products based on the user's answers.
There are multiple ways to implement product recommendations; see a previous post on the topic: https://smartlake.ch/personalizing-client-interaction-in-financial-services/. For this experiment we will use a simple classifier trained on a few hand-crafted examples.
The classifier is implemented using Microsoft Azure Machine Learning Studio. We will publish a trained model from ML Studio through an API and call it from AWS. It is not the most efficient setup, but it is a fun way to show the interaction of multiple cloud services.
Many virtual assistants are merely question-and-answer systems. They do not keep the context and hence cannot follow a structured dialogue. Most assistants also cannot branch from one topic to another; as a result, the assistant asks the user to repeat the same answers again and again.
Finite state machine
A full dialog design is required to be able to jump from one topic to another, since natural conversations rarely follow a sequential path. A useful way to design a dialog is with a transition state diagram or table, as we discussed in an earlier post: https://smartlake.ch/conversational-assistants-in-banking-designing-flexible-dialogues/. Designing the dialog properly and implementing it as a finite state machine becomes particularly important for complex dialog assistants: users can jump from topic to topic, and the assistant has a defined state for every user action.
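As an illustration, such a transition table can be sketched as a dictionary mapping (current state, user intent) pairs to the next state. The state and intent names below are invented for the example, not taken from the actual bot:

```python
# Hypothetical transition table: (current_state, user_intent) -> next_state.
TRANSITIONS = {
    ("start", "OpenAccount"): "collect_residence",
    ("collect_residence", "ProvideCountry"): "collect_risk_profile",
    ("collect_risk_profile", "ProvideRiskProfile"): "recommend_product",
    # Users may jump topics mid-dialog, e.g. ask about fees, then resume.
    ("collect_risk_profile", "AskFees"): "answer_fees",
    ("answer_fees", "ResumeOnboarding"): "collect_risk_profile",
}

def next_state(state, intent):
    """Return the next dialog state; stay put if the transition is undefined."""
    return TRANSITIONS.get((state, intent), state)
```

Because undefined transitions leave the state unchanged, an unexpected intent never crashes the dialog; the assistant simply stays where it is and can re-prompt.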
Implementing a finite state machine and keeping the context can be done in different ways in AWS Lambda. In our case we implemented it with simple if-then statements: we check the user's intent and, depending on the information already provided, either proceed or ask for more.
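A minimal sketch of that if-then approach might look as follows. The intent, slot, and attribute names are illustrative placeholders, not the actual ones from the bot:

```python
def dispatch(intent_name, session_attributes):
    """Route the user's intent, asking for missing information before proceeding.

    `session_attributes` is the dict of information collected so far in the
    dialog (hypothetical keys for illustration).
    """
    if intent_name == "RecommendProduct":
        # Ask for whichever required attribute is still missing.
        if "countryOfResidence" not in session_attributes:
            return {"action": "ElicitSlot", "slot": "countryOfResidence"}
        if "riskProfile" not in session_attributes:
            return {"action": "ElicitSlot", "slot": "riskProfile"}
        # All required attributes collected: we can call the recommender.
        return {"action": "Close", "message": "Let me find a product for you."}
    # Unknown intent: ask the user what they want to do.
    return {"action": "ElicitIntent"}
```

The ordering of the checks encodes the dialog flow, while the session attributes make it resumable: a user who jumps away and comes back is only asked for what is still missing.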
AWS Lex provides a way to keep the context and pass it back and forth between the chatbot and the Lambda function. No data is stored in the Lambda by design (a great advantage for data privacy). You can read a full description here: https://docs.aws.amazon.com/lex/latest/dg/context-mgmt.html. In our experiment, we keep a number of attributes.
In the code extract below, we check whether the country of residence has been identified and, if so, add it to the dictionary of session attributes under the key 'countryOfResidence'.
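Conceptually, that extract does something like the following. This is a hedged reconstruction using the Lex (V1-style) event format, not the actual code:

```python
def lambda_handler(event, context):
    """Sketch: persist the country of residence across dialog turns."""
    # Lex passes the previously collected attributes with every request;
    # reuse them so the user never has to repeat an answer.
    session_attributes = event.get("sessionAttributes") or {}
    slots = event["currentIntent"]["slots"]

    # If the user just provided the country of residence, remember it.
    country = slots.get("countryOfResidence")
    if country:
        session_attributes["countryOfResidence"] = country

    # Return the attributes so Lex carries them into the next turn;
    # "Delegate" lets Lex decide the next dialog action.
    return {
        "sessionAttributes": session_attributes,
        "dialogAction": {"type": "Delegate", "slots": slots},
    }
```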
Since we have no real data, we defined a few training examples for our classifier that we can use to bootstrap the model:
The training data is then fed into an Azure ML model using a multiclass random forest classifier.
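In Azure ML Studio this is assembled from drag-and-drop modules, but as a rough local equivalent the same idea can be sketched with scikit-learn. The feature values and product names below are invented for illustration, not the actual training set:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OrdinalEncoder

# Hand-crafted examples: (country, risk profile, horizon) -> product label.
X_raw = [
    ["CH", "low",    "short"],
    ["CH", "high",   "long"],
    ["CH", "medium", "long"],
    ["DE", "low",    "short"],
    ["DE", "high",   "long"],
    ["DE", "medium", "long"],
]
y = ["savings_account", "equity_fund", "balanced_fund",
     "savings_account", "equity_fund", "bond_fund"]

# Encode the categorical features as integers before training.
encoder = OrdinalEncoder()
X = encoder.fit_transform(X_raw)

# Multiclass random forest, analogous to the Azure ML Studio module.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Class probabilities for a new client profile.
probs = model.predict_proba(encoder.transform([["CH", "high", "long"]]))[0]
```

`predict_proba` returns one probability per product class, which is exactly the shape of output the chatbot needs for a recommendation.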
Once trained, the model is then exposed through a web service that can be called by our AWS Lambda later on:
We can now integrate the web service call into our AWS Lambda and fill in the API parameters with the dialog context as shown below:
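Sketched in Python with only the standard library, the call could look like this. The endpoint URL, API key, and feature names are placeholders, and the payload follows the classic Azure ML Studio request schema:

```python
import json
import urllib.request

# Placeholders; the real values come from the Azure ML Studio web service page.
API_URL = "https://example.azureml.net/.../execute?api-version=2.0"
API_KEY = "your-api-key"

def build_payload(attrs):
    """Map the dialog context (Lex session attributes) onto the model inputs.

    The "Inputs"/"ColumnNames"/"Values" shape follows the classic Azure ML
    Studio request schema; the column names themselves are illustrative.
    """
    return {
        "Inputs": {
            "input1": {
                "ColumnNames": ["countryOfResidence", "riskProfile", "horizon"],
                "Values": [[attrs.get("countryOfResidence", ""),
                            attrs.get("riskProfile", ""),
                            attrs.get("horizon", "")]],
            }
        },
        "GlobalParameters": {},
    }

def score(attrs):
    """POST the dialog context to the web service and return the parsed reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(attrs)).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_KEY},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Keeping the payload construction in its own function makes it easy to unit-test without hitting the network.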
We then get a list of possible products with their corresponding probabilities and choose the product with the highest probability to propose to the user:
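That selection step is just an argmax over the returned scores; a minimal sketch with invented product names:

```python
def pick_best(probabilities):
    """Return the product with the highest predicted probability.

    `probabilities` maps product names to scores, e.g. as parsed from the
    web service response (names below are illustrative).
    """
    return max(probabilities, key=probabilities.get)

scores = {"savings_account": 0.10, "equity_fund": 0.25, "balanced_fund": 0.08}
best = pick_best(scores)  # "equity_fund"
```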
The recommended product has a 25% probability of suiting the potential client's preferences.
This is better than a random pick, which would be around 7% (1 in 15), as we have 15 possible products in the list.
The final result displayed in a test user interface looks like this:
The current implementation of the model is not incremental and therefore needs to be retrained with new data to become more accurate. Unfortunately, that would mean storing the data in order to reuse it for training. To circumvent that, we will show in the next article how to implement the model using incremental (also called online) learning.