We are almost there! We have a basic server configuration with Express and Node, and we have managed to get the user's message and coordinates. All that's left now is to connect Wit, Node, and the Recommendation System we developed in part 1.
Let's go! 💪🏻
Integrating the Chatbot from Wit
If you remember well, we were able to bootstrap a Chatbot with Wit in part 1, capable of recognizing various intents.
More visually, this is the part we will dissect:
In the code snippet above, we use the sendMessage function from the wit.js module, and we will also need the extractEntity function from part 1. 🧰
Let’s interact with the two functions exported from the wit.js module:
A few things to note:
- Once again, we make use of environment variables to access our Wit console from Express
- We create a client for accessing Wit with the help of the node-wit package, and build our wrapper function around it
- We use the same extractEntity function to select intents only when the chatbot is at least 80% sure about them
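Since the original snippet is shown as an image, here is a minimal sketch of what wit.js could look like; the environment-variable name, the Wit payload shape, and the helper structure are assumptions based on the description above:

```javascript
// wit.js — a minimal sketch; the token variable name and payload shape
// are assumptions, only the 80% threshold comes from the article
const CONFIDENCE_THRESHOLD = 0.8;

// Send a raw sentence to Wit through the node-wit client
function sendMessage(text) {
  const { Wit } = require('node-wit'); // client created per call for simplicity
  const client = new Wit({ accessToken: process.env.WIT_ACCESS_TOKEN });
  return client.message(text, {});
}

// Keep an entity's top candidate only when Wit is at least 80% confident
function extractEntity(witData, name) {
  const candidates = (witData.entities && witData.entities[name]) || [];
  const best = candidates[0];
  return best && best.confidence >= CONFIDENCE_THRESHOLD ? best.value : null;
}

module.exports = { sendMessage, extractEntity };
```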
Let’s recall the intents we had defined with Wit:
We can split them into two important groups: the ones that can be answered easily with pre-made response templates, and the ones that require a response from the Yelp API.
Looking back at the global messageController code, we can now follow the comments we have set up, and extract the intents:
All of the answers are handled and processed in the request module, which we will discuss right after. 👀
Oof, there is a lot going on here; let's dive in:
- If the bot didn’t understand the user’s query or if we can reply with a quick response, we handle it directly with the request module.
- Otherwise, we get the user’s location with a custom module we’ve built on top of Google Maps (because some queries need it) and proceed to call the Yelp API, but only if the bot didn’t recognize a ‘Recommend’ intent.
- In any case, we send back the data to the caller with the send function from Express
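The three branches above can be sketched as follows; the quick intents listed here and the helper modules other than wit.js and request are assumptions for illustration:

```javascript
// A sketch of the routing logic described above. The quick intents and the
// './maps' helper are illustrative assumptions, not the article's exact code.
const QUICK_INTENTS = ['Greetings', 'Thanks', 'Goodbye'];

// Decide which of the three branches a detected intent falls into
function routeIntent(intent) {
  if (!intent || QUICK_INTENTS.includes(intent)) return 'quick';
  if (intent === 'Recommend') return 'recommend';
  return 'yelp';
}

// messageController: glue between Express, Wit, and the request module
async function messageController(req, res) {
  const { sendMessage, extractEntity } = require('./wit');
  const request = require('./request');

  const witData = await sendMessage(req.body.message);
  const intent = extractEntity(witData, 'intent');

  switch (routeIntent(intent)) {
    case 'quick': // misunderstood query or simple intent: answer directly
      return res.send(request.quickResponse(intent));
    case 'recommend': // handed over to the Flask Recommendation System
      return res.send(await request.recommend(req.body));
    default: { // everything else goes through Google Maps + Yelp
      const location = await require('./maps').locate(req.body.coordinates);
      return res.send(await request.fromYelp(intent, witData, location));
    }
  }
}
```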
Now let’s look at the request module 👨🏻💻
For each simple intent that we have, we generate a quick pseudo-random response, and send it back to the caller.
Yes, pseudo-random, meaning we have predefined responses for each simple intent, randomly select one, and add a GIF to the response.
Here are some sample responses for each intent:
Adding GIFs to the chatbot's responses makes them stand out more and feel more natural.
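A minimal sketch of how such pseudo-random responses could be assembled; the canned texts and GIF URLs below are placeholders, not the real ones:

```javascript
// Pseudo-random quick responses: predefined texts + a GIF per simple intent.
// All texts and URLs here are placeholder examples.
const RESPONSES = {
  Greetings: ['Hello! 👋', 'Hi there!', 'Hey, nice to see you!'],
  Thanks: ["You're welcome!", 'Anytime!', 'My pleasure 😊'],
};

const GIFS = {
  Greetings: ['https://example.com/wave.gif'],
  Thanks: ['https://example.com/thumbs-up.gif'],
};

// Pick one element of a list at random
const pick = (list) => list[Math.floor(Math.random() * list.length)];

// Build the payload sent back for a simple intent
function quickResponse(intent) {
  return { text: pick(RESPONSES[intent]), gif: pick(GIFS[intent]) };
}
```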
For the queries requiring a Yelp API call, we use GraphQL to query the appropriate endpoints, depending on the intent:
Each query is constructed with its own specification; let's see an example for the search intent 🔎
And then we simply call the Yelp API with whatever query we have built from the extracted intent:
Once again, we have to get a YELP_API_KEY in order to be able to access the endpoint. 🔓
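As the original query is shown as an image, here is a hedged sketch of what a search query and the Yelp call could look like; the exact GraphQL fields and request format should be checked against Yelp's GraphQL documentation:

```javascript
// Build a Yelp GraphQL query for the 'search' intent. The field names are
// a sketch of Yelp's GraphQL schema; double-check them against their docs.
function buildSearchQuery(term, location) {
  return `{
    search(term: "${term}", location: "${location}", limit: 5) {
      business { name rating url }
    }
  }`;
}

// Send the query to the Yelp GraphQL endpoint, authenticated with our key
async function callYelp(query) {
  const axios = require('axios');
  const { data } = await axios.post(
    'https://api.yelp.com/v3/graphql',
    { query },
    { headers: { Authorization: `Bearer ${process.env.YELP_API_KEY}` } }
  );
  return data;
}
```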
One major difference from the quick responses is that here we display the actual restaurants returned by Yelp (for example, the previous query would have displayed the restaurants found by the search).
One little trick we used to make the bot's responses more realistic than just listing restaurants is to reuse the same quick-response system, but with custom arguments. 👊🏻
Here are some sample textual responses for the search intent:
Here we use ‘#’ markers to inject intent-related variables at runtime 🏃🏻♂️
For example, if we search for restaurants in Paris, the bot will recognize type: restaurant and location: in Paris, and our function will output:
You can find a selection of restaurants in Paris. 🏨
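A sketch of how the ‘#’ substitution could be implemented; the template wording is taken from the example above, while the helper name is an assumption:

```javascript
// Replace every '#name' token in a template with its runtime value.
// Unknown tokens are left untouched. This helper is an illustrative sketch.
function fillTemplate(template, vars) {
  return template.replace(/#(\w+)/g, (match, key) => vars[key] ?? match);
}

// Example usage with the search intent's extracted variables:
fillTemplate('You can find a selection of #type #location. 🏨', {
  type: 'restaurants',
  location: 'in Paris',
});
// → 'You can find a selection of restaurants in Paris. 🏨'
```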
Now that we know how to connect our Chatbot to the Node API, let’s look at integrating the Recommendation System. 🕵🏻♂️
Integrating the Recommendation System from Flask
In part 1, we were able to build a Flask API that could handle the Recommendation of restaurants on its ‘recommend/’ route.
That means that it won’t really be that complicated to make the Node and Flask API communicate! 📣
Here is a small diagram of the last part we are building:
Remember, we called the request module when the intent detected by Wit was ‘Recommend’:
So now let’s see how we implemented the recommend function:
Overall, we call the Flask API with axios to get the restaurant ids from the Recommendation Engine, request those restaurants from the Yelp API just as we would for any other intent, and build a response with the same functions you've seen above ⬆️
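The flow just described could be sketched like this; the ‘recommend/’ route comes from part 1, but the Flask base URL, the response shape ({ ids: [...] }), and the Yelp query fields are assumptions:

```javascript
// Sketch of the recommend flow: Flask ids -> Yelp details.
// FLASK_API_URL and the { ids: [...] } payload shape are assumptions.
function buildBusinessQuery(id) {
  // Yelp GraphQL lookup for a single restaurant id
  return `{ business(id: "${id}") { name rating url } }`;
}

async function recommend(userId) {
  const axios = require('axios');
  // 1. Ask the Flask Recommendation Engine for restaurant ids
  const { data } = await axios.get(`${process.env.FLASK_API_URL}/recommend/`, {
    params: { user: userId },
  });
  // 2. Fetch each restaurant's details from Yelp, as for any other intent
  const yelp = (query) =>
    axios.post(
      'https://api.yelp.com/v3/graphql',
      { query },
      { headers: { Authorization: `Bearer ${process.env.YELP_API_KEY}` } }
    );
  return Promise.all(data.ids.map((id) => yelp(buildBusinessQuery(id))));
}
```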
The same system is used to display a message to the end user and to display the restaurants, whether they come from a recommendation or from a simpler intent.
In the case of the Recommendation System, here is the detailed process 👩🏻🔬
Types of sentences that the user can input:
- I want to eat some italian pizza!
- Recommend me restaurants with fresh pasta.
- Can you show me restaurants with outstanding views?
For example:
1 — Node.js API request
Endpoint:
POST /message
We call our Node API with this structure:
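The original body is shown as an image; a plausible shape, with field names assumed, might be:

```json
{
  "message": "I want to eat some italian pizza!",
  "coordinates": { "latitude": 48.8566, "longitude": 2.3522 }
}
```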
2 — Wit.ai intent extraction
Wit.ai will extract the intent and entities from the user's question.
The sentence:
I want to eat some italian pizza!
will return:
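The original payload is shown as an image; a sketch of what Wit could return, with illustrative intent/entity names and confidences, might be:

```json
{
  "_text": "I want to eat some italian pizza!",
  "entities": {
    "intent": [{ "value": "Eat", "confidence": 0.97 }],
    "food": [{ "value": "italian pizza", "confidence": 0.91 }]
  }
}
```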
3 — User response generation
After gathering all the information needed to answer the user's request, we use Natural Language Generation to produce a response.
4 — Node.js API response
Putting all of these steps together, here is a sample response our API will return:
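As the original response is shown as an image, here is an illustrative shape with placeholder values:

```json
{
  "text": "Here is a selection of italian restaurants for you! 🍕",
  "gif": "https://example.com/pizza.gif",
  "restaurants": [
    { "name": "Example Trattoria", "rating": 4.5, "url": "https://www.yelp.com/biz/example-trattoria" }
  ]
}
```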
Now that we have covered the two main components of the backend infrastructure, let’s talk about environments and deployments.
Environments & Deployment
As you saw across this article, I wrote quite a lot about the .env files containing all of our environment variables, in order to separate concerns.
Indeed, we have a different configuration for the development environment and the production one.
We needed a way to launch multiple services at once, the backend written in Node, the Flask API for the recommendation system, and later, the React front-end. ⚛️
Docker to the rescue! 🐳
We use Docker, and more specifically docker-compose, to add or remove services as needed and to avoid dependency conflicts.
A quick definition of docker-compose:
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration
Here is an overview of our docker-compose file:
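The actual file is shown as an image; a hedged sketch of what it could look like, with service names, build paths, and ports assumed, is:

```yaml
version: "3"
services:
  node-api:          # Express backend (Wit + Yelp glue)
    build: ./api
    env_file: .env
    ports:
      - "5000:5000"
    depends_on:
      - flask-api
  flask-api:         # Flask Recommendation System
    build: ./recommender
    env_file: .env
    ports:
      - "5001:5001"
  front:             # React front-end, covered in part 3
    build: ./front
    ports:
      - "3000:3000"
```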
We will see the part related to the front-end in part 3, but we can already see how this file mirrors the API diagram we kept showing throughout this article.
Here are some resources that will help you develop Docker-based environments:
Now for the production environment, we used Heroku, as it offered free hosting for our two APIs (Node and Flask).
From their docs,
Heroku is an ecosystem of cloud services, which can be used to instantly extend applications with fully-managed services. Using an existing, high-quality service is something that empowers developers — they can build more, faster, by using trusted services that provide the functionality that they require.
It was a great way to manage our two APIs easily: the configuration is quite simple, and it's free. 🚀
Here are some official articles that will help you get started with deploying apps on Heroku: