
Building an intelligent chatbot in less than 12 hours (Part 2)


Felipe Magalhães Bonel

This is the second in a series of articles about using product design approaches to create effective virtual assistant experiences, a method created for and first tried out at Bots in Live, a workshop that took place at Red Bull Basement in São Paulo, Brazil, in June 2019. The first article of the series can be read here, and our final product can be tested here.

The second round of the Bots in Live workshop was as thrilling as it was productive. The class was eager, in high spirits after the first half one week before, and determined to leave Red Bull Basement with a rewarding "It's alive!". That determination to make things turn out great accounted for 90% of our sprint's success.

We began by reviewing the artifacts created during our first session and, to our surprise, the material made even more sense after 7 days of decantation in our heads. Unfortunately, by a cruel coincidence, domestic abuse had been a trending topic in Brazilian news media the previous week (and journalists covered it in a frivolous, irresponsible way, since it involved a famous football player; more details here). This only strengthened the class's will to create something that might bring real, positive outcomes for the product's users.

With this scenario ahead of us, we began to work out some possibilities to get things moving fast. Our first practical step centered on developing the features that surfaced during our last brainstorm, and the parallel research conducted by some of our participants was fundamental for moving forward quickly. I would like to highlight the contributions brought by Juliana, who managed to extract a true dataset from domestic-abuse-themed Facebook groups, and by Diego, who came up with extremely rich material produced by Brazil's Public Prosecutor's Office (Ministério Público). Starting from these inputs, we were able to cluster six types of violence (physical, sexual, psychological, patrimonial, moral, and virtual) and decide that the "anamnesis" conducted by the chatbot would result in different recommendations for each kind of identified situation.

From this point on, the class put the product views aside for a moment in order to discuss the narrative fabric about to be woven around the chatbot's personality.

Chatbots built without a narrative structure to back them up might function very well for simple, impersonal tasks (RPAs, for example), but they do not fare well when it comes to engaging users and building empathetic environments where they feel comfortable in a conversation, which was exactly what our main objective was about. With this in mind, I proposed to my class a simple yet challenging task: to look into the constitutive aspects of the chatbot's personality. Their mission was to create a credible character in terms of its traits, quirks, and the very way it would present itself and relate to its audience.

I wish I could have used the 16 Personalities model to shape in deeper detail the character that would be the voice of the AjudaMaria (HelpMary) bot. I am convinced that the broad range of possibilities this framework provides would have helped us reach a denser, more convincing result. On the other hand, I was unsure how long each question/step would take to discuss with the class, so I chose Jung's 12 archetypes model instead, mostly because it is widely used as a branding tool by marketing professionals.

The advantage of using this framework lies precisely in what it offers in terms of subtlety: the collective unconscious to which we can all positively connect. More than that: having at hand established models with qualities, character flaws, strategies, and clear approaches fostered very productive discussions. After long debating sessions, the class was split between moving forward with either the Caregiver or the Magician: the dilemma boiled down to choosing between a character that would offer a safe space to shelter users and a tone of voice that would represent the catalyst of change.

At this point, things started to get really interesting: taking into consideration the features we deemed essential for the MVP, and the product purpose agreed upon with the team, the second option became everybody's favourite in the blink of an eye. From then on, it became really simple to design a character that could be sheltering and sober at the same time; that used direct, firm language, but in a gentle way; and that was able, like a real subconscious mentor, to lead users down a path that could make their liberation journey possible. Three questions were raised at this point: who was the voice conducting the conversations? How would it relate to the users? What would be the story told between the lines?

By this point, 9 of our 12 workshop hours had already been used. We had little more than a quarter of our time left to sum up, lay out, compile, and publish everything we had built so far, and to ensure we would get to the end of the day with that Dr. Frankenstein feeling on top of everything.

We then began to transform each and every functionality previously planned into one big flowchart that synthesized the bot as a whole: a true high-level design. With this important vision in our hands, I could finally divide the participants into two working clusters: the AI Trainers and the Conversational Designers.

Our first squad, made up of two women participating in the workshop, took on the responsibility of diving into the material brought by Juliana (reports by women who had suffered abuse from their partners) to understand how victims behave and talk while describing these situations. This material would lead to the creation of a speech corpus to be inserted and balanced inside Dialogflow's interface (Google's free-to-use NLP service). Their job, then, was to shape the utterances dataset that would allow the bot to understand different inputs from users and redirect them to the best flow available. They would be our user-intent guardians and trainers.

This labour of dataset enrichment was very important: it let us size our problem properly and, only then, establish a reliable taxonomy in terms of language processing. By the end of the day, our trainers had put up two NLP nodes (one with general intents and small talk, the other with context-specific data on abuse types) with more than 650 training examples between them. The richness of detail and semantic variation in these inputs makes AjudaMaria able to comprehend a wide and complex range of terms, and even to make some associations between them.
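To give a feel for what this kind of intent dataset does, here is a minimal, illustrative sketch in Node.js. The intent names and training phrases below are hypothetical (and in English, while the real bot speaks Portuguese), and the matcher is a naive word-overlap stand-in for what Dialogflow actually does with machine learning; it only shows the routing idea of mapping an utterance to the best-matching intent.

```javascript
// Hypothetical intents, each grouping training phrases the way
// Dialogflow's console does. Not the workshop's actual dataset.
const intents = {
  physical_abuse: ["he hit me", "my partner beats me", "he pushed me"],
  psychological_abuse: ["he humiliates me", "he controls everything I do"],
  small_talk: ["hello", "hi", "who are you"],
};

// Naive matcher: score each intent by word overlap between the user's
// utterance and its training phrases, and pick the highest score.
function detectIntent(utterance) {
  const words = new Set(utterance.toLowerCase().split(/\W+/).filter(Boolean));
  let best = { intent: "fallback", score: 0 };
  for (const [intent, phrases] of Object.entries(intents)) {
    for (const phrase of phrases) {
      const overlap = phrase
        .toLowerCase()
        .split(/\W+/)
        .filter((w) => words.has(w)).length;
      if (overlap > best.score) best = { intent, score: overlap };
    }
  }
  return best.intent;
}

console.log(detectIntent("hello there")); // → small_talk
console.log(detectIntent("my partner beats me")); // → physical_abuse
```

In the real setup, this classification happens inside Dialogflow's NLP nodes; the point of the 650+ examples is to give that classifier enough semantic variation to route victims' very different phrasings to the right flow.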

The other squad took care of the bot's content design: using the material fetched by Diego as their conceptual north, they were given the mission of transforming each branch of that complex flowchart drawn earlier into the dialog excerpts that would build the chatbot users' experience. For this, we chose Chatfuel (a free Facebook Messenger bot publisher) for its intuitiveness and quick learning curve.

Here, the attendees focused on answering questions about their users' behaviour and raising hypotheses to be tested after go-live. What kind of wording would be most fitting for each situation? What should be delivered to a user who instantly rejected the bot's first approach? And what about total rejection: how should it be handled? For this specific kind of step, should we use buttons, dynamic galleries, or an open question?

All of these doubts turned into an end-to-end experience, capable of turning a bare flowchart skeleton into a solid communication fabric, quite effective for a first version. Our conversational flows were ready to rumble.

To put that little cherry on top of our sundae, I decided to give my class a hand and help them join their two work fronts, since the tools we were using were sold separately and had no native integration with each other. To bridge this little gap, I used a small Node.js script written by Edwin Reynoso, which helped me partially connect our flow builder to our NLP provider. To fully close the circuit, I set up myself the JSON responses that would steer Chatfuel's API towards the right block for each intent. This technique can be found in this article here and is very useful!
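The steering part can be sketched in a few lines. Chatfuel's JSON API lets a webhook answer with a `redirect_to_blocks` payload that jumps the conversation to a named block; the mapping below from intent names to block names is entirely hypothetical (the real names used in AjudaMaria may differ), but the shape of the response is what Chatfuel expects.

```javascript
// Hypothetical mapping from Dialogflow intent names to Chatfuel block
// names; the actual names used in the AjudaMaria bot may differ.
const intentToBlock = {
  "abuse.physical": "PhysicalViolenceFlow",
  "abuse.psychological": "PsychologicalViolenceFlow",
  "smalltalk.greeting": "Welcome",
};

// Build the JSON payload a webhook returns to Chatfuel in order to
// redirect the conversation to the block matching the detected intent.
function chatfuelRedirect(intentName) {
  const block = intentToBlock[intentName] || "Fallback";
  return { redirect_to_blocks: [block] };
}

console.log(JSON.stringify(chatfuelRedirect("abuse.physical")));
// → {"redirect_to_blocks":["PhysicalViolenceFlow"]}
```

In practice, the bridging script would call Dialogflow to detect the intent, then return a payload like this to Chatfuel, which takes care of rendering the right flow to the user.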

Well, at last! The team had finally done it. They had answered the last questions that needed clarifying before the chatbot could rise from its slumber and start talking out loud: what did we need to understand? What did we need to communicate? How do these questions relate to each other, and what should we expect as a valid result?

Without further ado, an integration here, a deploy there and…

When you put your hands on a project like this, there’s no better feeling than accomplishing your mission.

And, of course, you can only consider your mission accomplished when you see that little chattering robot talking a blue streak. The moment we put that Messenger window on the TV screen, everyone in the room, myself included, went speechless: there was our newborn AjudaMaria, giving tips and guidance on how women can find shelter from physical violence, teaching how to follow the legal process to get solid help, presenting communities focused on helping victims overcome these hardships, and sharing important data on what is truly a serious issue here in Brazil. We felt that the sensation of fulfilling a purpose is far more gratifying than the one you get from simply doing your duty.

Having the opportunity to build something that makes sense and that can help change someone's life for the better is truly the best outcome you can get from leading a project. Everyone aboard felt this way, and that's why we still hold monthly meetings to sustain AjudaMaria and help it grow into a bigger, broader social project!

Come on! If this is not the definition of awesome, I really do not know what is.

Beyond that, I felt that I contributed, somehow, to busting some troubling myths and superstitions about Artificial Intelligence. Having the opportunity to do that in a hands-on environment, with an incredible purpose to chase and a great group of people, was golden for me.

So, when AI comes up in discussion, I recommend always looking at it with a critical but sober perspective. You can get excited about it (everybody can), but not dazzled. This technology is not occult magic or alien stuff; it has come to help us in many ways, but that does not mean it will save our lives, let alone destroy them.

Anyone with the curiosity to find the right tools and the time to learn how to use them can do it quickly and do it well. The only prerequisite, as with everything in this life, is to follow the winding Socratic path of asking yourself the right questions at the right time.

Good answers will come from it, you can bet! 😉
