Playing with the Dialogflow API, I created a simple chatbot that can be "taught" (expanded by adding new intents) during a dialogue with the bot. That is, if the bot doesn't understand something, it asks whether it should store the phrase, together with a corresponding response, as a new intent. Thus the bot can be expanded in a rather "natural" way, through dialogue (as people do ;). Please see the detailed video tutorial on how to recreate such a bot.
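The "teaching" step boils down to creating a new intent through the Dialogflow API. Here is a minimal sketch of how that could look with the `@google-cloud/dialogflow` Node.js client; the function names and the `learned:` naming convention are my own illustration, not the actual bot's code, and the API call itself requires configured GCP credentials:

```javascript
// Build the intent payload from the unrecognized phrase and the response
// the user supplied for it (a pure helper, easy to test).
function buildIntent(phrase, responseText) {
  return {
    // displayName has a length limit, so truncate defensively
    displayName: `learned: ${phrase}`.slice(0, 100),
    trainingPhrases: [{
      type: 'EXAMPLE',
      parts: [{ text: phrase }],
    }],
    messages: [{ text: { text: [responseText] } }],
  };
}

// The actual API call (untested sketch; needs GCP credentials).
async function teachBot(projectId, phrase, responseText) {
  // Lazy require so the pure helper above works without the package installed.
  const dialogflow = require('@google-cloud/dialogflow');
  const client = new dialogflow.IntentsClient();
  const parent = client.projectAgentPath(projectId);
  const [intent] = await client.createIntent({
    parent,
    intent: buildIntent(phrase, responseText),
  });
  return intent.name;
}
```

After `createIntent` returns, the agent still needs a moment to retrain before the new phrase is recognized.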
My company, Master of Code, has many cool traditions. One of them is Secret Santa: each team member is assigned a random person for whom he/she prepares a New Year present, and then there's a party with a festive distribution of these gifts. This year (that is, for NY 2020) I decided to prepare something more than just a present and created a small IT quest which included a chatbot.
Please see this step-by-step tutorial showing how it was done, and feel free to use this idea/bot with your company and/or friends. P.S. The idea of the quest was inspired by this topic on Reddit about Secret Santa and The Architect (user squeakysqueakysqueak).
I decided to create a city quest in the format of a chatbot. But this time I went further and tried to create a detailed video tutorial on how I currently create my chatbot hobby projects. This series of 11 videos (~2 h in total) is a step-by-step tutorial summarizing the roughly 150 hours I spent on this project during 2019.
I describe how I got the idea of a city quest chatbot and my reasoning for using the chatbot format for such a project, selecting the tools (Chatfuel, Node.js, Glitch, Airtable, Google Vision API), preparing the contents/scenario for the quest, composing the bot's flow in Chatfuel, writing webhooks in Node.js on Glitch (including simple image "interpretation" using the Google Vision API), using Airtable to store users' data, etc. The penultimate video is a screencast of passing the quest.
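To give an idea of how the webhook pieces fit together, here is a minimal sketch of a Glitch-hosted Express webhook that "interprets" a user's photo with Google Vision label detection and answers in Chatfuel's JSON reply format. The route name and the `image_url` field are assumptions (in Chatfuel you configure which user attributes are passed to the webhook), and the Vision call needs GCP credentials:

```javascript
// Turn Vision labels into Chatfuel's JSON reply format (pure helper).
function toChatfuelReply(labels) {
  const summary = labels.length
    ? `I think I see: ${labels.join(', ')}`
    : "Sorry, I couldn't recognize anything on this photo.";
  return { messages: [{ text: summary }] };
}

function createApp() {
  // Lazy requires so the helper above is usable without these packages.
  const express = require('express');
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  const app = express();
  app.use(express.json());

  app.post('/interpret', async (req, res) => {
    // Chatfuel sends the photo URL it received from the user.
    const [result] = await client.labelDetection(req.body.image_url);
    const labels = (result.labelAnnotations || []).map((l) => l.description);
    res.json(toChatfuelReply(labels));
  });

  return app;
}

// On Glitch: createApp().listen(process.env.PORT || 3000);
```

In the real quest the labels would then be matched against the expected answer for the current quest step rather than echoed back.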
I share the webhooks code (as a Glitch remix) and can send collaborator invites on Chatfuel so you can access the bot's flow (the Facebook Messenger bot is not public so far). Here are some of the videos as an example (please see the full playlist):
On May 22, 2019, our company co-organized IT Career Day 2019, an annual job fair where member companies of our town's IT Cluster (Cherkasy, Ukraine) present themselves and promote the IT sphere in general.
I created a chatbot quiz for this event where users could answer questions and win real candies and other prizes (handed out at the Company's booth). Some of the questions checked users' attentiveness or math skills, but most were about IT life and IT humour ;)
The bot is built on Chatfuel with a custom backend written in Node.js and deployed as a Lambda function to AWS. The main purpose of the backend is connecting Dialogflow to Chatfuel (plus generating a verification code and providing some other minor features). The bot uses Chatfuel's built-in export to Google Sheets, send-to-email, and human handover functionality.
When the user finishes the quiz and clicks "Get candies", a notification email is sent to admins containing the user's info (name, gender, locale, a link to their FB profile, etc.), the verification code, the number of candies the user has won, and his/her answers to the quiz. Similar results (but without the quiz responses) are also saved to a Google Sheets document.
Navigation in the bot is possible via quick reply buttons and text commands (thanks to Dialogflow). The bot re-prompts the last content block in case the user enters something irrelevant or accidentally types some text, which makes the quick reply buttons disappear.
Bot launch results: during the event about 150 people played with the bot, with ~60% coming to the Company's booth for the prize. The users seemed to like the bot and were often surprised when we called them by name even before they introduced themselves ;) (since the bot had given us their name and often a real profile photo).
Found some time to get acquainted with ImageMagick. Messenger has powerful built-in drawing capabilities, but I thought it might be good to make a chatbot able to process images according to given templates (e.g. add a company logo or create stylized stickers for sharing in the conversation). I failed to finish this bot due to time limitations and other, more important tasks, but got some useful experience and additional practice in chatbot building. Maybe I'll reuse it in some other projects later.
The flow was supposed to be the following: the bot greets the user and offers a list of templates to choose from. Only one template was finished, a so-called "Polaroid" (it converts a photo into a polaroid-style image with a custom text title). Many other templates could be added (e.g. I thought of "Visa": upload photo[-s] and indicate a country to get a photo collage with visa-style stamps/stickers added; or "Logo": adding a company logo or other symbols to uploaded photos, etc.). The user chooses a template and is then asked to provide the needed data (photos, titles, etc.). The source and final processed images are stored on AWS S3, with the links saved for the user in a DB (so that one could create one's own sticker "packages" in Messenger).
Since my last update of this site I launched my first multi-platform chatbot, Podervianskogobot.com. This bot replies with popular quotes (drawn on stickers) from plays by Les' Poderviansky (the bot is in Ukrainian) and lets users read and listen to the respective plays performed by the author. Les Podervianskyi is a Ukrainian painter, poet, playwright and performer. He is most famous for his absurd, highly satirical, and at times obscene short plays, many quotes from which became popular memes (more on Wikipedia).
The bot was made using Node.js, Microsoft Bot Framework and nlp.js, and is available on Facebook, Telegram, Skype and the Web. Actually, this is the 2nd "generation" of the bot; the 1st (which wasn't launched and was intended only for Telegram) was made using Node.js, the Telegraf wrapper for the Telegram API, and RiveScript.
I started to work on it last summer (>6 months ago), before I started to cooperate with Master of Code. I thought it would be fun to make such a bot, and it was also a chance to try several new things, mainly RiveScript and nlp.js (inspired by this article). This was also my 1st 'live' bot on MS Bot Framework and my 1st bot for Skype and the Web.
To make this bot I:
So far the bot has had about 30 users from Facebook, ~10 from Telegram, and a few from Skype and the Web version.
So starting from September 18, 2018, I switched from self-education in hobby mode (2-4 h/day) to building chatbots full-time for Master of Code. So I will probably have less time for my side projects, but will try to hold on ;) In October I got acquainted with Actions on Google and built 2 simple voice bots for the Google Assistant platform using Dialogflow and Cloud Functions for Firebase. One of these bots, BestMovieQuotes, was approved by Google and is publicly available now (though not for all countries and/or locales; you may need to switch to English as the basic language on your device). It's quite a simple bot, actually a stripped-down version of Dialogflow's small talk agent, that answers with audio quotes from famous movies (like "The Godfather", "Casablanca", "The Lord of the Rings", "Titanic", etc.). It gives more or less relevant responses to phrases like 'hello', 'how are you', 'what's up', 'what is life/love', 'bye', etc., and you can also ask it for a random quote. You can try it on your smartphone in the Google Assistant app (Android, iOS) or on devices like Google Home. To invoke the bot, say something like 'Ok Google, talk to Best Movie Quotes' or 'Ask Best Movie Quotes for a random quote'.
P.S. A few words about how I got this bot approved and included in the Google Actions directory: it wasn't so straightforward; I succeeded only after 3 tries ;) The problem was that I wanted my bot to conduct a more or less 'natural' conversation, listening to the user's phrases and responding with relevant quotes. But the reviewers wrote that "During our testing, we found that your app would sometimes leave the mic open for the user without any prompt". I tried prompting the user to continue the dialogue with the quote "Talk to me, Goose" from "Top Gun", added after each response, but this variant was also rejected. So finally I put an explicit robot-read prompt after each quote; it's not really what I wanted and sounds a bit weird, but it is probably more correct.
As for the 2nd bot: we had a Halloween party here at MOC, and I built a simple voice bot especially for this event, CreepySounds. I didn't submit it to the Google Actions directory, so this bot isn't publicly available. But if you like the idea, you may use my code and easily make one for yourself (it should be accessible as a test version on devices where you're logged in). This bot responds to any voice input with a random scary sound (taken from the Google sounds library, mainly the Horror section).
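The whole trick is an SSML `<audio>` tag in the fulfillment. Here is a rough sketch of how such a bot can be wired with the `actions-on-google` library; the sound URLs follow the pattern of Google's sound library but the exact file paths are illustrative, and this is not the actual CreepySounds code:

```javascript
// Hypothetical sound URLs in the style of Google's sound library.
const SOUNDS = [
  'https://actions.google.com/sounds/v1/horror/haunted_piano.ogg',
  'https://actions.google.com/sounds/v1/horror/scary_tension.ogg',
  'https://actions.google.com/sounds/v1/horror/creature_growl.ogg',
];

// Pick a random sound and wrap it in SSML for the Assistant to play.
function creepyResponse() {
  const src = SOUNDS[Math.floor(Math.random() * SOUNDS.length)];
  return `<speak><audio src="${src}">a creepy sound</audio></speak>`;
}

// Wiring with the actions-on-google library (v2-style API).
function createApp() {
  const { dialogflow } = require('actions-on-google'); // lazy require
  const app = dialogflow();
  // A catch-all fallback intent makes the bot react to any voice input.
  app.intent('Default Fallback Intent', (conv) => conv.ask(creepyResponse()));
  return app;
}
```

The resulting `app` is exported as a Cloud Function for Firebase, which Dialogflow calls as its fulfillment webhook.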
Cherkasy). I came to coding from biology (I have a master's degree in human physiology and an unfinished PhD; while at school and university I was a winner of All-Ukrainian Biological Olympiads). I have >10 years of experience as an English-to-Russian medical translator (TA Medconsult) and have translated for Novartis, Pfizer, Roche, Bristol-Myers Squibb, Sanofi, Regeneron Pharmaceuticals, etc. I also did a 2-month internship in a molecular biology lab, INSERM U963 / CNRS UPR9022 (Strasbourg, France).