The Paradox of Conversational Design

(First published on ChatbotsLife in July 2017)
Chatbots are evolving at a rapid pace. Their evolution is fuelled by the enormous interest in using them as buddies, assistants, companions, and what not (probably even adversaries) in various conversational settings. However, as we embark on the development process, we come face to face with the paradox of conversational design.

Conversational design is the art of designing templates for the conversations that a chatbot will have with its users. It is very much a part of the great user experience (UX) that we promise users along their journey with our organisation.

Conversational design includes templates for ideal conversations between the chatbot and its users. But that's not enough: it also needs to tell the chatbot how to handle exceptions, that is, what to do if the conversation derails for some reason. Take a very simple conversation your chatbot might have with a user, and try to identify all the points where things can go in unexpected directions.

For instance, the user may seek information (e.g. flights between A and B) while the database storing that information is unreachable. Another instance would be when the user's request is ambiguous. Yet another would be when the user misunderstands a chatbot response.

You are now beginning to realise the overwhelming number of non-ideal conversations that can happen between the chatbot and its users. How do we actually brainstorm all the possibilities at each point of the conversation? It may be technically possible to enumerate the paths that the chatbot might have to take (say, due to database and other technical problems), but gauging the user's possible responses at each step is definitely not a trivial task.

One way to address this problem would be to look at data on how users behaved and reacted in previous conversations and derive the best strategies from experience. However, does such data exist? Human-human conversations between users and live human operators/agents/advisors are not really well suited for this exercise, as a human's capability to understand the nuances of natural language conversation is far ahead of a chatbot's.

Learning the rules of engagement from live human agents and mimicking them in chatbots is not easy. We may be OK with considering a subset of those conversations, but even separating the simple, mimicable exchanges from the rest is a difficult task. To gather data, we therefore need to dumb a human operator down to the level of a chatbot and have them converse with users. Such data, by implication, cannot come from the past. It's in the future.

Another way would be to build a chatbot with the expertise we already have and let it chat with users. This way we can collect data and examine where the chatbot failed to do a great job, which will lead to iterative improvements in its conversational capability. However, the risk is that you will disappoint a lot of early users with a chatbot that has mediocre conversational strategies.

Therefore the paradox is:

We need data to learn great conversational strategies. But to generate such data we need to build a chatbot with great conversational strategies.

The question is: how do we get past this paradox to create great conversational experiences for chatbot users? Please do share your valuable thoughts below. 🙂

Try my new book on Chatbots and Conversational User Interfaces Development!


How to make your chatbots intelligent?

(First published in Chatbots Magazine in Oct 2016)

Chatbots are computer programs that can have a conversational interaction with human users. By default, a chatbot need not be intelligent. What it needs to be is useful and usable. For instance, a chatbot whose task is to collect information from users will simply ask them questions and provide an easy touch-and-swipe-style response mechanism using buttons and carousels. While it does the task it is designed to perform, it cannot be counted as an intelligent chatbot.

But this is not to say that chatbots need not be intelligent at all. As the task carried out by the chatbot becomes more complex, the need for it to be intelligent increases. It is intelligence that makes a complex conversation easy and effortless. Here are three dimensions that you could pay attention to if you want your chatbot to be an intelligent one.

1. Perception

Perception is the part where the chatbot gets to know what the user wants. On platforms like Facebook Messenger, chatbots can present users with a set of buttons to get their input. This is an easy and robust approach to getting user inputs. However, it lacks the fluidity of human conversation. For instance, imagine a user searching for comedy shows who is asked to enter a date. He is given a list and asked to pick one. However, he would probably like to say “any Wednesday in the next three weeks”. A chatbot that can understand this will be perceived as more intelligent than one that cannot.

Another example, and a source of user frustration, is when the user wants to ask “is it available in yellow?” after being offered only options like “red” and “blue”. While the chatbot has implicitly made it clear that the product is not available in yellow by not offering it as an option, the user might still ask the question in the hope of a favourable response (e.g. the product is currently out of stock in yellow and will be available again in the near future).

Red or blue?
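
To illustrate, here is a minimal sketch of how a bot might answer such a question gracefully instead of staying silent. The catalogue data, restock information, and function names are hypothetical placeholders, not from any real system:

```python
# Minimal sketch: responding helpfully when the user asks about an
# option that was not offered. All data below is illustrative.
AVAILABLE_COLOURS = {"red", "blue"}           # options actually in stock
RESTOCK_DUE = {"yellow": "early next month"}  # colours expected back in stock

def answer_colour_query(requested_colour: str) -> str:
    colour = requested_colour.lower()
    if colour in AVAILABLE_COLOURS:
        return f"Yes, it's available in {colour}. Shall I add it to your basket?"
    if colour in RESTOCK_DUE:
        # Acknowledge the request and offer a follow-up, rather than
        # repeating the original button menu.
        return (f"It's out of stock in {colour} right now, but we expect it "
                f"back {RESTOCK_DUE[colour]}. Would you like red or blue instead?")
    return f"Sorry, we don't stock it in {colour}. We do have red and blue."

print(answer_colour_query("yellow"))
```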

In order to keep the design complexity low, you could try to do this in two steps.

Step 1: Keep it local. Interpret only those user utterances that are in response to the chatbot’s question, and ignore proactive user utterances. This reduces the number of responses users might give in a given context, and thereby reduces design and programming effort.

Step 2: Once your chatbot can understand natural language locally, in response to its own questions, try understanding the user’s NL utterances when he/she is proactive and takes the initiative. There are a number of toolkits you can use to add NL capability to your chatbot, such as API.ai, IBM Watson, and Wit.ai.
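
As a rough illustration of Step 1, here is a sketch that matches the user’s reply only against the small set of responses expected for the question just asked. The intent names and patterns are illustrative, not taken from any particular NLU toolkit:

```python
# Sketch of Step 1: interpret the user's reply only against the intents
# expected for the chatbot's last question; everything else falls through.
import re

# For each chatbot question, the small set of responses expected "locally".
EXPECTED_RESPONSES = {
    "ask_date": {
        "pick_date": re.compile(r"\b(today|tomorrow|monday|tuesday|wednesday)\b", re.I),
        "any_date":  re.compile(r"\bany\b", re.I),
    },
    "ask_colour": {
        "pick_colour": re.compile(r"\b(red|blue|yellow)\b", re.I),
    },
}

def interpret_locally(last_question: str, utterance: str):
    """Return the first expected intent that matches, else None."""
    for intent, pattern in EXPECTED_RESPONSES.get(last_question, {}).items():
        if pattern.search(utterance):
            return intent
    return None  # proactive or out-of-context utterance: ignored in Step 1

print(interpret_locally("ask_date", "any Wednesday in the next three weeks"))
```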

Intelligent perception also involves understanding other forms of input like emoticons, emojis, GIFs, and images. An intelligent chatbot must make sense of the user’s intentions from such inputs and respond appropriately.


2. Learning

Another trait of intelligence that your chatbot can have is learning. Does your chatbot learn? Does it learn to improve its performance over time? Individual modules of your chatbot, such as NL understanding and user modelling, can learn to perform better over time using machine learning (ML) algorithms in tandem with human supervisors. A number of ML techniques are available: supervised, unsupervised, and reinforcement learning. Each of these can be realised using a variety of algorithms. For tasks like classifying user intents from user utterances, supervised learning can be used. For finding clusters of users based on their conversational behaviours, unsupervised clustering algorithms can be used. And for learning efficient and optimal conversation behaviours (i.e. what should the bot say now?), reinforcement learning algorithms can be used.
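
As an illustration of the supervised case, here is a minimal intent-classifier sketch. It uses scikit-learn rather than the toolkits named later in this section, purely for brevity, and the training utterances and intent labels are toy examples:

```python
# A minimal supervised intent classifier: bag-of-words (TF-IDF) features
# plus logistic regression, a simple and common baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "show me flights to paris", "i need a flight from london to rome",
    "what's the weather like today", "will it rain tomorrow",
    "cancel my booking", "i want to cancel the reservation",
]
intents = ["book_flight", "book_flight",
           "get_weather", "get_weather",
           "cancel_booking", "cancel_booking"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, intents)

print(classifier.predict(["any flights between a and b?"]))  # e.g. ['book_flight']
```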

There are many ways to say the same thing (i.e. to express the same user intent). Let us assume that your chatbot recognises N different user intents. Each intent can be expressed in a number of ways, and the initial version of a chatbot may not cover all of them. It will therefore fail to understand some user utterances even though it is capable of responding to the intents they express. Such instances can be logged, annotated, and fed into an ML module. By iterating over missed expressions using machine learning, your chatbot can learn to understand users better over time.
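
Here is a sketch of that logging step, reusing the classifier from the previous sketch: utterances the model is unsure about are appended to a file for later human annotation. The 0.5 confidence threshold and the log file name are arbitrary choices:

```python
# Sketch: log low-confidence utterances for annotation and retraining.
import json
import time
from typing import Optional

CONFIDENCE_THRESHOLD = 0.5  # arbitrary cut-off for "understood"

def classify_or_log(classifier, utterance: str) -> Optional[str]:
    probs = classifier.predict_proba([utterance])[0]
    best = probs.argmax()
    if probs[best] >= CONFIDENCE_THRESHOLD:
        return classifier.classes_[best]
    # Not confident: queue the utterance for human annotation, then
    # let the caller trigger a clarification/fallback response.
    with open("missed_utterances.jsonl", "a") as log:
        log.write(json.dumps({"utterance": utterance, "ts": time.time()}) + "\n")
    return None
```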

However, one crucial thing you need to remember is that machine learning algorithms learn from the data and experience made available to them, and so the quality of that data and experience is very important. It would be a good idea to first collect enough data using a hand-crafted system and then use machine learning algorithms to improve its performance. Some toolkits you can use to perform ML tasks are Weka and Google’s TensorFlow.

3. Planning

The third dimension to intelligent behaviour is planning. Planning is an internal task done by the chatbot to decide how to carry out the task the user has requested. For simple tasks like user surveys, not a lot of planning is required: the bot simply moves from one question to the next until the survey is over. However, if the bot is supposed to carry out a complex task, then it needs the capability of finding the sequence of actions that will lead to the goal set by the user. Such a sequence of actions is called a plan. These plans would also include conversational actions such as asking, informing, acknowledging, etc.

Currently, tasks are decomposed by developers themselves and ready-made plans are fed to chatbots. However, if a chatbot is to multitask, then it should be able to come up with its own plans. This would save chatbot designers and developers the effort of mapping every step in the potential conversational flow between users and chatbots.

To put this in perspective, a chatbot with planning capabilities will be able to come up with a sequence of actions to achieve the goal set by a user. So, if the user asks about the status of a delayed delivery, the chatbot (working for a retail business) would be able to figure out which database or API to query. Once it has figured out where the information is, it will then check whether it has all the parameters required to make the query. It will ask the user for the missing parameters, query the database, and return the information about the delayed delivery. More importantly, it will replan its action sequence if the user does not behave as expected.

If a chatbot can figure out the steps leading to the goal by itself, it will simplify the development process greatly. Developers may then be able to focus on what the chatbot needs to do rather than how to do it, because the chatbot can figure that out for itself. This very problem has been the focus of a strand of AI research called AI Planning. There isn’t any easy-to-use AI planning toolkit available (as far as I know), but many AI planning algorithms are (e.g. STRIPS, GraphPlan). Until now, very little has been explored about using AI planning to dynamically generate conversational plans but, in my opinion, it holds much promise for the future of intelligent chatbots.
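
To make the idea concrete, here is a toy STRIPS-style planner for the delayed-delivery example above. Actions have preconditions and add-effects over a set of facts (delete-effects are omitted for brevity), and a breadth-first search finds an action sequence that reaches the goal. All action and fact names are illustrative, not from any real system:

```python
# Toy STRIPS-style planner: forward breadth-first search over states,
# where each action is (name, preconditions, effects-added).
from collections import deque

ACTIONS = [
    ("ask_user_for_order_id", frozenset(),                         frozenset({"have_order_id"})),
    ("query_orders_api",      frozenset({"have_order_id"}),        frozenset({"have_delivery_status"})),
    ("inform_user_of_status", frozenset({"have_delivery_status"}), frozenset({"user_informed"})),
]

def plan(initial: frozenset, goal: frozenset):
    """Return a list of action names leading from initial to goal, or None."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts achieved
            return steps
        for name, pre, add in ACTIONS:
            if pre <= state:       # action is applicable in this state
                nxt = state | add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan(frozenset(), frozenset({"user_informed"})))
# -> ['ask_user_for_order_id', 'query_orders_api', 'inform_user_of_status']
```

In a real chatbot, the facts would come from the dialogue state, and the actions would mix API calls with conversational moves such as asking and informing, as described above.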

To conclude, using natural language understanding, machine learning, and AI planning algorithms, chatbots can be made more intelligent than they already are. This is not to say that all chatbots need to be intelligent. But as chatbots gear up to face more users and more user-centric tasks, making them intelligent enough to handle both the tasks and natural conversations with users is not going to be optional.

Try my new book on Chatbots and Conversational User Interfaces Development!