The Paradox of Conversational Design

(First published on ChatbotsLife in July 2017)
Chatbots are evolving at a rapid pace. Their evolution is fuelled by the huge amount of interest in using them as buddies, assistants, companions, and whatnot (probably even adversaries) in various conversational settings. However, as we embark on the development process, what stares us in the face is the paradox of conversational design.

Conversational design is the art of designing templates for the conversations that the chatbot is meant to have with its users. This is very much a part of the great user experience (UX) that we promise users along their journey with our organisation.

Conversational design includes templates for ideal conversations between the chatbot and its users. But that's not enough: it also needs to tell the chatbot how to handle exceptions — what happens if the conversation derails for some reason? Take a very simple conversation your chatbot might one day have with a user, and try to identify all the points where things can go in unexpected ways.

For instance, the user may seek information (e.g. flights between A and B) while the database holding that information is unreachable. Another instance would be when the user's request is ambiguous. Yet another would be when the user misunderstands a chatbot response.
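To make these exception branches concrete, here is a minimal sketch of a single turn handler for the flight-search example. Everything here is illustrative: the function names (`parse_request`, `search_flights`, `handle_turn`) and the naive keyword-based parsing are assumptions, not part of any real chatbot framework — the point is simply that each non-ideal path gets its own explicit response.

```python
# Hypothetical sketch of one chatbot turn, with explicit branches
# for the two failure modes above: an ambiguous request and an
# unreachable backend. All names are illustrative assumptions.

def parse_request(utterance):
    """Naive intent parser: expects 'flights from A to B'."""
    words = utterance.lower().split()
    if "flights" in words and "from" in words and "to" in words:
        origin = words[words.index("from") + 1]
        dest = words[words.index("to") + 1]
        return {"origin": origin, "dest": dest}
    return None  # request is ambiguous or unrecognised


def search_flights(origin, dest):
    """Stand-in for a real flight database call; here it always fails
    to simulate the 'database unreachable' exception path."""
    raise ConnectionError("database unreachable")


def handle_turn(utterance):
    request = parse_request(utterance)
    if request is None:
        # Ambiguity branch: ask a clarifying question instead of guessing.
        return "Sorry, I didn't catch that. Which cities are you flying between?"
    try:
        flights = search_flights(request["origin"], request["dest"])
    except ConnectionError:
        # Backend-failure branch: acknowledge the problem and suggest a retry.
        return "I can't reach the flight database right now. Please try again shortly."
    # Ideal path: report the results.
    return f"I found {len(flights)} flights from {request['origin']} to {request['dest']}."
```

Even in this toy version, two of the three branches exist only to handle things going wrong — a hint of how quickly exception handling dominates real conversational designs.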

You are now beginning to realise the overwhelming number of non-ideal conversations that can happen between the chatbot and its users. How do we actually brainstorm all the possibilities at each point of the conversation? It may be technically possible to enumerate the paths the chatbot might have to take (say, due to database and other technical problems), but gauging the user's possible responses at each step is definitely not a trivial task.

One way to address this problem would be to look at data on how users behaved and reacted in previous conversations and derive the best strategies from experience. However, does such data exist? Human-to-human conversations between users and live operators/agents/advisors are not really well suited for this exercise, as a human's capability to understand the nuances of natural language conversation is far ahead of a chatbot's.

Learning the rules of engagement from live human agents and mimicking them in chatbots is not easy. We might be content to consider only a subset of those conversations, but even separating the simple, mimicable exchanges from the rest is a difficult task. To gather useful data, we would therefore need to dumb down a human operator to the level of a chatbot and have them converse with users. Such data, by implication, cannot come from the past. It's in the future.

Another approach would be to build a chatbot with the expertise we already have and let it chat with users. By doing this we can collect data and see where the chatbot failed to do a great job, leading to iterative improvements in its conversational capability. However, the risk is that a chatbot with mediocre conversational strategies will disappoint a lot of early users.

Therefore the paradox is:

We need data to learn great conversational strategies. But to generate such data we need to build a chatbot with great conversational strategies.

The question is: how do we get past this paradox to create great conversational experiences for chatbot users? Please do share your valuable thoughts below. 🙂

Try my new book on Chatbots and Conversational User Interfaces Development!