Common Chatbot Myths Debunked
Chatbot use is on the rise and is a constant hot topic of debate in the technology world. Are they easy to build? Do end users find them useful? Amongst the debates, there are a lot of myths surrounding chatbots, which Alison Houston has taken the time to debunk.
Myth 1. Chatbots are quick and easy to build
Well, yes, it is easy to build a “crapbot” — wait, no em-dashes — rather: Well, yes, it is easy to build a “crapbot”, one that’s poor quality and doesn’t meet customer requirements. But building a good quality chatbot that’s fit for purpose, performs well, and meets customer requirements at least 90% of the time is a difficult task.
You need to start with a solid foundation, and the best way to do this is an intent and entity mapping exercise. This involves mapping each question that the chatbot should answer to a category, using Excel or similar; sub-categories will then start to evolve, creating potential intents and entities.
As you build your map, consider how questions will be asked, and if they will be asked in a similar way, look to group them within the same intent and use entities for the variables.
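The grouping described above — similar questions under one intent, with the variable parts captured as entities — can be sketched as plain data. This is a minimal illustration only; the intent names, questions, and entities below are invented for the example, not taken from a real model:

```python
# Hypothetical intent/entity map for an HR chatbot, sketched as Python
# data rather than an Excel sheet. All names here are illustrative
# assumptions, not from any real chatbot.
intent_map = {
    "request_leave": {
        "questions": [
            "How do I book annual leave?",
            "How do I book sick leave?",
            "Can I request parental leave?",
        ],
        # The questions are phrased the same way and only the leave type
        # varies, so it becomes an entity rather than three intents.
        "entities": {"leave_type": ["annual", "sick", "parental"]},
    },
    "payroll_date": {
        "questions": ["When will I be paid this month?"],
        "entities": {},
    },
}

# Quick summary of the map: one line per intent.
for intent, spec in intent_map.items():
    print(intent, "->", len(spec["questions"]), "questions,",
          list(spec["entities"]) or "no entities")
```

The point of keeping a map like this is that merging, splitting, and re-homing questions later becomes a deliberate structural change rather than guesswork.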
Alison has written an article which goes into more detail on intent/entity mapping, and you can find this post on our QBox Blog.
Once you’ve built this solid foundation for your chatbot, the work doesn’t stop there.
There will be plenty of rounds of testing and training, and you may need to revisit your mapping document and change its structure: merging intents that cover similar subject areas and are getting confused with one another, splitting intents that have become too large and unwieldy to handle efficiently, or perhaps adding more entities.
All of this takes a lot of time, and we’re talking weeks or even perhaps months to perfect if it’s a large model.
So, myth 1 debunked: building a good quality chatbot is neither quick nor easy!
Myth 2. You can easily take real user questions and put them directly into your chatbot as training data
We can all easily fall into the trap of using real user questions that are far too long and/or contain irrelevant content. It’s great to use real user questions if you’re lucky enough to have them; however, they’re not always good quality.
Users tend to be very chatty when using chatbots and sometimes they don’t get straight to the point of their intent. When using real questions, curate each one by cutting the waffle and making each one into a brief and clearly expressed piece of training data.
Consider each question carefully before adding it as training data and ask yourself ‘Is this a smart piece of training data that represents the intent, will typically be asked by my users, and will provide valuable learning to my model?’.
If the answer is no to any of these points, take action, either curate the question or dismiss it completely!
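The curate-or-dismiss triage above is ultimately a human judgment call, but a rough first pass can be automated. Below is a sketch under assumed rules of thumb (the word limit and the list of chatty filler words are illustrative choices, not fixed recommendations):

```python
# Rough triage sketch: flag real user questions that likely need
# curation before becoming training data. The threshold and word list
# are assumptions for illustration, not hard rules.
CHATTY_WORDS = {"hi", "hello", "hey", "thanks", "please"}
MAX_WORDS = 12  # assumed rule of thumb for "brief and clearly expressed"

def needs_curation(question: str) -> bool:
    words = question.lower().replace(",", " ").replace("?", " ").split()
    too_long = len(words) > MAX_WORDS
    chatty = any(w in CHATTY_WORDS for w in words)
    return too_long or chatty

raw = [
    "Hi there, I was wondering, could you possibly tell me how "
    "I go about booking some annual leave next month?",
    "How do I book annual leave?",
]
for q in raw:
    print("CURATE" if needs_curation(q) else "KEEP", "-", q)
```

A flagged question isn’t automatically bad; it just goes into the pile for manual curation — cut the waffle, keep the intent.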
Myth 2 debunked: a real user question is not usually a valuable piece of training data without curation!
Myth 3. Chuck more and more training data at your chatbot if it isn’t working well
We often see client chatbot models where a couple of intents contain far more training data than the rest of the intents in the model, usually because the chatbot is struggling to return the correct intent prediction for those huge intents.
Chatbot builders tend to think that by adding more and more training data to those struggling intents, they will eventually solve the issue. But struggling intents are usually caused by specific problems in the existing training data within the intent.
The simple fix of adding more training data will not generally rectify those problems, and in fact could be even more detrimental to the overall performance of the chatbot.
Specific problems could be that the concepts within the intent are not represented strongly enough, that a couple of intents cover very similar subject areas and are battling with each other, or that the intent is trying to cover too many subject areas.
Whatever the reason, rather than just chucking more and more training data at the intent in the hope of making it more responsive, it is better to do a deep analysis of the intent and consider making improvements to the existing training data before adding new data.
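A simple first step in that deep analysis is spotting the imbalance itself. This sketch flags intents whose training-data volume is far above the rest — the intent names, counts, and the 3× threshold are all assumptions for illustration:

```python
from statistics import median

# Health-check sketch: flag intents whose training-data volume is far
# above the typical intent. Hypothetical intents and counts; the 3x
# threshold is an assumed rule of thumb.
utterance_counts = {
    "opening_hours": 14,
    "store_locations": 16,
    "returns_policy": 95,   # suspiciously large
    "delivery_status": 12,
}

typical = median(utterance_counts.values())
for intent, n in utterance_counts.items():
    if n > 3 * typical:
        print(f"Review '{intent}': {n} utterances vs median {typical}")
```

An intent that trips this check is a candidate for analysis — is it covering too many subjects, or clashing with a sibling intent? — before any new data is added to it.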
We’ve another article in our blog library on the six common problems that could cause poorly performing intents.
Myth 3 debunked: get into the good habit of improving your existing training data rather than adding new training data!
Myth 4. Adding training data with very little variation is valuable to help with underperforming intents
There are plenty of tools available to chatbot builders to help expand their training data set.
But quite often, we see client models that have intents with many lines of utterances that are all very similar, with perhaps just a one-word change from one to the next. It’s usually because they’ve used an utterance generator tool.
For example, there might be a chitchat intent covering questions about the weather, and we see utterances such as:
- Can I have the weather forecast
- Can I have the weather forecast please
- Please can I have the weather forecast
- Could I have the weather forecast
- Could I have the weather forecast please
- Please could I have the weather forecast
You get the picture.
Adding one of these utterances to the weather intent does, of course, give the chatbot valuable learning. But adding even just two or three more of them adds no great learning value.
In fact, it could end up being detrimental because it would start to introduce patterns or formulaic phrases within the intent.
This is where a group of words, generally made up of the more insignificant words, is repeated many times across your utterances. In the examples above, it would be “can I have” and “could I have”.
This would start to teach your model that the most important part of this intent is those “can I have” and “could I have” patterns, because they’ve been repeated so many times. And the danger there is the potential to artificially skew that intent over another.
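Repetition like this is easy to measure. The sketch below counts three-word phrases across the weather utterances from the example above and surfaces the ones that dominate (the threshold of three is an assumed cut-off for illustration):

```python
from collections import Counter

# Count three-word phrases across an intent's utterances to spot
# formulaic patterns like "can I have" that the model may latch onto.
utterances = [
    "Can I have the weather forecast",
    "Can I have the weather forecast please",
    "Please can I have the weather forecast",
    "Could I have the weather forecast",
    "Could I have the weather forecast please",
    "Please could I have the weather forecast",
]

def trigrams(text):
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

counts = Counter(t for u in utterances for t in trigrams(u))
for phrase, n in counts.most_common():
    if n >= 3:  # assumed threshold for "repeated many times"
        print(f"{phrase!r} appears {n} times")
```

Here “can i have” and “could i have” each appear three times in just six utterances — exactly the kind of formulaic pattern that can artificially skew an intent.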
Myth 4 debunked: always make every effort to keep your utterances as varied as possible.
And if your utterance generator tool is spewing out unimaginative training data, it might be time to ditch it and use something a little more intelligent!
If you struggle, use a tool like QBox, which can generate useful and varied training data based on the existing training data for individual intents. It also helps you to deeply analyse your existing training data, providing helpful information to improve the data and make your chatbots more efficient and intelligent.
To find out more, visit QBox.ai.
Read about our newest feature, Word Density.
Watch the ‘How to fix’ series by Alison starting with How to fix your MS LUIS chatbots.