How to gain a deeper understanding of your NLP engine and improve model performance through your training data

Great strides have been made in the advancement of NLP systems, but chatbot trainers still face one fundamental challenge: how to get an NLP model to perform at its best. In this blog our own expert chatbot trainer Alison Houston shows you how.

Understanding NLP working principles 

First things first: NLP doesn’t “read” and “understand” language and conversations in the same way humans have learnt to read and understand them.

It’s easy for chatbot trainers to fall into the trap of believing that because an utterance makes sense to them, their model will understand it with clarity and identify the correct intent with confidence.

NLP engines such as Lex, Dialogflow and Rasa need a qualitative approach to their training data.

You can imagine the way they work as transfer learning: a machine-learning method that takes a model trained on one task and reuses it as the starting point for learning a related task.
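To make that idea concrete, here is a minimal sketch of transfer learning in Python, assuming the sentence-transformers package and an illustrative pretrained model: a general-purpose encoder supplies the previously learnt knowledge, and a small classifier learns the new intent task on top of it. This is a simplified illustration, not how engines such as Lex or Dialogflow are implemented internally.

```python
# Toy illustration of transfer learning: a pretrained sentence encoder
# supplies the previously learnt knowledge, and a small classifier
# learns the new task (intent classification) on top of it.
# Model name and utterances are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

utterances = [
    "I want to check my balance",
    "how much money is in my account",
    "block my card please",
    "my card was stolen",
]
intents = ["check_balance", "check_balance", "block_card", "block_card"]

# Step 1: reuse pretrained knowledge as fixed features
encoder = SentenceTransformer("all-MiniLM-L6-v2")
features = encoder.encode(utterances)

# Step 2: learn the new, related task on top of those features
classifier = LogisticRegression(max_iter=1000).fit(features, intents)
print(classifier.predict(encoder.encode(["what's my account balance?"])))
```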

Simply adding more and more training data to the model is not the best way to solve any weaknesses in chatbot performance.

In fact, this is more likely to result in poorer performance: it can add too much diversity, overfit or even unbalance your model, and the model will probably become ineffective as a result of being trained on too many examples.

Carefully curated training data is one of the key attributes of good performance. But more importantly, chatbot trainers need to understand the learning value of each utterance they add to their model.

The optimum number of utterances is very difficult to pinpoint, because it depends on a number of factors: the other intents in your model, how close their subject matter is, how many utterances they contain, and so on.

But as general guidance, a good starting point is 15 to 20 utterances – but start to be cautious when you reach the 50- or 60-utterance mark. We have an existing blog on utterance generation you may find useful here.
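If you want a quick way to apply that guidance, a few lines of Python can flag intents that fall outside it. The intents, counts and thresholds below are purely illustrative:

```python
# Sanity check against the guidance above: flag intents with too few
# (< 15) or too many (> 60) utterances. Toy data for illustration.
training_data = {
    "check_balance": ["utterance"] * 18,   # within guidance
    "block_card": ["utterance"] * 7,       # too few
    "open_account": ["utterance"] * 72,    # too many
}

for intent, utterances in training_data.items():
    n = len(utterances)
    if n < 15:
        print(f"{intent}: only {n} utterances - consider adding more")
    elif n > 60:
        print(f"{intent}: {n} utterances - review for redundancy")
    else:
        print(f"{intent}: {n} utterances - within guidance")
```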

How can you influence NLP performance? 

Broadly speaking, there are two categories of NLP engine:

1. The ones with maximum control, where you can tune almost all parameters, control where the data is, etc. These are great, but only hard-core data scientists and development teams will make the most of them. Such engines also require you to manage the tech stack, and do the upgrading, scaling and hosting yourself. Rasa is one example of this category of engine.

2. The ones for minimum investment, provided by the most renowned NLP providers, where you benefit from the latest and most innovative advancements and improvements in NLP. Your only influence on performance is your training data. This category of NLP engine includes LUIS, Lex, Watson and others.

Whichever NLP engine you choose to use, your training data is key to unlocking performance.

So, you are inevitably going to wonder how to maximise the impact of your training data.

Should you repeat a given concept twice?

Is five times too many?

Would three times be the optimum amount to gain maximum learning power for your model?

How many concepts can you cover in one intent before the intent is deemed too wide?

How should your utterances be structured?

Should they be as short as possible or longer, to cover more meaning? How much variance should you give to each utterance?
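One way to put a rough number on that last question is to measure how similar an intent's utterances are to each other. The sketch below uses average pairwise TF-IDF cosine similarity as a heuristic; it is an illustrative approach, not a specific QBox technique:

```python
# Heuristic for utterance "variance": average pairwise cosine
# similarity of TF-IDF vectors within one intent. A high average
# suggests near-duplicate phrasings with little extra learning value.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

utterances = [
    "I want to block my card",
    "please block my card",
    "my card was stolen, can you freeze it",
    "I lost my debit card",
]

vectors = TfidfVectorizer().fit_transform(utterances)
sim = cosine_similarity(vectors)

# Average over the upper triangle so each distinct pair counts once
pairs = sim[np.triu_indices_from(sim, k=1)]
print(f"average pairwise similarity: {pairs.mean():.2f}")
```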

An experienced chatbot trainer will know the answers to all these questions if they have a true understanding of the influence and learning value their training data has on their model.

And to do this, they use techniques to measure that performance.

How do you measure the quality of your training data?

Your training data needs to be assessed and analysed to measure its quality. Techniques such as setting aside test data (also called blind data) or running cross-validation are very effective, but also time-consuming.

K-fold is not ideal while you're still building your model, because rerunning it after every change produces performance shifts that are partly due to the random assignment of utterances to folds, not just to your edits. Leave-one-out is another technique I invite you to investigate.

Ultimately, you need to find a systematic way to measure your model.
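As an illustration of one such systematic measurement, here is a minimal leave-one-out sketch using scikit-learn. The TF-IDF plus logistic regression pipeline is a stand-in for whatever NLP engine you actually use, and the data is a toy example:

```python
# Leave-one-out: every utterance is held out in turn, the model is
# trained on the rest, and the held-out utterance is scored.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

utterances = [
    "check my balance", "how much is in my account",
    "what's my balance", "show me my account balance",
    "block my card", "my card was stolen",
    "freeze my card please", "cancel my debit card",
]
intents = ["balance"] * 4 + ["block_card"] * 4

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
scores = cross_val_score(model, utterances, intents, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f} over {len(scores)} folds")
```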

Understanding the “ripple effect” is very important. The ripple effect is what happens when you modify some training data in an intent X, and you improve that intent, but the performance of other intents (A, D, F) also changes, sometimes for the better but sometimes not.

The ripple effect arises because intent classifiers learn to separate all intents at once: each intent has only a limited amount of training data, so every utterance carries significant influence over the boundaries between intents, including intents you never touched.
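One way to surface ripple effects systematically, sketched below with scikit-learn as a stand-in for your real engine, is to compute per-intent scores with cross-validated predictions before and after a change, then diff them. The data, the model and the edit are all illustrative:

```python
# Diff per-intent F1 scores before and after a training-data change to
# surface ripple effects: intents you never touched can still move.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

def per_intent_f1(utterances, intents, cv=3):
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    predicted = cross_val_predict(model, utterances, intents, cv=cv)
    labels = sorted(set(intents))
    scores = f1_score(intents, predicted, labels=labels,
                      average=None, zero_division=0)
    return dict(zip(labels, scores))

# "Before": block_card utterances are vague and overlap with cancel_account
before_x = [
    "check my balance", "how much is in my account", "show my balance",
    "stop my card", "cancel it", "get rid of my card",
    "close my account", "cancel my account", "shut my account down",
]
before_y = ["balance"] * 3 + ["block_card"] * 3 + ["cancel_account"] * 3

# "After": only block_card was reworked to be more specific
after_x = before_x[:3] + [
    "block my debit card", "freeze my stolen card", "my card is lost, block it",
] + before_x[6:]

before = per_intent_f1(before_x, before_y)
after = per_intent_f1(after_x, before_y)
for intent in sorted(before):
    print(f"{intent}: {after[intent] - before[intent]:+.2f}")
```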

The diagrams below illustrate the ripple effect, and in particular, the positive and negative effect some changes can make.

In figure 1, intent 18 is struggling to perform well: it is confused with the training data of intents 10, 15 and 21. We can see that its training data (represented by dots) is spread out, indicating that the model has not formed a clear definition of the intent.

In figure 2, we reworked the training data and improved intent 18. We can see that the definition of that intent is narrower.

By improving intent 18, we've removed some of the confusion in intents 10, 15 and 21, even though we didn't change their training data, so their performance has improved (a positive ripple effect).

However, if you look at intent 12, which did perform well in figure 1, it is now confused with intent 18 – this is an example of a negative ripple effect.

Figure 1:

Figure 2:

These types of analysis are only possible with systematic testing. Finding a technique that works for you, whether leave-one-out, held-out test data, or a tool such as QBox, will dramatically improve your understanding of your model's performance, and help you find weaknesses, analyse the reasons for those weaknesses and validate your fixes.

To find out more, visit QBox.ai.

Read through our library of useful content in the QBox blog.

Ready to give QBox a try? Get 5 free tests here.

Alison Houston

Alison is our Data Model Analyst and builds and trains chatbot models for clients. She also provides advice and troubleshooting support for clients who are struggling with the performance of their own chatbots.

Follow me on LinkedIn