Crafting Training Data: Art and Science

Where does good chatbot performance come from?  Ultimately, it comes from your training data.

It’s the only leverage you have, especially if you’re using popular NLP providers like Watson Assistant, LUIS and Dialogflow.

If you use Rasa, you can perhaps tune model parameters, but ultimately it all comes down to the training data within your model and its quality.

And you can’t think of it as an NLP algorithm problem: the algorithms the NLP providers use are a black box to us, and it’s a box we’ll never get to open!  So our training data really is the only control we have.

The best way to understand the principles of chatbot performance is to think of it from an NLP point of view. After all, it’s not a human that will interpret the training data; it’s the NLP engine.

From a science point of view, there is a systematic way to test your model, through k-fold cross-validation testing, and a systematic way of building your model, through intent and entity mapping.
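To make the science part concrete, here’s a minimal sketch of k-fold cross-validation in Python. The tiny data set and the word-overlap “classifier” are invented stand-ins for a real NLP engine; only the fold-and-evaluate loop is the point:

```python
from collections import Counter

# Toy labelled training data: (utterance, intent). Invented for illustration.
DATA = [
    ("i need a replacement credit card", "new_card"),
    ("my credit card was stolen order a new one", "new_card"),
    ("please send me a new credit card", "new_card"),
    ("can i have your telephone number", "contact"),
    ("i need the bank email address", "contact"),
    ("give me the website address", "contact"),
]

def kfold(n, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, folds[i]

def predict(utterance, train_rows):
    """Crude stand-in for an NLP engine: pick the intent whose training
    utterances share the most words with the input."""
    words = set(utterance.split())
    overlap = Counter()
    for text, intent in train_rows:
        overlap[intent] += len(words & set(text.split()))
    return overlap.most_common(1)[0][0]

def cross_validate(data, k=3):
    """Train on k-1 folds, test on the held-out fold, return overall accuracy."""
    correct = total = 0
    for train_idx, test_idx in kfold(len(data), k):
        train_rows = [data[i] for i in train_idx]
        for i in test_idx:
            text, intent = data[i]
            correct += predict(text, train_rows) == intent
            total += 1
    return correct / total

print(f"3-fold accuracy: {cross_validate(DATA):.2f}")
```

With a hosted provider you can’t run the training loop yourself, but you can still apply the same idea by splitting your utterances into folds and scoring the held-out fold against the deployed model.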

But there is also a bit of a dark art. For example, if you work exclusively with LUIS, you get to know what works well in your training data and what doesn’t work so well.

And sometimes it’s difficult to explain to someone not familiar with LUIS or Watson.

For example, I’ve found when working with a certain NLP provider that you have to be a little careful not to overdo the small, insignificant words within an utterance, like ‘the’, ‘and’ and ‘is’, as that provider tends to put almost as much weight on those words as on the more significant ones.

Whereas with some other NLP providers, you don’t have to be quite so careful about details like the balance of insignificant words.

But overall, there are some basic guidelines to crafting your training data that apply whichever NLP provider you use.

And if you bear these in mind when building your own chatbots, it’ll help you to create a great performing bot.

 

Using real customer logs 

The first guideline is around the use of real customer logs or questions.

Ensure they are not long-winded, too chatty, or full of irrelevant information.  Just extract the vital information needed to make each utterance into a brief, clearly expressed piece of training data that covers just the subject of that intent.

For example, say you’re building a banking chatbot and one of the intents covers requesting a new credit card.

You wouldn’t want to include utterances like:

  • I was at my friend’s house when her dog chewed my credit card and it’s no longer working so I need a replacement one
  • My purse got stolen whilst I was at the supermarket buying a loaf of bread and some milk, and so I need a new credit card ordered to replace it.

The concept for these utterances is asking for a replacement credit card/ordering a new credit card. 

If you think of it from an NLP point of view, details about a dog chewing the card or going shopping for bread and milk are not needed; this information is not important.

The important part is the concept, and this is what the NLP engine needs to learn, so some more suitable utterances would be: 

  • I need a replacement credit card
  • Credit card was stolen and need a new one ordered.

Then if you add a few more utterance variations covering replacement credit cards and ordering a new credit card, this should help to cover all the different ways people ask about these two concepts, no matter how long-winded and off the point their real questions are!

 

Avoid creating patterns

The second guideline is about avoiding creating patterns within an intent. 

For example, here’s an extract of some utterances in an intent about how to contact the bank for our banking model:

  • Can I have your telephone number?
  • Can I have your email address?
  • Can I have your website address?
  • Can I have your mailing address?

From an NLP point of view, these utterances would mislead the engine into thinking the “Can I have your” part of each phrase is the most important part of this intent, because it’s been repeated so many times.

The danger is that it could artificially skew this intent over another.

Try to make the utterances as varied as possible. 

Better examples would be:

  • Can I have your telephone number? (it’s ok to leave one utterance like this)
  • I need the bank email address
  • Give me the website address
  • I’d like your mailing address please.
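One way to catch accidental patterns like this is to count how often the same opening words recur within an intent. A small sketch, where the four-word window is an arbitrary choice:

```python
from collections import Counter

def leading_phrase_share(utterances, n=4):
    """Return the most common first-n-words phrase in an intent and the
    fraction of utterances that start with it. A share near 1.0 suggests
    an accidental pattern the NLP engine may latch on to."""
    starts = Counter(" ".join(u.lower().split()[:n]) for u in utterances)
    phrase, count = starts.most_common(1)[0]
    return phrase, count / len(utterances)

patterned = [
    "Can I have your telephone number?",
    "Can I have your email address?",
    "Can I have your website address?",
    "Can I have your mailing address?",
]
varied = [
    "Can I have your telephone number?",
    "I need the bank email address",
    "Give me the website address",
    "I'd like your mailing address please.",
]

print(leading_phrase_share(patterned))  # every utterance shares its opener
print(leading_phrase_share(varied))     # each opener is unique
```

There’s no universal threshold; the point is simply to make repetition visible before the engine sees it.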

 

Entity Placement

The placement of your entities within your utterances needs to be varied so your bot understands the context. 

Try to ensure some entities fall at the beginning of the utterance, some in the middle, and some towards the end, as in the examples below from the mortgage intent in our banking bot (the entity here is the mortgage type, ‘repayment mortgage’).

  • Tell me about the application process for a repayment mortgage
  • Is the application process of your repayment mortgages quick and simple?
  • Repayment mortgage application process information please.
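A quick way to check this spread is to compute where the entity falls in each utterance, on a scale from 0 (first word) to 1 (last word). A minimal sketch using the examples above:

```python
def entity_positions(utterances, entity_word):
    """Relative position (0 = first word, 1 = last word) of an entity's
    first word in each utterance that contains it."""
    positions = []
    for u in utterances:
        words = u.lower().replace("?", "").replace(".", "").split()
        if entity_word in words:
            positions.append(words.index(entity_word) / max(len(words) - 1, 1))
    return positions

examples = [
    "Tell me about the application process for a repayment mortgage",
    "Is the application process of your repayment mortgages quick and simple?",
    "Repayment mortgage application process information please.",
]
spread = entity_positions(examples, "repayment")
print(spread)  # one value near the end, one mid-sentence, one at the start
```

If all the values cluster in one region, it’s a hint that the intent needs some rephrased utterances.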

 

Spelling Errors

Ensure there are no unintentional typos in your utterances.  We’ve seen a lot of client models with many spelling errors that the client didn’t even realise were there.

Check your training data and do a spell check if necessary.

It might be good practice to include a few of the more commonly misspelt words, although some NLP providers have an autocorrect feature which can be activated anyway, so the inclusion of these misspellings wouldn’t be needed.
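If your toolchain has no built-in spell check, a rough first pass is simply to flag any word that isn’t in a known vocabulary. The word list below is a tiny invented stand-in for a real dictionary, and the flagged words are review candidates, not definite errors:

```python
# Hypothetical domain vocabulary; in practice, use a real dictionary
# plus your product's own terms.
KNOWN_WORDS = {"i", "need", "a", "replacement", "credit", "card", "my",
               "was", "stolen", "and", "new", "one", "ordered"}

def possible_typos(utterances, vocab=KNOWN_WORDS):
    """Collect words not found in the vocabulary, as candidates for a
    manual spelling review."""
    flagged = set()
    for u in utterances:
        for word in u.lower().replace("?", "").replace(".", "").split():
            if word not in vocab:
                flagged.add(word)
    return flagged

print(possible_typos(["I need a replacemnt credit card"]))
```

Anything deliberately misspelt to mimic real users can simply be added to the vocabulary so it stops being flagged.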

 

Ideal utterance amount

The next guideline is the golden question of the chatbot-building world: how many utterances do I need?

Well, most NLP providers recommend at least five utterances per intent, but that is the bare minimum. In our experience, aiming for around 20-40 utterances works well.

For each concept in the intent, aim for around 3 varied utterances covering that concept to ensure the learning value to the NLP engine is as strong as it can be.
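These numbers can be turned into a quick audit over your intents. A sketch, using the thresholds above (five as the typical provider minimum, 20-40 as the sweet spot):

```python
def audit_utterance_counts(intents, minimum=5, sweet_spot=(20, 40)):
    """Flag intents whose utterance count is below the provider minimum
    or outside the suggested 20-40 range."""
    report = {}
    for name, utterances in intents.items():
        n = len(utterances)
        if n < minimum:
            report[name] = f"{n} utterances: below the minimum of {minimum}"
        elif not sweet_spot[0] <= n <= sweet_spot[1]:
            report[name] = f"{n} utterances: outside the {sweet_spot[0]}-{sweet_spot[1]} range"
    return report

# Invented intent sizes for illustration.
report = audit_utterance_counts({
    "contact": ["..."] * 3,
    "new_card": ["..."] * 25,
    "loans": ["..."] * 60,
})
print(report)
```

Counts are only a proxy, of course; 25 near-identical utterances are worth less than 10 genuinely varied ones.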

 

Use a Thesaurus

And finally, you might not have thought of this, but a thesaurus can be an invaluable tool: it will help you include a variety of synonyms for the key concepts within your intents.

For instance, referring back to our banking bot, we have an intent to cover applying for a loan. 

One key concept would be asking the bank to lend some money.  Looking up the word ‘lend’ in a thesaurus comes back with advance, give and loan (obviously), among other examples. And for ‘money’ you could get back cash, funds and capital.

And all these different synonyms will help you to build a variety of utterances based on the concept of asking to lend some money.
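One way to put those synonyms to work is to expand a template utterance into every synonym combination. A minimal sketch, using the hypothetical thesaurus results above:

```python
from itertools import product

# Hypothetical thesaurus entries for the loan intent.
SYNONYMS = {
    "lend": ["lend", "advance", "loan"],
    "money": ["money", "cash", "funds", "capital"],
}

def expand(template, synonyms=SYNONYMS):
    """Generate one utterance per combination of synonyms for the
    {slots} that appear in the template."""
    keys = [k for k in synonyms if "{" + k + "}" in template]
    results = []
    for combo in product(*(synonyms[k] for k in keys)):
        text = template
        for key, word in zip(keys, combo):
            text = text.replace("{" + key + "}", word)
        results.append(text)
    return results

variants = expand("can the bank {lend} me some {money}")
print(len(variants))  # 3 x 4 synonym combinations
```

Do skim the output by hand, though: not every synonym reads naturally in every sentence, and awkward combinations are better dropped than trained on.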

We hope you found this useful. As ever, if you’re looking for help and guidance, whether you’ve got an existing bot or are building one from scratch, QBox can offer both the technology and the support to have it functioning well and delivering more value.

To find out more, visit QBox.ai.

Read through our library of useful content in the QBox blog.

Ready to give QBox a try? Get 5 free tests here.

Alison Houston

Alison is our Data Model Analyst and builds and trains chatbot models for clients. She also provides advice and troubleshooting support for clients who are struggling with the performance of their own chatbots.

Follow me on LinkedIn