Dialogflow CX best practices - Part 2: No-matches and alternative intents

In this example, we’ll be demonstrating our simple banking chatbot. It has around 20 intents designed to handle queries relating to a bank, such as allowing the user to find out their current bank balance or get directions to their closest branch. In part 1 of this blog series, we discussed how Dialogflow CX chatbots are structured, including how only some intents will be available at each step of the flow, depending on whether a route for that intent is included on the page. While this allows a chatbot to control a user’s journey through the conversation more strictly, it also risks additional confusion when the chatbot doesn’t understand what the user is saying.

When a Dialogflow CX chatbot gives an “I’m sorry, I didn’t understand” type response, this could have been caused by a low-confidence result or by the matched intent simply not being available on the current page, and the cause won’t be obvious from the chat logs (end-user questions) alone. In this post, we’ll look at the possible causes of a CX chatbot not understanding an utterance and discuss what Default Negative intents and no-match events look like.


A Banking Bot in Dialogflow CX

We can illustrate how Dialogflow CX will respond to the same intent on different pages using this simple banking chatbot, made up of a single flow illustrated in Fig 1.


Figure 1: Flow in a simple banking query chatbot



After the start page, where the user greets the bot, the bot aims to funnel the user towards either questions about their bank account or “other” queries. As shown in Fig 1, if the user is sent down the “Account” path, they will still have the option of moving on to “Other” queries before the conversation ends.

After the initial greeting, the bot asks what kind of help the user requires on the “AccountOrOther” page in Fig 2.


Figure 2: The “AccountOrOther” page which will accept either yes or no answers


Saying “Yes” will take me to a page where I can ask questions about my bank account, whereas saying “No” will take me to the “Other” page where I can ask general questions about the bank. You’ll notice in Fig 2 that if I try to ask a question about contacting the bank while on the “AccountOrOther” page, the bot will not recognise it, because the intent that handles it (other_contact) does not have a defined route on this page.

Similarly, once I’m on the “Other” page in Fig 3, if I ask about my bank account, the bot will appear to not understand, but if I ask the earlier question about contact details again, it will give the correct answer because other_contact has a defined route on this page.


Figure 3: The “Other” page which will only accept questions outside of account queries


Alternative intents & no-match events

So long as an intent is present as a route somewhere in the current flow, Dialogflow CX will include it in its training. So if an intent is being recognised but not returned, what’s going on under the hood? Let’s look at Dialogflow’s Original Response panel to see what happened during the interaction in Fig 2, where I triggered the other_contact intent on a page that only allowed yes or no_other responses.

"Alternative Matched Intents": [
    {
        "Type": "NLU",
        "Score": 0.7079071998596191,
        "Active": false,
        "DisplayName": "other_contact"
    }
],
"Execution Sequence": [
    {
        "Step 1": {
            "Type": "INITIAL_STATE",
            "InitialState": {
                "Event": "sys.no-match-default",
                ...


So, the other_contact intent WAS recognised by the bot, but it is only included in the Original Response panel as an “Alternative Matched Intent”, since it does not have a corresponding route on this page. This feature can be useful for testing, allowing you to check whether intents are repeatedly being detected on pages where you don’t expect them.
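If you export these diagnostics for analysis, the “Alternative Matched Intents” list can be scanned programmatically. Here’s a minimal sketch in Python, assuming a dict parsed from the diagnostic JSON with the field names shown above (find_inactive_matches is a hypothetical helper, not part of any Dialogflow library):

```python
# Hypothetical helper: list intents that the NLU recognised but that had no
# active route on the current page. Field names ("Alternative Matched
# Intents", "Active", "DisplayName") follow the diagnostic excerpt above.

def find_inactive_matches(diagnostic_info: dict) -> list:
    alternatives = diagnostic_info.get("Alternative Matched Intents", [])
    return [a["DisplayName"] for a in alternatives if not a.get("Active", False)]

diagnostic_info = {
    "Alternative Matched Intents": [
        {
            "Type": "NLU",
            "Score": 0.7079071998596191,
            "Active": False,
            "DisplayName": "other_contact",
        }
    ]
}
print(find_inactive_matches(diagnostic_info))  # ['other_contact']
```

Run over a batch of logged turns, a count of these names quickly shows which intents users keep hitting on pages where no route exists for them.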

A “sys.no-match” event is what will trigger the bot to give an “I’m sorry, can you repeat that?” type response. In this case, it occurred because an intent that isn’t defined on this page was triggered, but it can happen in other circumstances as well.

If I were to ask the bot a question totally outside its subject matter, then the bot will likely produce a match whose confidence is below my bot’s threshold (currently 0.3), triggering a no-match event with no alternative intent to propose:

"Step 1": {
    "Type": "INITIAL_STATE",
    "InitialState": {
        "Event": "sys.no-match-default",
        ...

"Alternative Matched Intents": []


And the bot will request another utterance, as shown in Fig 4.
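Putting the two cases together, one way to triage a no-match from exported diagnostics is to check whether the alternatives list is empty. This is a sketch using the same assumed field names as the excerpts above; no_match_cause is a hypothetical helper, not a Dialogflow API:

```python
# Hypothetical triage of a no-match event: a non-empty "Alternative Matched
# Intents" list suggests an intent was recognised but has no route on the
# current page, while an empty list suggests no intent cleared the flow's
# confidence threshold.

def no_match_cause(diagnostic_info: dict) -> str:
    alternatives = diagnostic_info.get("Alternative Matched Intents", [])
    if alternatives:
        names = ", ".join(a["DisplayName"] for a in alternatives)
        return "recognised but not routed on this page: " + names
    return "no intent above the confidence threshold"

# The two excerpts above, reduced to the relevant field:
print(no_match_cause({"Alternative Matched Intents": [
    {"DisplayName": "other_contact", "Score": 0.708, "Active": False}]}))
print(no_match_cause({"Alternative Matched Intents": []}))
```

This distinction is exactly what the raw chat transcript hides: both cases look like the same “can you repeat that?” response to the end user.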


Figure 4: The no-match event when the bot’s response is below the confidence threshold


In Dialogflow CX, the confidence threshold is applied on a per-flow basis and can be edited, along with the NLU settings, on the “Flow Settings” page (see Fig 5).


Figure 5: The “Flow Settings” page on Dialogflow CX. The confidence threshold and machine-learning configuration can be edited from here
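The threshold decision itself is simple to reason about. As a sketch (0.3 is this bot’s current per-flow setting; accept_match is a hypothetical illustration, not a Dialogflow function):

```python
FLOW_CONFIDENCE_THRESHOLD = 0.3  # per-flow value from the Flow Settings page

def accept_match(score: float, threshold: float = FLOW_CONFIDENCE_THRESHOLD) -> bool:
    # Below the threshold, the candidate match is discarded and a
    # sys.no-match event fires instead.
    return score >= threshold

print(accept_match(0.7079))  # the other_contact score from earlier -> True
print(accept_match(0.12))    # an off-topic utterance -> False
```

Raising the threshold makes the bot more cautious (more no-matches, fewer wrong answers); lowering it does the reverse, so it is worth tuning per flow.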


There will be some user interactions that you don’t want to classify to a routed intent at all, but simply ignore. That’s where the Default Negative intent comes in.


The Default Negative intent

There will be utterances that your bot might recognise as having language similar to your model’s intents, but that are actually outside your subject matter. You want to minimise how many wrong answers your bot gives, and to help you achieve this, Dialogflow CX includes a Default Negative intent that you can train to trigger no-match events.

For example, there are many phrases that bear some resemblance to banking queries, but are outside of our target subject area. The training data in our banking bot contains: 

  1. “Can I book a mortgage assessment?”
  2. “I’d like to look at my statement” 
  3. “How much money is in my account” 

But a user might ask:

  1. Can I book a taxi?
  2. I’d like to order a pizza
  3. How much wood would a wood chuck chuck?

And the similar phrasing can throw off your bot’s classification, resulting in our banking chatbot returning an intent and taking the user down a route. If you want to make sure questions about taxis or pizza still trigger a no-match event rather than proceeding through the banking bot’s flow, you can add them to your Default Negative intent, as illustrated in Fig 6.


Figure 6: The Default Negative intent with negative training phrases added


It should be noted that if you trigger a no-match event via the Default Negative intent, the resulting JSON will not return the Default Negative as a matched intent, but it will return a corresponding confidence:

"intentDetectionConfidence": 0.7942114,
"languageCode": "en",
"match": {
    "confidence": 0.7942114,
    "event": "sys.no-match-default",
    ...
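If you post-process responses, this shape can be used to flag likely Default Negative triggers. A hedged sketch, assuming the field names above and that an ordinary intent match would carry an intent name under "match" (matched_default_negative is a hypothetical helper):

```python
# Hypothetical check: a Default Negative trigger shows up as a no-match
# event whose "match" still carries a confidence but no matched intent name.
# A plain low-confidence no-match can look similar, so treat this as a
# heuristic for log analysis rather than a guarantee.

def matched_default_negative(query_result: dict) -> bool:
    match = query_result.get("match", {})
    return (
        match.get("event") == "sys.no-match-default"
        and "confidence" in match
        and "intent" not in match
    )

query_result = {
    "intentDetectionConfidence": 0.7942114,
    "languageCode": "en",
    "match": {"confidence": 0.7942114, "event": "sys.no-match-default"},
}
print(matched_default_negative(query_result))  # True
```

A relatively high confidence on such an event (0.79 here) is a hint that a trained negative phrase, rather than a below-threshold match, produced the no-match.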


When you start adding negative examples, you’ll want to test the accuracy of your Default Negative intent much like your standard intents, to make sure it isn’t becoming confused with them and triggering no-matches when a match should actually occur.


Key points

When you’re assessing any errors that occur in your CX chatbot flow, be sure to consider the following:

  • The bot will give an “I don’t understand” type response to a no-match event.
  • No-match events can be triggered by the match confidence falling below the threshold, by the matched intent not having a route on the current page, or by an utterance matching the Default Negative intent.
  • Which kind of error caused the no-match event can be identified by checking the JSON response.
  • The Default Negative intent can be retrained with negative examples to adjust what triggers a no-match, but be careful to not let this negative intent get confused with the other intents in the model.

Alison Houston

Alison is our Data Model Analyst and builds and trains chatbot models for clients. She also provides advice and troubleshooting support for clients who are struggling with the performance of their own chatbots.

Follow me on LinkedIn