IBM Watson is one of the heavyweights in the chatbot business. Packed with AI and NLP technology, and armed with its Jeopardy contestant credentials, it's certainly a lofty product, but one that is aimed at businesses large and small.
In this step-by-step tutorial we will build our own Watson Assistant and connect it to a business app that we use in our small dev team. As you will see in this guide, there are some peculiar pitfalls that will definitely require some custom coding.
Related: Thinking of building with IBM Watson? Here’s what you should consider.
1. Defining the scope of the chatbot
First off, what should our smart assistant help our employees with? In this example, we use Atlassian Jira, so we start with a look at its API docs. We quickly see that most of the endpoints are centered around issues, projects and workflows.
So for our enterprise chatbot example we’ll pick the following use cases:
- Tell the user what’s on the team’s board
- Answer what tasks are due this week
- Give an overview of your issue statuses
Naturally, if you use Trello, GitLab, Monday or any other similar service, you could build against those instead; just refer to their respective API docs once you get to step 7.
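If you'd like to poke around the API before settling on use cases, a few lines of Python are enough to see what Jira exposes. This is purely an exploratory sketch: the instance name, email and API token are placeholders (creating the token is covered in step 7), and the project list endpoint is just one place to start.

import requests

# Placeholders - swap in your own Jira Cloud instance, Atlassian account email and API token (see step 7).
JIRA_BASE = "https://xxxxx.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

# List the projects the authenticated user can see, to get a feel for the data the API offers.
resp = requests.get(
    f"{JIRA_BASE}/rest/api/3/project/search",
    auth=(EMAIL, API_TOKEN),
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
for project in resp.json()["values"]:
    print(project["key"], "-", project["name"])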
Related: The perfect guide to making an enterprise chatbot for your team
2. Make an IBM Cloud account
To get started with our Watson enterprise chatbot, you’ll need a free IBM Cloud account. You can create this here.
Once you’ve registered and verified your email address, you can log in and get started.
3. Create a Watson Assistant instance
Click Catalog in the top navigation, and then in the Catalog search box enter Watson Assistant. Click on the Watson Assistant card to continue.
You can now configure the instance – mainly, give it a service name so you can identify it later.
Click Create to create the instance.
We’re almost there! To start working with your new instance, click Launch Watson Assistant.
Related: 7 things to consider before building an IBM Watson assistant chatbot
4. Create an assistant
Your Watson Assistant instance can contain more than one assistant. This can be useful if you want to have separate assistants for different use cases; for example, one which handles requests for Jira, and another for Bitbucket. For now, let’s create our new Jira chatbot by clicking Create an Assistant from within the Watson Assistant tool.
From here, give your Assistant a name so you can recognize it. You can ignore the other fields.
Finally, we’ll need to create a skill. As of right now, each Watson Assistant can only have one skill. All conversations in Watson Assistant are handled by skills. These can be as simple as a fixed chat conversation, or as complex as getting information from external sources (which we’ll come back to later).
To create a skill, click Add dialog skill from the home page (you can also do this by navigating to the Skills section in the left hand navigation and selecting Create skill).
When prompted, leave Dialog skill selected and select Next.
Next, give your skill a name (and optionally a description) so you can identify it later.
Now your assistant is ready to rock. Let’s start simple by adding a welcome message.
5. Making a welcome message
We're now presented with the Skill editor screen. There are quite a few options in the sidebar, but for now we can ignore most of them and select Dialog. Here, we can build a basic dialog between a user and our chatbot.
This is the dialog flow. Your chatbot will start at the entry point (the top of the flow) and work its way down.
Click on a node to edit it. There are some nodes here by default – let’s take a look by selecting the Welcome node.
The node editor defines what your chatbot should respond with when it recognizes a given intent, and what it should do next.
Things that the assistant can recognize include intents and entities. An intent is a way of describing a user's goal, or what they're trying to do. Entities are keywords that your assistant can pick out and use (e.g., for getting data from an external application). For example, in 'show me Tom's issues', asking for issues is the intent, and 'Tom' is an entity.
welcome is a special type of intent that is triggered when the user first starts talking to the chatbot. We can use this intent to create a welcome message. There’s already a pre-defined response, but we can edit it to make it more personalized or create variations of the same response.
You can test out the skill by clicking Try it.
Next, let's step it up a notch and create an intent, so our chatbot can recognize a user's questions.
6. Creating simple responses
The users of our chatbot are inevitably going to want to talk to it and get a response! In order for the chatbot to recognize an input, we'll need to create an intent. Select Intents from the left hand menu to open the intent editor, then click Create intent.
Give your intent a name that can help you identify it. In this example, let’s create an intent for when a user wants to know what your chatbot can do and call it about_chatbot.
After you click Create intent, you'll be able to add user examples. These are sample phrasings a user might type to mean the same thing. The more examples the better, and Watson Assistant will keep learning new phrasings as your team uses the chatbot.
After entering a phrase, click Add example to save it.
We can now use our intent in a dialog node. Head back to the Dialog screen and choose Add node to create a new node. Give the node a descriptive name.
In the If assistant recognizes section, click in the box and select # intents. A list of available intents will appear; choose the one you just created.
Now we can add a response to the user’s question in the Assistant responds section.
Great job! Your bot should now be able to answer a simple question about itself. Give it a go in the Try it panel. In development, you can see the intent that your chatbot has identified, as well as its response.
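If you'd rather test from a script than from the Try it panel, the same skill can be exercised through the Watson Assistant v1 message API. Here's a minimal sketch using the ibm-watson Python SDK, assuming you've copied the API key, service URL and skill (workspace) ID from your instance's credentials:

from ibm_watson import AssistantV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

# Placeholders - taken from your Watson Assistant service credentials.
authenticator = IAMAuthenticator("your-watson-api-key")
assistant = AssistantV1(version="2020-04-01", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

# Send a test utterance to the skill (workspace) and print what was recognized.
response = assistant.message(
    workspace_id="your-skill-id",
    input={"text": "What can you do?"},
).get_result()

print("Intents:", [(i["intent"], i["confidence"]) for i in response["intents"]])
print("Response:", [r.get("text") for r in response["output"]["generic"]])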
7. Connecting the chatbot to the external application
Having a chatbot that can give simple responses to simple questions is all well and good, but its utility to your team is limited. To really capitalize on the time-saving aspects of a chatbot, connecting your bot to an external application can save valuable time otherwise spent switching applications, hunting through pages of the app and distilling that data into something useful.
For this tutorial, we’re going to connect our bot to Jira, the bug tracking, issue tracking, and project management tool from Atlassian. If you use other project management software like Asana or Trello, most of the steps will be similar – refer to their developer documentation for more info.
Creating a new intent and entities
Our goal is to have our chatbot able to respond to a query like ‘What are Tom’s issues’ or ‘Show me all tasks’, so that we can quickly see our top to-dos without having to open Jira. To handle our new request, we’ll first need to create a new intent, and then also create an entity.
Setting up a new intent is the exact same process as before; here are the user examples I used to train my #get_issues intent.
Now we’ll need to create a new entity. Entities allow a user to include extra information about their request; in our case, that might be which project to return results from or to show only issues assigned to a certain user. To create a new entity, navigate to Entities in the left navigation and select Create Entity. Give your entity a name – I went with assignee, so we can filter issues by the assigned user.
Next, we'll need to create a Dictionary of values for our entity. A Dictionary gives Watson a list of 'things' to look for that fall under this entity definition. As we're looking for users, I added 5 entries to the dictionary for the members of my team; I used their name in Jira as the value (as we'll use this later to search Jira), and added synonyms to cover the other ways users might refer to the same person.
Now Watson can identify both what the user's intent is and what they're talking about. Try it out in the Try it panel – your chatbot won't know how to respond just yet, but it should recognize the intent and any entities in your request. Cool!
Authenticating with our app
Basic authentication is a simple way to identify yourself to an application, in this case using your email address and an API key as credentials. By setting up an API key, our chatbot can make authenticated requests to your Jira instance.
We’re using basic authentication to keep things simple in this tutorial, but it is possible to use OAuth 2.0 with IBM Watson for more secure, token based authentication. Let us know in the comments if you’re interested in learning how!
To create an API key, log in to the API Tokens management page in Jira. It’s recommended that you create the API key when logged in as a user with permissions matching what your team will need to see – so a user who has full visibility over the projects your team works with, for example.
Select Create API token to make a new token, and give it a recognizable name. When you’re ready to continue, click Create. Your new token will be shown to you only once, so make a note of it somewhere!
Your API key is like your password. Keep it secure and don’t share it with anyone else.
Once you’ve got your API token, we’re all set to create a webhook – IBM Watson’s way of communicating with other services. We’ll create a webhook that queries Jira for issues from a given project, and optionally filters by the assignee entity we created earlier.
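Before wiring the token into Watson, it's worth a quick sanity check that it actually authenticates. Here's a small sketch using Python's requests library against Jira's 'myself' endpoint; the instance name, email and token are placeholders.

import requests

# Placeholders - your Jira Cloud instance, Atlassian account email and the API token you just created.
JIRA_BASE = "https://xxxxx.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

# /rest/api/3/myself returns the profile of the authenticated user - a cheap way to confirm the credentials work.
resp = requests.get(
    f"{JIRA_BASE}/rest/api/3/myself",
    auth=(EMAIL, API_TOKEN),
    headers={"Accept": "application/json"},
)
print(resp.status_code, resp.json().get("displayName"))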
Creating the webhook
To create a new webhook, use the left navigation pane to go to Options > Webhooks. In the URL field, enter the URL of the API endpoint you're calling out to. For Jira, we'll use the search endpoint, which lets us query Jira using JQL – Jira Query Language. This is at https://xxxxx.atlassian.net/rest/api/3/search, where xxxxx is the name of your Jira instance.
You can read more about the Jira API and JQL on the Jira Developer Documentation website.
To comply with Jira’s API spec, we’ll also need to add some additional headers; click Add Header to add the headers as I’ve done below.
Finally, we’ll need to tell Jira who we are – our trusty API key will come in useful here! Choose Add Authorization to add authentication details. Enter your email address as the username and your API key as the password, then choose Save. Your webhook should look a little like this:
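Screenshots aside, what this webhook ends up doing is a POST to the search endpoint, with the parameters as a JSON body and your email/API token as basic auth. Here's a rough Python equivalent you can use to test the request outside Watson (the Accept and Content-Type headers are the standard ones for Jira's REST API; treat the exact header set as an assumption about the setup above):

import requests

# Placeholders - the same instance, email and API token as before.
JIRA_BASE = "https://xxxxx.atlassian.net"
EMAIL = "you@example.com"
API_TOKEN = "your-api-token"

# The same parameters we'll give the Watson webhook in the dialog node below.
body = {
    "jql": "project = 'Global Marketing' AND status != 'Done'",
    "maxResults": 5,
}

resp = requests.post(
    f"{JIRA_BASE}/rest/api/3/search",
    json=body,
    auth=(EMAIL, API_TOKEN),
    headers={"Accept": "application/json", "Content-Type": "application/json"},
)
resp.raise_for_status()
data = resp.json()

# The fields we'll reference from the dialog node later on.
print("Total matching issues:", data["total"])
for issue in data["issues"]:
    print(issue["key"], "-", issue["fields"]["summary"])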
Using the webhook in a dialog node
All we need to do now is tell our chatbot what to do with the webhook we created. Head back to the Dialog page and add a new node with a descriptive name.
We want the node to trigger when the chatbot recognizes the intent we created earlier, so select that as the ‘trigger’ under If assistant recognizes.
Next, click the Customize button next to the node name and toggle Webhooks to On, then click Apply. Note that the side pane changes; instead of just responding, our chatbot will 'Then callout to my webhook'. Perfect!
The Parameters section allows you to pass parameters in the body of your webhook request. It's beyond the remit of this blog post to explain the intricacies of the Jira REST API, but in our example, I've added the following parameters to begin with:
jql: "project = 'Global Marketing' AND status != 'Done'"
maxResults: 5
These parameters instruct the API to return at most 5 results, limited to one project ('Global Marketing') and to issues that aren't marked as 'Done'.
Now that our request is set up, we’ll need to tell our chatbot what to do with the data it receives. You’ll see that by default, the webhook response gets saved to a variable called $webhook_result_1, and that there are two prefilled Assistant responses – one for when the chatbot recognizes $webhook_result_1, and one for anything else (the error case).
It’s good practice to have your chatbot respond to the user with a message to let them know something went wrong, so make sure to fill out the error case – even if it’s a simple text response. Our main response is a little more complicated; let’s take a look.
The webhook response stored in the variable is accessible like any other object, except that you'll need SpEL expressions to show values from it in your responses (simply put, you'll need to wrap expressions in <? ?> tags). By checking the results of your response (either by printing the whole thing out with your chatbot and inspecting it, or by testing the request with a tool like Postman first), you can identify the data you want to display and print it out to your user.
By setting Response variations to multiline, you can split your response over multiple lines by adding more variations. This can be useful for making your responses more legible.
In this example, I’ve printed out the total number of open issues as well as the 5 most recent open issues. The full chatbot response looks like this:
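For reference, a response along these lines does the job. The <? ?> expressions follow the field names in Jira's standard search payload (total, issues[n].key, issues[n].fields.summary); the exact wording, and how many issues you list, is up to you.

There are <? $webhook_result_1.total ?> open issues in Global Marketing. Here are the first few:
1. <? $webhook_result_1.issues[0].key ?>: <? $webhook_result_1.issues[0].fields.summary ?>
2. <? $webhook_result_1.issues[1].key ?>: <? $webhook_result_1.issues[1].fields.summary ?>

Continue the pattern for however many issues you want to show.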
If you want to do more complicated transforms of your data before displaying it to the user, it's recommended to use IBM Cloud Functions (or another cloud functions provider) as the target of your webhook and have the function fetch and reshape the data from the external service itself.
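As a rough sketch of that approach: an IBM Cloud Functions Python action receives the webhook parameters as its argument and returns a JSON object, so you can call Jira inside the function and hand Watson back only the fields you care about. Everything here is illustrative; in practice you'd bind the Jira credentials as default parameters on the action rather than passing them from the dialog.

import requests

def main(params):
    # IBM Cloud Functions entry point: `params` holds whatever the Watson webhook sends,
    # merged with any default parameters bound to the action.
    jira_base = params.get("jira_base", "https://xxxxx.atlassian.net")
    email = params.get("email", "you@example.com")
    api_token = params.get("api_token", "your-api-token")

    resp = requests.post(
        f"{jira_base}/rest/api/3/search",
        json={"jql": params.get("jql", ""), "maxResults": params.get("maxResults", 5)},
        auth=(email, api_token),
        headers={"Accept": "application/json"},
    )
    data = resp.json()

    # Reshape the Jira payload into something the dialog node can print directly.
    return {
        "total": data.get("total", 0),
        "summaries": [
            f'{issue["key"]}: {issue["fields"]["summary"]}'
            for issue in data.get("issues", [])
        ],
    }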
Click Save to save your changes.
You should now be able to ask your chatbot to show you all issues, and get a formatted response using data from your Jira project!
Entities
Right now, our chatbot can only show us all issues, but can’t show issues for a specific user (one of the entities we defined earlier). To make this magic happen, we can extend our query parameter to include an entity in the webhook call. Let’s change the JQL parameter:
jql: "project = 'Global Marketing' AND status != 'Done' AND assignee = '@assignee'"
By referencing the assignee entity, we can make our query dynamic based on whose issues we want to view. This could be extended further, e.g. to only show tasks or issues, or to show issues from different projects.
Now when we query our chatbot for a specific user’s issues, we get a response specific to that user!
N.B.: due to JQL limitations, you would need to create two nodes for getting issues – one for getting all issues, which doesn't filter against assignee, and another for getting issues assigned to a given user, which uses the entity in the query. For brevity's sake, we've skipped this process.
Related: 7 things to consider before building an IBM Watson assistant chatbot
8. Deploying your chatbot
Deploying your IBM Watson chatbot is actually really simple (compared to everything else!). On your Watson Assistant dashboard, choose Add Integration to see your deployment options. Embeddable web chats and Intercom integration are paid features, but Facebook Messenger and Slack deployment are free, as is an IBM-branded web chat for testing.
Setting up the integrations is fairly simple – the setup process is step-by-step and easy to follow. Creating an IBM-branded test chatbot takes only two clicks, and your team can start using your chatbot straight away!
Bottom line
Phew! That was involved, but we hope you followed us to the end and made your very own connected chatbot. Here are our key takeaways after building a chatbot with IBM Watson Assistant.
- Hosted directly from IBM Watson Cloud, so no on-premise hosting required
- Development process is fairly simple, user friendly and easy enough to understand
- Setting up external channels is easy (though embedding your chatbot on another page is a paid feature)
- Free to get started (limits do apply)
- Only for developers, as there are no plug-and-play integrations with other applications
- Assistants can only have one skill and so are limited in what they can achieve
- No Single Sign-On for business applications built in, so showing personalized lists, tasks, tickets, etc. will need even more scripting
- Documentation, community and debugging tools leave much to be desired