Creating an intelligent ticket routing solution using Slack, Amazon AppFlow, and Amazon Comprehend
Support tickets, customer feedback forms, user surveys, product feedback, and forum posts are some of the documents that businesses collect from their customers and employees. The applications used to collect these case documents typically include incident management systems, social media channels, customer forums, and email. Routing these cases quickly and accurately to support groups best suited to handle them speeds up resolution times and increases customer satisfaction.
In traditional incident management systems internal to a business, a case is assigned to a support group either by the employee during case creation or by a centralized support group that routes tickets to specialized groups after case creation. Both scenarios have drawbacks. In the first, the employee opening the case must be aware of the various support groups and their functions; deciding on the right support group adds cognitive overload for the employee opening the case. In both scenarios there is a chance of human error, which results in re-routing cases and thereby increases resolution times. These repetitive tasks decrease employee productivity.
Enterprises use business communication platforms like Slack to facilitate conversations between employees. This post provides a solution that simplifies reporting incidents through Slack and routes them to the right support groups. You can use this solution to set up a Slack channel in which employees can report many types of support issues. Individual support groups have their own private Slack channels.
Amazon AppFlow provides a no-code solution to transfer data from Slack channels into AWS securely. You can use Amazon Comprehend custom classification to classify the case documents into support groups automatically. Upon classification, the solution posts the message in the respective private support channel by using Slack Application Programming Interface (API) integration. Depending on your incident management system, you can also automate the ticket creation process using its APIs.
When you combine Amazon AppFlow with Amazon Comprehend, you can implement an accurate, intelligent routing solution that eliminates the need to create and assign tickets to support groups manually. You can increase productivity by focusing on higher-priority tasks.
Solution overview
For our use case, we use the fictitious company AnyCorp Software Inc, whose programmers use a primary Slack channel to ask technical questions about four different topics. The programmer gets a reply with a ticket number that they can refer to in future communication. The question is intelligently routed to one of five private channels: one dedicated to each topic-specific support group, plus one for all other issues. The following diagram illustrates this architecture.
The solution to building this intelligent ticket routing solution comprises four main components:
- Communication platform – A Slack application with a primary support channel for employees to report issues, four private channels (one for each support group), and one private channel for all other issues.
- Data transfer – A flow in Amazon AppFlow securely transfers data from the primary support channel in Slack to an Amazon Simple Storage Service (Amazon S3) bucket, scheduled to run every 1 minute.
- Document classification – A multi-class custom document classifier in Amazon Comprehend uses ground truth data comprising issue descriptions and their corresponding support group labels. You also create an endpoint for this custom classification model.
- Routing controller – An AWS Lambda function is triggered when Amazon AppFlow puts new incidents into the S3 bucket. For every incident received, the function calls the Amazon Comprehend custom classification model endpoint, which returns a label for the support group best suited to address the incident. After receiving the label from Amazon Comprehend, the function uses the Slack API to reply to the original thread in the primary support channel. The reply contains a ticket number and the name of the support group that will address the issue. Simultaneously, the function posts the issue to the private channel associated with that support group.
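The routing decision at the heart of this function can be sketched in a few lines of Python. This is a simplified illustration, not the shipped Lambda code: pick_channel, the channel map, and the sample scores are hypothetical, while the Classes/Name/Score shape matches what the Amazon Comprehend classify_document API returns.

```python
# Sketch of the routing controller's core decision logic (hypothetical helper
# names; the actual Lambda code ships in the CloudFormation deployment package).
# Given the Classes list returned by the Amazon Comprehend classify_document
# API and a category-to-channel map, pick the Slack channel to post to.

def pick_channel(classes, channel_map, threshold=0.75):
    """Return the Slack channel for the highest-scoring label, or the
    fallback channel when no label clears the confidence threshold."""
    top = max(classes, key=lambda c: c["Score"]) if classes else None
    if top and top["Score"] >= threshold:
        return channel_map.get(top["Name"], channel_map["OTHERS"])
    return channel_map["OTHERS"]

# Example: a response shaped like Comprehend's output for a Maven question.
channel_map = {
    "GROOVY": "groovy-issues", "HADOOP": "hadoop-issues",
    "MAVEN": "maven-issues", "LOG4J2": "log4j2-issues",
    "OTHERS": "other-issues",
}
classes = [{"Name": "MAVEN", "Score": 0.97}, {"Name": "GROOVY", "Score": 0.02}]
print(pick_channel(classes, channel_map))  # maven-issues
```

Messages whose top score falls below the threshold go to the other-issues fallback channel, which corresponds to the OTHERS category used later in the CloudFormation parameters.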
Dataset
For Amazon Comprehend to classify a document into one of the named categories, it needs to train on a dataset with known inputs and outputs. For our use case, we use the Jira Social Repository dataset hosted on GitHub. The dataset comprises issues extracted from the Jira Issue Tracking System of four popular open-source ecosystems: the Apache Software Foundation, Spring, JBoss, and CodeHaus communities. We used the Apache Software Foundation issues, filtered four categories (GROOVY, HADOOP, MAVEN, and LOG4J2), and created a CSV file for training purposes.
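As a rough sketch of how such a training file can be assembled (the issue texts below are invented placeholders, not rows from the actual dataset), Amazon Comprehend multi-class training expects a headerless CSV with one label,document pair per line:

```python
import csv
import io

# Illustrative sketch only: the downloadable data.zip is prebuilt, and these
# issue texts are made up. Amazon Comprehend multi-class training expects one
# "LABEL,document text" row per line, with no header row.
issues = [
    ("MAVEN", "Build fails with missing artifact in the local repository"),
    ("HADOOP", "NameNode does not start after upgrading the cluster"),
    ("GROOVY", "Closure delegate resolution behaves differently in 2.x"),
    ("LOG4J2", "RollingFileAppender ignores the max file size policy"),
]

buf = io.StringIO()
writer = csv.writer(buf)
for label, text in issues:
    writer.writerow([label, text])

print(buf.getvalue())
```

Each category should have enough example rows for the classifier to learn from; the prepared train-data.zip in this post already has that shape.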
- Download the data.zip file.
- On the Amazon S3 console, choose Create bucket.
- For Bucket name, enter [YOUR_COMPANY]-comprehend-issue-classifier.
- Choose Create.
- Unzip the train-data.zip file and upload all the files in the folder to the [YOUR_COMPANY]-comprehend-issue-classifier bucket.
- Create another bucket named [YOUR_COMPANY]-comprehend-issue-classifier-output. We store the output of the custom classification model training in this bucket.

Your [YOUR_COMPANY]-comprehend-issue-classifier bucket should look like the following screenshot.
Deploying Amazon Comprehend
To deploy Amazon Comprehend, complete the following steps:
- On the Amazon Comprehend console, under Customization, choose Custom classification.
- Choose Train classifier.
- For Name, enter comprehend-issue-classifier.
- For Classifier mode, select Using Multi-class mode.
Because our dataset has multiple classes and only one class per line, we use the multi-class mode.
- For S3 location, enter s3://[YOUR_COMPANY]-comprehend-issue-classifier.
- For Output data, choose Browse S3.
- Find the bucket you created in the previous step and choose the s3://[YOUR_COMPANY]-comprehend-issue-classifier-output folder.
- For IAM role, select Create an IAM role.
- For Permissions to access, choose Input and output (if specified) S3 bucket.
- For Name suffix, enter comprehend-issue-classifier.
- Choose Train classifier.
The process can take up to 30 minutes to complete.
- When the training is complete and the status shows as Trained, choose comprehend-issue-classifier.
- In the Endpoints section, choose Create endpoint.
- For Endpoint name, enter comprehend-issue-classifier-endpoint.
- For Inference units, enter 1.
- Choose Create endpoint.
- When the endpoint is created, copy its ARN from the Endpoint details section to use later in the Lambda function.
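If you want to call the endpoint from code rather than the console, a minimal sketch looks like the following. classify_document and its Text/EndpointArn parameters are the real boto3 Comprehend API; the helper names, account ID, and Region default are assumptions for illustration.

```python
# Hypothetical sketch of invoking the custom classification endpoint from
# code. The ARN builder simply mirrors the format shown in the console's
# Endpoint details section; the account ID below is a placeholder.

def endpoint_arn(region, account_id, name):
    return (f"arn:aws:comprehend:{region}:{account_id}:"
            f"document-classifier-endpoint/{name}")

def classify_issue(text, arn, region="us-east-1"):
    import boto3  # lazy import; only needed when actually calling AWS
    client = boto3.client("comprehend", region_name=region)
    # Returns a list like [{"Name": "MAVEN", "Score": 0.97}, ...]
    return client.classify_document(Text=text, EndpointArn=arn)["Classes"]

arn = endpoint_arn("us-east-1", "123456789012",
                   "comprehend-issue-classifier-endpoint")
print(arn)
```

The printed ARN has the same shape as the ComprehendEndpointArn value you pass to the CloudFormation template later in this post.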
Creating a Slack app
In this section, we create a Slack app to connect with Amazon AppFlow for our intelligent ticket routing solution. For more information, see Creating, managing, and building apps.
- Sign in to your Slack workspace where you’d like to create the ticket routing solution, or create a new workspace.
- Create a Slack app named TicketResolver.
- After you create the app, in the navigation pane, under Features, choose OAuth & Permissions.
- For Redirect URLs, enter https://console.aws.amazon.com/appflow/oauth.
- For User Token Scopes, add the following:
  - channels:history
  - channels:read
  - groups:history
  - groups:read
  - im:history
  - im:read
  - mpim:history
  - mpim:read
  - users:read
- For Bot Token Scopes, add the following:
  - channels:history
  - channels:read
  - chat:write
- In the navigation pane, under Settings, choose Basic Information.
- Expand Install your app to your workspace.
- Choose Install App to Workspace.
- Follow the instructions to install the app to your workspace.
- Create a Slack channel named testing-slack-integration. This channel is your primary channel for reporting issues.
- Create an additional five channels: groovy-issues, hadoop-issues, maven-issues, log4j2-issues, and other-issues. Mark them all as private. These are used by the support groups designated to handle the specific issues.
- In your channel, choose Connect an app.
- Connect the TicketResolver app you created.
Deploying the AWS CloudFormation template
You can deploy this architecture using the provided AWS CloudFormation template in us-east-1.
- Choose Launch Stack:
- Provide a stack name.
- Provide the following parameters:
  - CategoryChannelMap, a mapping between Amazon Comprehend categories and your Slack channels in string format; for example, '{ "GROOVY":"groovy-issues", "HADOOP":"hadoop-issues", "MAVEN":"maven-issues", "LOG4J2":"log4j2-issues", "OTHERS":"other-issues" }'
  - ComprehendClassificationScoreThreshold, which can be left at the default value of 0.75
  - ComprehendEndpointArn, the ARN of the endpoint you created in the previous step, which looks like arn:aws:comprehend:{YOUR_REGION}:{YOUR_ACCOUNT_ID}:document-classifier-endpoint/comprehend-issue-classifier-endpoint
  - Region, the Region where your AWS resources are provisioned; the default is us-east-1
  - SlackOAuthAccessToken, the OAuth access token from the OAuth Tokens & Redirect URLs section of your Slack API page
  - SlackBotUserOAuthAccessToken, the bot user OAuth access token from the OAuth Tokens & Redirect URLs section of your Slack API page
  - SlackClientID, found as Client ID in the App Credentials section of your Slack app home page
  - SlackClientSecret, found as Client Secret in the App Credentials section of your Slack app home page
  - SlackWorkspaceInstanceURL, found by choosing the down arrow next to the workspace name
  - SlackChannelID, the channel ID of the testing-slack-integration channel
  - LambdaCodeBucket, the name of the bucket where your Lambda code is stored. The default is intelligent-routing-lambda-code, the public bucket containing the Lambda deployment package. If your AWS account is in us-east-1, no change is needed. For other Regions, download the Lambda deployment package from here, create an S3 bucket in your AWS account, upload the package, and change the parameter value to your bucket name.
  - LambdaCodeKey, the file name of your Lambda code .zip file. The default is lambda.zip, the name of the deployment package in the public bucket. Revise this to your file name if you downloaded and uploaded the Lambda deployment package to your own bucket.
- Choose Next.
- In the Capabilities and transforms section, select all three check-boxes to provide acknowledgment to AWS CloudFormation to create AWS Identity and Access Management (IAM) resources and expand the template.
- Choose Create stack.
This process might take 15 minutes or more to complete, and creates the following resources:
- IAM roles for the Lambda function to use
- A Lambda function to integrate Slack with Amazon Comprehend to categorize issues typed by Slack users
- An Amazon AppFlow Slack connection for the flow to use
- An Amazon AppFlow flow to securely connect the Slack app with AWS services
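If you prefer launching the stack from code instead of the Launch Stack button, the same parameters can be assembled programmatically. This is a hedged sketch: the parameter names come from the template described above, but the stack name, template URL, and all values are placeholders, and only a subset of the parameters is shown.

```python
import json

# Hypothetical sketch of launching the stack with boto3 instead of the
# console. All values below are placeholders; substitute your own.
category_channel_map = {
    "GROOVY": "groovy-issues", "HADOOP": "hadoop-issues",
    "MAVEN": "maven-issues", "LOG4J2": "log4j2-issues",
    "OTHERS": "other-issues",
}
params = {
    "CategoryChannelMap": json.dumps(category_channel_map),
    "ComprehendClassificationScoreThreshold": "0.75",
    "ComprehendEndpointArn": ("arn:aws:comprehend:us-east-1:123456789012:"
                              "document-classifier-endpoint/"
                              "comprehend-issue-classifier-endpoint"),
    "SlackChannelID": "C0123456789",
}

# CloudFormation expects a list of ParameterKey/ParameterValue pairs:
cfn_parameters = [
    {"ParameterKey": k, "ParameterValue": v} for k, v in params.items()
]

# import boto3
# boto3.client("cloudformation", region_name="us-east-1").create_stack(
#     StackName="intelligent-ticket-routing",
#     TemplateURL="https://...",  # the template behind the Launch Stack link
#     Parameters=cfn_parameters,
#     Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM",
#                   "CAPABILITY_AUTO_EXPAND"],
# )
print(len(cfn_parameters))
```

The three Capabilities values correspond to the three check boxes you acknowledge in the console's Capabilities and transforms section.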
Activating the Amazon AppFlow flow
The AWS CloudFormation template already created the flow for you; you just need to activate it on the Amazon AppFlow console.
- On the Amazon AppFlow console, choose View flows.
- Choose Activate flow.

Your SlackAppFlow flow is now active and runs every 1 minute to gather incremental data from the Slack channel testing-slack-integration.
Testing your integration
You can test the end-to-end integration by typing an issue related to one of your topics in the testing-slack-integration channel and waiting about 1 minute for the Amazon AppFlow flow to transfer the data to the S3 bucket. This triggers the Lambda function, which runs the Amazon Comprehend analysis to determine a category, replies in the testing-slack-integration channel with a randomly generated ticket number, and posts the issue to the private channel for that category.
For example, in the following screenshot, we enter a Maven-related issue in the testing-slack-integration channel. You see a reply from the TicketResolver app added to your original message in the testing-slack-integration channel.

You also see a Slack message posted in the maven-issues channel.
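The reply step can be pictured with a small sketch. The ticket-number format and helper names here are invented; only the channel, thread_ts, and text fields mirror the real Slack chat.postMessage API, which the Lambda function would call with the bot token.

```python
import random
import string

# Hypothetical sketch of composing the threaded reply. The actual ticket
# numbering scheme in the deployed Lambda function may differ.

def make_ticket_number():
    return "TKT-" + "".join(random.choices(string.digits, k=6))

def reply_payload(channel, thread_ts, group, ticket):
    """Body for Slack's chat.postMessage call, replying in-thread."""
    return {
        "channel": channel,
        "thread_ts": thread_ts,  # keeps the reply in the original thread
        "text": f"Ticket {ticket} created and routed to {group}.",
    }

payload = reply_payload("testing-slack-integration", "1700000000.000100",
                        "maven-issues", make_ticket_number())
print(payload["text"])
```

Posting the same issue text to the matching private channel is a second chat.postMessage call without the thread_ts field.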
Cleaning up
To avoid incurring any charges in the future, delete all the resources you created as part of this post:
- Amazon Comprehend endpoint comprehend-issue-classifier-endpoint
- Amazon Comprehend classifier comprehend-issue-classifier
- Slack app TicketResolver
- Slack channels testing-slack-integration, groovy-issues, hadoop-issues, maven-issues, log4j2-issues, and other-issues
- S3 bucket [YOUR_COMPANY]-comprehend-issue-classifier-output
- S3 bucket [YOUR_COMPANY]-comprehend-issue-classifier
- CloudFormation stack (this removes all the resources the CloudFormation template created)
Conclusion
In this post, you learned how to use Amazon Comprehend, Amazon AppFlow, and Slack to create an intelligent issue-routing solution. For more information about securely transferring data from software-as-a-service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow to AWS, see Get Started with Amazon AppFlow. For more information about Amazon Comprehend custom classification models, see Custom Classification. You can also discover other Amazon Comprehend features and get inspiration from other AWS blog posts about using Amazon Comprehend beyond classification.
About the Author
Shanthan Kesharaju is a Senior Architect who helps our customers with AI/ML strategy and architecture. He is an award winning product manager and has built top trending Alexa skills. Shanthan has an MBA in Marketing from Duke University and an MS in Management Information Systems from Oklahoma State University.
So Young Yoon is a Conversation A.I. Architect at AWS Professional Services, where she works with customers across multiple industries to develop specialized conversational assistants that have helped these customers provide their users faster and more accurate information through natural language. Soyoung has an M.S. and B.S. in Electrical and Computer Engineering from Carnegie Mellon University.