Get value from every customer touchpoint using Amazon Connect as a data gathering mechanism

The recent pandemic and the difficulty of meeting customers in person have made two-way contact centers an effective tool for sales representatives to reach customers. Amazon Connect is the ideal service to manage these contacts, and its adoption gives you the opportunity to gather new business insights. With Amazon Connect, you can program outbound calls to reach customers and build a video contact center to enhance the customer experience.

Amazon Connect provides a unique opportunity to gather data from these engagements and customer touchpoints that can help improve your business. With Amazon Connect, you can empower sales management with call transcriptions, sentiment analysis, recommendation systems, chatbots, integration with customer relationship management (CRM) systems, and call note search, to name a few.

In this post, we walk you through the process of configuring an Amazon Connect two-way contact center to enable call recording and transcription. We describe three use cases that use this data to provide value: proprietary sentiment analysis, intelligent call note search, and recommendation systems. We also demonstrate how to build your own applications and customize these use cases.

Call note storage and transcription, sentiment analysis, and text search are available out of the box from Contact Lens for Amazon Connect. For more information, see Real-time customer insights using machine learning with Contact Lens for Amazon Connect.

Solution overview

The following diagram illustrates the solution architecture.

The flow of this architecture is as follows:

  1. A customer calls your call center in the cloud.
  2. Amazon Connect connects the customer to an agent. As an alternative, the agent can start an outbound call to reach a customer.
  3. When the call is complete, Contact Lens starts transcribing the recorded call and runs sentiment analysis on the transcripts. It stores all artifacts in Amazon Simple Storage Service (Amazon S3) buckets.
  4. As new documents are saved to the corresponding Amazon S3 locations, two AWS Lambda functions, one for chat contacts and one for voice contacts, extract the data of interest and write the wrangled data back to Amazon S3 (see the sketch after this list).
  5. Amazon Kendra regularly updates a search index based on some of the data stored by the Lambda functions.
  6. A similar scheduling concept applies to an AWS Glue crawler.
  7. The crawler updates the AWS Glue Data Catalog, which makes the data easy to query, for example, with Amazon Athena.
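
For illustration, the following is a minimal sketch of what the voice-contact Lambda function (step 4) might look like. The output prefix and the fields extracted from the Contact Lens analysis file are assumptions; adapt them to your bucket layout and schema.

import json
import boto3

s3 = boto3.client('s3')

# Output prefix for the wrangled data (assumption; adapt to your layout)
OUTPUT_PREFIX = 'wrangled/voice/'

def lambda_handler(event, context):
    # Triggered by s3:ObjectCreated events on the Contact Lens output location
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Contact Lens writes one JSON analysis file per contact
        analysis = json.loads(
            s3.get_object(Bucket=bucket, Key=key)['Body'].read())

        # Keep only the fields of interest (assumed schema)
        wrangled = {
            'contact_id': analysis.get('CustomerMetadata', {}).get('ContactId'),
            'transcript': ' '.join(
                turn.get('Content', '') for turn in analysis.get('Transcript', [])),
        }

        s3.put_object(
            Bucket=bucket,
            Key=OUTPUT_PREFIX + key.split('/')[-1],
            Body=json.dumps(wrangled).encode('utf-8'))

After the AWS Glue crawler has cataloged the wrangled files, you can query them with standard SQL in Amazon Athena.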

Your Amazon Kendra data source is scheduled to update itself every day at 10 AM. This way, your Amazon Kendra index stays up to date. If you created the solution components after 10 AM, you can either wait until the next automatic sync, or trigger the synchronization of your data sources in your index via the Amazon Kendra console. For more information, see Using an Amazon S3 data source.
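
If you prefer not to wait for the scheduled sync or use the console, you can also start the synchronization programmatically, as in the following sketch; the index and data source IDs are placeholders for the values created by the CloudFormation stack.

import boto3

kendra = boto3.client('kendra')

# IDs are placeholders; use the values created by the CloudFormation stack
response = kendra.start_data_source_sync_job(
    Id='<data-source-id>',
    IndexId='<index-id>',
)
print(response['ExecutionId'])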

Deploy the data gathering resources

As a first step, we deploy all resources except the Amazon Connect instance using an AWS CloudFormation template. You can do this by choosing Launch Stack.

Define the name of your new S3 bucket and project.
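
If you prefer the command line over the Launch Stack button, deploying the same template with boto3 might look like the following sketch. The template URL and parameter keys are assumptions; take the real values from the Launch Stack link for this post.

import boto3

cfn = boto3.client('cloudformation')

# Template URL and parameter keys are placeholders (assumptions)
cfn.create_stack(
    StackName='connect-data-gathering',
    TemplateURL='https://<template-bucket>.s3.amazonaws.com/template.yaml',
    Parameters=[
        {'ParameterKey': 'BucketName', 'ParameterValue': 'my-connect-data-bucket'},
        {'ParameterKey': 'ProjectName', 'ParameterValue': 'connect-insights'},
    ],
    Capabilities=['CAPABILITY_NAMED_IAM'],
)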

You’re now ready to set up Amazon Connect and the associated contact flow.

Create an Amazon Connect instance

The first step is to create an Amazon Connect instance. When you’re asked to provide a data storage location, make sure you use the S3 bucket you defined and created in your CloudFormation template.

For the rest of the setup, we use the default values, but don’t forget to create an administrator login.
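
If you prefer automation over the console, you can also create the instance with the CreateInstance API, as in the following sketch; the alias is a placeholder, and call recording storage still needs to be pointed at the bucket from your CloudFormation template.

import boto3

connect = boto3.client('connect')

# Creates an instance with Connect-managed users; the alias is a placeholder
response = connect.create_instance(
    IdentityManagementType='CONNECT_MANAGED',
    InstanceAlias='my-contact-center',
    InboundCallsEnabled=True,
    OutboundCallsEnabled=True,
)
instance_id = response['Id']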

After the instance is created, which can take a minute or two, we can log in to the Amazon Connect instance using the admin account created previously. We’re now ready to create our contact flow, claim a number, and attach the flow to that number.

Set up the contact flow

Before we set up our contact flow, we need to enable Contact Lens.

  1. On the Amazon Connect console, choose Analytics tools in the navigation pane.
  2. Select Enable Contact Lens.
  3. Choose Save.

For this post, we have a predefined contact flow template that you can import.

  1. Import the file contact-flow/contact-lens-transfer-flow, available in the GitHub repository.

For instructions on importing contact flows, see Import/Export contact flows.
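
As an alternative to the console import, a sketch using the CreateContactFlow API follows. The instance ID is a placeholder, and the exported file may need minor adjustments to match the Content schema the API expects.

import boto3

connect = boto3.client('connect')

# Load the exported flow definition from the GitHub repository
with open('contact-flow/contact-lens-transfer-flow') as f:
    flow_content = f.read()

connect.create_contact_flow(
    InstanceId='<instance-id>',  # placeholder
    Name='contact-lens-transfer-flow',
    Type='CONTACT_FLOW',
    Content=flow_content,
)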

The imported contact flow should look similar to the following.

Understanding the contact flow

The contact flow enables Contact Lens and call recording before connecting the customer to an agent. When you enable Contact Lens in your contact flow, specify call recording for both the customer and the agent. We also enable Contact Lens speech analytics for English (US) and post-call analytics.

Claim your phone number

Claiming a number takes just a few clicks. For instructions, see Step 3: Claim a phone number. Make sure to choose and attach the previously imported contact flow while claiming the number. If no numbers are available in your country of choice, you can raise a support ticket.
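
If you want to script this step instead, a hedged sketch using the phone number APIs follows; the ARNs and IDs are placeholders, and number availability depends on your account and Region.

import boto3

connect = boto3.client('connect')

# Find an available DID number (country and type are assumptions)
available = connect.search_available_phone_numbers(
    TargetArn='<instance-arn>',  # placeholder
    PhoneNumberCountryCode='US',
    PhoneNumberType='DID',
)
number = available['AvailableNumbersList'][0]['PhoneNumber']

# Claim the number, then attach the imported contact flow to it
claimed = connect.claim_phone_number(
    TargetArn='<instance-arn>',  # placeholder
    PhoneNumber=number,
)
connect.associate_phone_number_contact_flow(
    PhoneNumberId=claimed['PhoneNumberId'],
    InstanceId='<instance-id>',  # placeholder
    ContactFlowId='<contact-flow-id>',  # placeholder
)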

After you claim the phone number, the agent can start receiving customer calls and initiating outbound calls. The calls can be recorded, transcribed, and stored in Amazon S3. You can then use this data to provide additional value to your business, as shown in the next section.

Use cases

In this section, we describe how to use call and chat data collected by Amazon Connect in three use cases: proprietary sentiment analysis, intelligent chat and call notes search via Amazon Kendra, and recommendation systems from call notes. Example notebooks are available in the GitHub repository. Amazon Connect is the data gathering mechanism for all these use cases and can provide security and privacy features to meet your requirements, such as sensitive data redaction.

You can use additional AWS services with Amazon Connect to provide additional value. You can also integrate these use cases into your CRM (such as Salesforce or Zendesk) by using Amazon Connect integration features. For more information, see Set up applications for task creation.

Proprietary sentiment analysis

Contact Lens provides sentiment analysis of recorded calls, which means you can review your contacts directly on the Amazon Connect console and learn how each call went.

You might already have a proprietary sentiment analysis model fine-tuned for your customers. With our solution, we collect all the touchpoints that Contact Lens also uses. This lets you decide whether to use Contact Lens for customer analysis, or to adopt Amazon Connect as a data gathering mechanism and build a data ingestion and inference pipeline that runs your proprietary sentiment analysis model on AWS using Amazon SageMaker.

To get started, we provide a full example in a Jupyter notebook that shows you how to train and deploy your proprietary model as a SageMaker endpoint.

In the example, we process the text from customer calls and use a text classification algorithm (Object2Vec) to perform sentiment analysis. We assume that the sentiment analysis model is already available.

When training a custom classification algorithm, you need to assign labels to the input text. In our example, we use random labels to train the model. A fully operational solution has to include a custom label-gathering mechanism (for example, using Amazon SageMaker Ground Truth), but those details are beyond the scope of this post. The Object2Vec algorithm doesn’t accept text as input directly; therefore, we also provide an example preprocessing notebook where raw text inputs are converted to numerical inputs suitable for processing by Object2Vec.

At the end of the training job, the trained model is available in the account, and you can deploy it, for example, as a SageMaker endpoint. For more information, see Deploy a Model in Amazon SageMaker. When the endpoint is active, you can use Lambda to send data and receive predictions. We also need to convert the text to the numerical input Object2Vec requires. In this case, we implemented a custom serializer to attach to the endpoint. The following is an example code snippet for Lambda:

import json
import pickle

import sagemaker.deserializers
import sagemaker.predictor
from sagemaker.serializers import SimpleBaseSerializer
from nltk.tokenize import word_tokenize

# Name of the deployed Object2Vec endpoint (placeholder; use your own)
endpoint_name = 'o2v-sentiment-endpoint'

class O2VTextSerializer(SimpleBaseSerializer):
    # a dictionary { "word1": integer1, "word2": integer2 }
    def load_vocab_to_tokens(self, file_name):
        self.vocab_to_tokens = pickle.load(open(file_name, 'rb'))

    # a callable: string -> list of strings
    def set_tokenizer(self, tokenizer):
        self.tokenizer = tokenizer

    def sentence_to_tokens(self, sentence):
        """Converts a sentence to a list of integer tokens."""
        words = self.tokenizer(sentence)
        return [self.vocab_to_tokens[w] for w in words if w in self.vocab_to_tokens]

    def serialize(self, data):
        js = {'instances': []}
        for row in data['instances']:
            new_row = row
            if type(new_row['in0']) == str:
                new_row['in0'] = self.sentence_to_tokens(row['in0'])
            if type(new_row['in1']) == str:
                new_row['in1'] = self.sentence_to_tokens(row['in1'])
            js['instances'].append(new_row)
        return json.dumps(js)

serializer = O2VTextSerializer(content_type='application/json')

# map from words to integers, created at training time
serializer.load_vocab_to_tokens('./meta/vocab_to_token_dict.p')
# must be the same tokenizer used for training
serializer.set_tokenizer(word_tokenize)

def lambda_handler(event, context):
    text = event['text']
    # this tests whether the text belongs to category 0;
    # loop over all categories to get the full classification result
    label_to_test = 0

    predictor = sagemaker.predictor.Predictor(
        endpoint_name=endpoint_name,
        serializer=serializer,
        deserializer=sagemaker.deserializers.JSONDeserializer())
    test_payload = {
        'instances': [
            {
                'in0': text,
                'in1': [label_to_test]
            }
        ]
    }

    response = predictor.predict(test_payload)
    return response
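
To test the function, you could invoke it with a simple event like the following; the function name is a placeholder for the Lambda defined above.

import json
import boto3

lambda_client = boto3.client('lambda')

# Function name is a placeholder for the Lambda defined above
result = lambda_client.invoke(
    FunctionName='o2v-sentiment-inference',
    Payload=json.dumps({'text': 'The agent resolved my issue quickly.'}),
)
print(json.loads(result['Payload'].read()))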

Intelligent chat and call notes search via Amazon Kendra

Amazon Connect transcribed calls and chats are searchable by speaker, keywords, sentiment score, and non-talk time. For more information, see Search conversations analyzed by Contact Lens. For an intelligent search, you can use Amazon Kendra, an intelligent search service powered by machine learning (ML). Amazon Kendra reimagines enterprise search so you can easily find the content you’re looking for.

With Amazon Kendra, you can stop searching through troves of unstructured data and discover the right answers to questions. Amazon Kendra is a fully managed service, so there are no servers to provision, and no ML models to build, train, or deploy. Furthermore, Amazon Kendra can be complemented by Amazon Translate to enable multi-language search support.

Amazon Kendra is provisioned automatically with the CloudFormation template provided in this post, and you can use it to implement intelligent search of your calls.
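
As a minimal sketch of what a programmatic search could look like, the following queries the index with boto3; the index ID is a placeholder for the index created by the stack.

import boto3

kendra = boto3.client('kendra')

# The index ID is a placeholder; use the index created by the stack
response = kendra.query(
    IndexId='<index-id>',
    QueryText='Which customers asked about contract renewals?',
)
for item in response['ResultItems']:
    print(item['DocumentTitle']['Text'], '->', item['DocumentExcerpt']['Text'])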

Recommendation systems from call notes

Sales representatives use call notes to capture meeting feedback and actions from each customer engagement. The call notes are recorded or transcribed and then saved in CRM solutions. Although the recorded call notes may include insights such as objectives for the next customer interaction and a follow-up plan, little AI and ML is involved (mainly algorithms to detect personal information), so sales representatives have to go back to the CRM solution every time they want to revisit the call notes to follow up on the conversation or the action points from the meeting. This is essential for creating a continuous conversation with customers, bridged across multiple engagements.

To further improve operational excellence, you can quickly build and integrate recommendation systems to analyze the call notes in real time and provide instant feedback and alerts for the sales representatives on the suggested next best action, which a sales representative can choose to dismiss or accept.

This solution helps strengthen relationships using AI and ML by providing a platform that enables sales representatives to have more meaningful conversations with customers beyond the product itself, for example to discuss scientific trends, new publications, and personalized recommendations based on specific customer needs and areas of interest.

You can further improve the solution with advanced AI and ML by enhancing the existing capabilities to provide insights to sales representatives on how to optimize time and customer engagement based on prior customer interactions and value-adding activities.

You can implement this using an Object2Vec model to classify the call notes, similar to what we demonstrated in the proprietary sentiment analysis use case. The categories aren’t sentiments, but rather the next best actions to recommend after a given text. With this in mind, you can reuse the example notebooks and code presented earlier for this use case.
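
As a sketch of how this could look at inference time, the following reuses the Predictor and serializer from the sentiment example; the candidate actions, their label IDs, and the response shape are assumptions for illustration.

# Candidate actions and their label IDs are assumptions for illustration
CANDIDATE_ACTIONS = {
    0: 'schedule a follow-up call',
    1: 'send product documentation',
    2: 'escalate to the account manager',
}

def recommend_next_action(call_notes, predictor):
    """Scores each candidate action against the call notes and returns the best one."""
    payload = {
        'instances': [
            {'in0': call_notes, 'in1': [label]} for label in CANDIDATE_ACTIONS
        ]
    }
    response = predictor.predict(payload)
    # Object2Vec returns one score per instance pair (assumed response shape)
    scores = [p['scores'][0] for p in response['predictions']]
    best = max(CANDIDATE_ACTIONS, key=lambda label: scores[label])
    return CANDIDATE_ACTIONS[best]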

After the model is trained, you can use it as a first step in a second model, which also takes into account non-textual features such as those coming from CRM systems (customer account data, customer segment, customer orders, and so on). The following diagram illustrates this architecture.

Clean up

To save on costs, make sure you delete all the resources you used when you don’t need them anymore:

  • SageMaker endpoints (if they were deployed)
  • CloudFormation stack
  • Amazon Connect instance

Conclusion

In this post, we demonstrated how to use Amazon Connect as an omnichannel data gathering mechanism to collect data across customer engagements such as chat and call notes, transcriptions, and recordings. We showed how to set up Amazon Connect to collect data from outbound calls. This important feature can make Amazon Connect the go-to service for sales management. Finally, we provided architectures and templates for three use cases that use the data collected by Amazon Connect: proprietary sentiment analysis, intelligent search, and recommendation systems. Try it out today and let us know what you think in the comments!


About the Authors

Michael Wallner is a Global Data Scientist with AWS Professional Services and is passionate about enabling customers on their AI/ML journey in the cloud to become AWSome. Besides having a deep interest in Amazon Connect, he likes sports and enjoys cooking.

Andrea Di Simone is a Data Scientist in the Professional Services team based in Munich, Germany. He helps customers to develop their AI/ML products and workflows, leveraging AWS tools. He enjoys reading, classical music and hiking.

Daniele Angelosante is a Senior Engagement Manager with AWS Professional Services. He is passionate about AI/ML projects and products. In his free time he likes coffee, sport, soccer, and baking.
