Announcing the launch of Amazon Comprehend custom entity recognition real-time endpoints

Amazon Comprehend is a natural language processing (NLP) service that can extract key phrases, places, names, organizations, events, sentiment, and more from unstructured text (for more information, see Detect Entities). But what if you want to add entity types unique to your business, like proprietary part codes or industry-specific terms? In November 2018, Amazon Comprehend added the ability to extend the default entity types to detect custom entities.
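
For example, you can call the pretrained entity detection API with the AWS SDK for Python (Boto3). The following is a minimal sketch, assuming Boto3 is configured with credentials; the Region and sample text are placeholders:

    import boto3

    # Pretrained (default) entity detection; no custom model required
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.detect_entities(
        Text="Amazon Comprehend was announced at re:Invent in Las Vegas.",
        LanguageCode="en",
    )
    for entity in response["Entities"]:
        print(entity["Type"], entity["Text"], round(entity["Score"], 3))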

Until now, inference with a custom entity recognition model was an asynchronous operation: you submitted a batch job and retrieved the results when the job finished.

In this post, we cover how to build an Amazon Comprehend custom entity recognition model and set up an Amazon Comprehend custom entity recognition real-time endpoint for synchronous inference. The following diagram illustrates this architecture.

Solution overview

Amazon Comprehend Custom helps you meet your specific needs without requiring machine learning (ML) knowledge. Amazon Comprehend Custom uses automatic ML (AutoML) to build customized NLP models on your behalf, using data you already have.

For example, if you’re looking at chat messages or IT tickets, you might want to know if they’re related to an AWS offering. To do this, you can build a custom entity recognizer that identifies a word or a group of words as a SERVICE or VERSION entity in the input messages.

In this post, we walk you through the following steps to implement a solution for this use case:

  1. Create a custom entity recognizer trained on annotated labels to identify custom entities such as SERVICE or VERSION.
  2. Create a real-time analysis Amazon Comprehend custom entity recognizer endpoint to identify the chat messages to detect a SERVICE or VERSION entity.
  3. Calculate the inference capacity and pricing for your endpoint.

We provide a sample dataset, aws-service-offerings.txt. The following screenshot shows example entries from the dataset.

You can provide labels for training a custom entity recognizer in two different ways: entity lists and annotations. We recommend annotations over entity lists because the increased context of the annotations can often improve your metrics. For more information, see Improving Custom Entity Recognizer Performance. We preprocessed the input dataset to generate training data and annotations required for training the custom entity recognizer.
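
For reference, an annotations file is a CSV that maps character offsets in the training documents to entity types. The rows below are illustrative (the offsets are hypothetical), but the header matches the format Amazon Comprehend expects:

    File, Line, Begin Offset, End Offset, Type
    train.csv, 0, 0, 21, SERVICE
    train.csv, 0, 28, 32, VERSION
    train.csv, 1, 13, 29, SERVICE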

You can download these preprocessed files, train.csv (training documents) and annotations.csv (annotations).

After you download these files, upload them to an Amazon Simple Storage Service (Amazon S3) bucket in your account for reference during training. For more information about uploading files, see How do I upload files and folders to an S3 bucket?
For more information about creating annotations or labels for your custom dataset, see Developing NER models with Amazon SageMaker Ground Truth and Amazon Comprehend.
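
If you prefer to script the upload step, a minimal Boto3 sketch follows; the bucket name is a placeholder:

    import boto3

    s3 = boto3.client("s3")
    for file_name in ["train.csv", "annotations.csv"]:
        # Upload each local file to the bucket under the same key
        s3.upload_file(file_name, "your-training-bucket", file_name)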

Creating a custom entity recognizer

To create your recognizer, complete the following steps:

  1. On the Amazon Comprehend console, create a custom entity recognizer.
  2. Choose Train recognizer.
  3. For Recognizer name, enter aws-offering-recognizer.
  4. For Custom entity type, enter SERVICE.
  5. Choose Add type.
  6. Enter a second Custom entity type called VERSION.
  7. For Training type, select Using annotations and training docs.
  8. For Annotations location on S3, enter the path for annotations.csv in your S3 bucket.
  9. For Training documents location on S3, enter the path for train.csv in your S3 bucket.
  10. For IAM role, select Create an IAM role.
  11. For Permissions to access, choose Input and output (if specified) S3 bucket.
  12. For Name suffix, enter ComprehendCustomEntity.
  13. Choose Train.

For our dataset, training should take approximately 10 minutes.
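
If you prefer the API to the console, the equivalent call is CreateEntityRecognizer. The following Boto3 sketch mirrors the console steps above; the role ARN and S3 paths are placeholders you would replace with your own:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.create_entity_recognizer(
        RecognizerName="aws-offering-recognizer",
        LanguageCode="en",
        # Placeholder role; it needs read access to the training bucket
        DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccessRole",
        InputDataConfig={
            "EntityTypes": [{"Type": "SERVICE"}, {"Type": "VERSION"}],
            "Documents": {"S3Uri": "s3://your-training-bucket/train.csv"},
            "Annotations": {"S3Uri": "s3://your-training-bucket/annotations.csv"},
        },
    )
    recognizer_arn = response["EntityRecognizerArn"]
    print(recognizer_arn)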

When the recognizer training is complete, you can review the training metrics in the Recognizer details section.

Scroll down to see the individual training performance.

For more information about understanding these metrics and improving recognizer performance, see Custom Entity Recognizer Metrics.
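
You can also read these metrics programmatically with DescribeEntityRecognizer once training is complete; a sketch, with a placeholder recognizer ARN:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Placeholder ARN; use the value returned by create_entity_recognizer
    recognizer_arn = "arn:aws:comprehend:us-east-1:123456789012:entity-recognizer/aws-offering-recognizer"

    response = comprehend.describe_entity_recognizer(
        EntityRecognizerArn=recognizer_arn
    )
    properties = response["EntityRecognizerProperties"]
    # EvaluationMetrics holds the overall Precision, Recall, and F1Score
    print(properties["RecognizerMetadata"]["EvaluationMetrics"])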

When training is complete, you can use the recognizer to detect custom entities in your documents. You can quickly analyze single documents up to 5 KB in real time, or analyze a large set of documents with an asynchronous job (using Amazon Comprehend batch processing).
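
The asynchronous path uses the StartEntitiesDetectionJob API to run the custom recognizer over documents in Amazon S3; the following sketch uses placeholder ARNs and paths:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    comprehend.start_entities_detection_job(
        JobName="detect-service-version",  # illustrative name
        LanguageCode="en",
        # Placeholder ARNs; replace with your recognizer and IAM role
        EntityRecognizerArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer/aws-offering-recognizer",
        DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendDataAccessRole",
        InputDataConfig={
            "S3Uri": "s3://your-training-bucket/input/",
            "InputFormat": "ONE_DOC_PER_LINE",
        },
        OutputDataConfig={"S3Uri": "s3://your-training-bucket/output/"},
    )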

Creating a custom entity endpoint

Creating your endpoint is a two-step process: building an endpoint and then using it by running a real-time analysis.

Building the endpoint

To create your endpoint, complete the following steps:

  1. On the Amazon Comprehend console, choose Customization.
  2. Choose Custom entity recognition.
  3. From the Recognizers list, choose the name of the custom model for which you want to create the endpoint and follow the link. The endpoints list on the custom model details page is displayed. You can also see previously created endpoints and the models they’re associated with.
  4. Select your model.
  5. From the Actions drop-down menu, choose Create endpoint.
  6. For Endpoint name, enter DetectEntityServiceOrVersion.

The name must be unique within the AWS Region and account. Endpoint names have to be unique even across recognizers.

  7. For Inference units, enter the number of inference units (IUs) to assign to the endpoint.

We discuss how to determine how many IUs you need later in this post.

  8. As an optional step, under Tags, enter a key-value pair as a tag.
  9. Choose Create endpoint.

The Endpoints list is displayed, with the new endpoint showing as Creating. When it shows as Ready, you can use the endpoint for real-time analysis.
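
The same endpoint can be built with the CreateEndpoint API; a minimal sketch, with a placeholder model ARN:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.create_endpoint(
        EndpointName="DetectEntityServiceOrVersion",
        # Placeholder ARN of the trained recognizer
        ModelArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer/aws-offering-recognizer",
        DesiredInferenceUnits=1,
    )
    endpoint_arn = response["EndpointArn"]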

Running real-time analysis

After you create the endpoint, you can run real-time analysis using your custom model.

  1. On the Amazon Comprehend console, choose Real-time analysis.
  2. For Analysis type, select Custom.
  3. For Endpoint, choose the endpoint you created.
  4. For Input text, enter the following:
    AWS Deep Learning AMI (Amazon Linux 2) Version 22.0 The AWS Deep Learning AMIs are prebuilt with CUDA 8 and several deep learning frameworks. The DLAMI uses the Anaconda Platform with both Python2 and Python3 to easily switch between frameworks.

  5. Choose Analyze.

You get insights as in the following screenshot, with entities recognized as either SERVICE or VERSION, along with their confidence scores.

You can experiment with different input text combinations to compare and contrast the results.
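
Programmatically, real-time analysis is the same DetectEntities call as before, except you pass the endpoint ARN instead of a language code; the ARN below is a placeholder:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    response = comprehend.detect_entities(
        Text="AWS Deep Learning AMI (Amazon Linux 2) Version 22.0",
        # Placeholder endpoint ARN; with a custom endpoint, no LanguageCode is passed
        EndpointArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/DetectEntityServiceOrVersion",
    )
    for entity in response["Entities"]:
        print(entity["Type"], entity["Text"], round(entity["Score"], 3))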

Determining the number of IUs you need

The number of IUs you need depends on the number of characters you send in your request and the throughput you need from Amazon Comprehend. In this section, we discuss two different use cases with different costs.

In all cases, endpoints are billed in 1-second increments, with a minimum of 60 seconds. Charges continue to accrue from the time you provision your endpoint until it’s deleted, even if no documents are analyzed. For more information, see Amazon Comprehend Pricing.

Use case 1

In this use case, you receive 10 messages/feeds every minute, and each message contains 360 characters that you need to recognize entities for. This equates to the following:

  • 60 characters per second (360 characters x 10 messages ÷ 60 seconds)
  • An endpoint with 1 IU provides a throughput of 100 characters per second

You need to provision an endpoint with 1 IU. Your recognition model has the following pricing details:

  • The price for 1 IU is $0.0005 per second
  • You incur costs from the time you provision your endpoint until it’s deleted, regardless of how many inference calls are made
  • If you’re running your real-time endpoint for 12 hours a day, this equates to a total cost of $21.60 ($0.0005 x 3,600 seconds x 12 hours) for inference
  • The model training and model management costs are the same as for asynchronous entity recognition at $3.00 and $0.50, respectively

The total cost of an hour of model training, a month of model management, and inference using a real-time entity recognition endpoint for 12 hours a day is $25.10.

Use case 2

In this second use case, your requirement increases: you now run inference on 50 messages/feeds every minute, and each message contains 600 characters that you need to recognize entities for. This equates to the following:

  • 500 characters per second (600 characters x 50 messages ÷ 60 seconds)
  • An endpoint with 1 IU provides a throughput of 100 characters per second

You need to provision an endpoint with 5 IUs. Your model has the following pricing details:

  • The price for 1 IU is $0.0005 per second
  • You incur costs from the time you provision your endpoint until it’s deleted, regardless of how many inference calls are made
  • If you’re running your real-time endpoint for 12 hours a day, this equates to a total cost of $108 (5 x $0.0005 x 3,600 seconds x 12 hours) for inference
  • The model training and model management costs are the same as for asynchronous entity recognition at $3.00 and $0.50, respectively

The total cost of an hour of model training, a month of model management, and inference using a real-time entity recognition endpoint with a throughput of 5 IUs for 12 hours a day is $111.50.
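
The arithmetic behind both use cases generalizes to any throughput. The following is a small sketch of the calculation, using the 100 characters/second/IU throughput and $0.0005/IU-second price quoted above:

    import math

    IU_THROUGHPUT = 100   # characters per second per inference unit
    IU_PRICE = 0.0005     # USD per inference unit per second

    def required_ius(chars_per_message, messages_per_minute):
        chars_per_second = chars_per_message * messages_per_minute / 60
        return math.ceil(chars_per_second / IU_THROUGHPUT)

    def daily_inference_cost(ius, hours_per_day=12):
        return ius * IU_PRICE * 3600 * hours_per_day

    print(required_ius(360, 10), daily_inference_cost(1))  # use case 1: 1 IU, $21.60
    print(required_ius(600, 50), daily_inference_cost(5))  # use case 2: 5 IUs, $108.00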

Cleaning up

To avoid incurring future charges, stop or delete resources (the endpoint, recognizer, and any artifacts in Amazon S3) when not in use.

To delete your endpoint, on the Amazon Comprehend console, choose the entity recognizer you created. In the Endpoints section, choose Delete.

To delete your recognizer, in the Recognizer details section, choose Delete.

For instructions on deleting your S3 bucket, see Deleting or emptying a bucket.
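
These cleanup steps can also be scripted. A sketch with placeholder ARNs follows; note that an endpoint must be deleted before the recognizer it uses:

    import boto3

    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Delete the endpoint first; it depends on the recognizer
    comprehend.delete_endpoint(
        EndpointArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer-endpoint/DetectEntityServiceOrVersion"
    )

    # Then delete the recognizer itself
    comprehend.delete_entity_recognizer(
        EntityRecognizerArn="arn:aws:comprehend:us-east-1:123456789012:entity-recognizer/aws-offering-recognizer"
    )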

Conclusion

This post demonstrated how easy it is to set up an endpoint for real-time text analysis to detect custom entities that you trained your Amazon Comprehend custom entity recognizer on. Custom entity recognition extends the capability of Amazon Comprehend by enabling you to identify entity types not covered by the preset generic entity types. With Amazon Comprehend custom entity endpoints, you can now easily derive real-time insights from your custom entity detection models, providing a low-latency experience for your applications. We’re interested in hearing how you would like to apply this new feature to your use cases. Please share your thoughts and questions in the comments section.


About the Authors

Mona Mona is an AI/ML Specialist Solutions Architect based out of Arlington, VA. She works with the World Wide Public Sector team and helps customers adopt machine learning on a large scale. She is passionate about NLP and ML explainability areas in AI/ML.

Prem Ranga is an Enterprise Solutions Architect based out of Houston, Texas. He is part of the Machine Learning Technical Field Community and loves working with customers on their ML and AI journey. Prem is passionate about robotics, is an autonomous vehicles researcher, and also built the Alexa-controlled Beer Pours in Houston and other locations.