Semantic segmentation data labeling and model training using Amazon SageMaker

In computer vision, semantic segmentation is the task of classifying every pixel in an image with a class from a known set of labels such that pixels with the same label share certain characteristics. It produces a segmentation mask of the input image. For example, the following images show a segmentation mask of the cat label.

In November 2018, Amazon SageMaker announced the launch of the SageMaker semantic segmentation algorithm. With this algorithm, you can train your models with a public dataset or your own dataset. Popular image segmentation datasets include the Common Objects in Context (COCO) dataset and PASCAL Visual Object Classes (PASCAL VOC), but the classes of their labels are limited and you may want to train a model on target objects that aren’t included in the public datasets. In this case, you can use Amazon SageMaker Ground Truth to label your own dataset.

In this post, I demonstrate the following solutions:

  • Using Ground Truth to label a semantic segmentation dataset
  • Transforming the results from Ground Truth to the required input format for the SageMaker built-in semantic segmentation algorithm
  • Using the semantic segmentation algorithm to train a model and perform inference

Semantic segmentation data labeling

To build a machine learning model for semantic segmentation, we need to label a dataset at the pixel level. Ground Truth gives you the option to use human annotators through Amazon Mechanical Turk, third-party vendors, or your own private workforce. To learn more about workforces, refer to Create and Manage Workforces. If you don’t want to manage the labeling workforce on your own, Amazon SageMaker Ground Truth Plus is another great option: it’s a turnkey data labeling service that enables you to create high-quality training datasets quickly and reduces costs by up to 40%. For this post, I show you how to manually label the dataset with the Ground Truth auto-segment feature and how to crowdsource the labeling with a Mechanical Turk workforce.

Manual labeling with Ground Truth

In December 2019, Ground Truth added an auto-segment feature to the semantic segmentation labeling user interface to increase labeling throughput and improve accuracy. For more information, refer to Auto-segmenting objects when performing semantic segmentation labeling with Amazon SageMaker Ground Truth. With this new feature, you can accelerate your labeling process on segmentation tasks. Instead of drawing a tightly fitting polygon or using the brush tool to capture an object in an image, you draw only four points: the top-most, bottom-most, left-most, and right-most points of the object. Ground Truth takes these four points as input and uses the Deep Extreme Cut (DEXTR) algorithm to produce a tightly fitting mask around the object. For a tutorial using Ground Truth for image semantic segmentation labeling, refer to Image Semantic Segmentation. The following is an example of how the auto-segmentation tool generates a segmentation mask automatically after you choose the four extreme points of an object.

Crowdsourcing labeling with a Mechanical Turk workforce

If you have a large dataset and you don’t want to manually label hundreds or thousands of images yourself, you can use Mechanical Turk, which provides an on-demand, scalable, human workforce to complete jobs that humans can do better than computers. Mechanical Turk software formalizes job offers to the thousands of workers willing to do piecemeal work at their convenience. The software also retrieves the work performed and compiles it for you, the requester, who pays the workers only for satisfactory work. To get started with Mechanical Turk, refer to Introduction to Amazon Mechanical Turk.

Create a labeling job

The following is an example of a Mechanical Turk labeling job for a sea turtle dataset. The sea turtle dataset is from the Kaggle competition Sea Turtle Face Detection, and I selected 300 of its images for demonstration purposes. Sea turtle isn’t a common class in public datasets, so it represents a situation in which you need to label your own dataset.

  1. On the SageMaker console, choose Labeling jobs in the navigation pane.
  2. Choose Create labeling job.
  3. Enter a name for your job.
  4. For Input data setup, select Automated data setup.
    This generates a manifest of input data.
  5. For S3 location for input datasets, enter the path for the dataset.
  6. For Task category, choose Image.
  7. For Task selection, select Semantic segmentation.
  8. For Worker types, select Amazon Mechanical Turk.
  9. Configure your settings for task timeout, task expiration time, and price per task.
  10. Add a label (for this post, sea turtle), and provide labeling instructions.
  11. Choose Create.

After you set up the labeling job, you can check the labeling progress on the SageMaker console. When it’s marked as complete, you can choose the job to check the results and use them for the next steps.

Dataset transformation

After you get the output from Ground Truth, you can use SageMaker built-in algorithms to train a model on this dataset. First, you need to prepare the labeled dataset in the input format that the SageMaker semantic segmentation algorithm requires.

Required input data channels

SageMaker semantic segmentation expects your training dataset to be stored on Amazon Simple Storage Service (Amazon S3). The dataset in Amazon S3 is expected to be presented in two channels, one for train and one for validation, using four directories, two for images and two for annotations. Annotations are expected to be uncompressed PNG images. The dataset might also have a label map that describes how the annotation mappings are established. If not, the algorithm uses a default. For inference, an endpoint accepts images with an image/jpeg content type. The following is the required structure of the data channels:

s3://bucket_name
    |- train
        |- image1.jpg
        |- image2.jpg
    |- validation
        |- image3.jpg
        |- image4.jpg
    |- train_annotation
        |- image1.png
        |- image2.png
    |- validation_annotation
        |- image3.png
        |- image4.png
    |- label_map
        |- train_label_map.json
        |- validation_label_map.json

Every JPG image in the train and validation directories has a corresponding PNG label image with the same name in the train_annotation and validation_annotation directories. This naming convention helps the algorithm associate each label with its corresponding image during training. The train, train_annotation, validation, and validation_annotation channels are mandatory. The annotations are single-channel PNG images. Any PNG format works as long as the image metadata (mode) lets the algorithm read the annotations as single-channel 8-bit unsigned integers.
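
If you generate annotation images yourself, a quick sanity check can catch masks that won’t load correctly before you launch a training job. The following is a minimal sketch, assuming the Pillow library and a hypothetical local copy of the train_annotation directory:

import os

from PIL import Image  # assumes Pillow is installed

annotation_dir = "train_annotation"  # hypothetical local copy of the channel

for fname in os.listdir(annotation_dir):
    if not fname.endswith(".png"):
        continue
    with Image.open(os.path.join(annotation_dir, fname)) as im:
        # 'L' (8-bit grayscale) and 'P' (8-bit palette) both read as a single
        # channel of unsigned 8-bit integers; other modes may not
        if im.mode not in ("L", "P"):
            print(f"{fname}: mode {im.mode} may not load as single-channel uint8")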

Output from the Ground Truth labeling job

The outputs generated from the Ground Truth labeling job have the following folder structure:

s3://turtle2022/labelturtles/
    |- activelearning
    |- annotation-tool
    |- annotations
        |- consolidated-annotation
            |- consolidation-request
            |- consolidation-response
            |- output
                |- 0_2022-02-10T17:40:03.294994.png
                |- 0_2022-02-10T17:41:04.530266.png
        |- intermediate
        |- worker-response
    |- intermediate
    |- manifests
        |- output
            |- output.manifest
The segmentation masks are saved in s3://turtle2022/labelturtles/annotations/consolidated-annotation/output. Each annotation image is a .png file named after the index of the source image and the time the labeling was completed. For example, the following are the source image (Image_1.jpg) and its segmentation mask generated by the Mechanical Turk workforce (0_2022-02-10T17:41:04.724225.png). Notice that the index in the mask’s name is different from the number in the source image name.

The output manifest from the labeling job is in the /manifests/output/output.manifest file. It’s a JSON Lines file: each line is a JSON object that records the mapping between a source image, its label, and other metadata. The following JSON line records the mapping between the shown source image and its annotation:

{"source-ref":"s3://turtle2022/Image_1.jpg","labelturtles-ref":"s3://turtle2022/labelturtles/annotations/consolidated-annotation/output/0_2022-02-10T17:41:04.724225.png","labelturtles-ref-metadata":{"internal-color-map":{"0":{"class-name":"BACKGROUND","hex-color":"#ffffff","confidence":0.25988},"1":{"class-name":"Turtle","hex-color":"#2ca02c","confidence":0.25988}},"type":"groundtruth/semantic-segmentation","human-annotated":"yes","creation-date":"2022-02-10T17:41:04.801793","job-name":"labeling-job/labelturtles"}}

The source image is called Image_1.jpg, and the annotation’s name is 0_2022-02-10T17:41:04.724225.png. To prepare the data in the data channel format that the SageMaker semantic segmentation algorithm requires, we need to rename each annotation so that it has the same name as its source JPG image. We also need to split the dataset into train and validation directories for both the source images and the annotations.
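
Because each manifest line is a standalone JSON object, you can also read these mappings directly with the json module instead of pattern matching. The following is a minimal sketch; the labelturtles-ref key names come from the labeling job name used in this post:

import json

with open("output.manifest") as f:  # path to the downloaded manifest file
    for line in f:
        record = json.loads(line)
        source = record["source-ref"]            # for example, s3://turtle2022/Image_1.jpg
        annotation = record["labelturtles-ref"]  # S3 URI of the segmentation mask PNG
        color_map = record["labelturtles-ref-metadata"]["internal-color-map"]
        classes = [c["class-name"] for c in color_map.values()]
        print(source, "->", annotation, classes)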

Transform the output from a Ground Truth labeling job to the required input format

To transform the output, complete the following steps:

  1. Download the labeling job’s files from Amazon S3 to a local directory:
    !aws s3 cp s3://turtle2022/ Seaturtles --recursive

  2. Read the manifest file and rename each annotation to match its source image name:
    import os
    import re
    
    # Local copy from the download step; absolute path so a later os.chdir doesn't break it
    dir_name = os.path.abspath('Seaturtles')
    label_job = 'labelturtles'
    manifest_path = dir_name + '/' + label_job + '/manifests/output/output.manifest'
    output_path = dir_name + '/' + label_job + '/annotations/consolidated-annotation/output'
    S3_name = 'turtle2022/'
    
    with open(manifest_path, 'r') as file:
        txt = file.readlines()
    
    im_list = []
    for string in txt:
        try:
            # Source image name (without extension) from the source-ref field
            im_name = re.search(S3_name + r'(.+?)\.jpg', string).group(1)
            print(im_name)
            im_png = im_name + '.png'
            im_list.append(im_name)
            # Annotation file name from the labeling job's output path
            annotation_name = re.search('output/(.+?)"', string).group(1)
            # Rename the annotation so it shares its source image's name
            os.rename(output_path + '/' + annotation_name, output_path + '/' + im_png)
        except AttributeError:
            pass

  3. Split the dataset into train and validation sets:
    from random import sample
    
    # Randomly sample 80% of the images for training; the remaining 20% are for validation
    train_num = int(len(im_list) * 0.8)
    train_name = sample(im_list, train_num)
    test_name = list(set(im_list) - set(train_name))

  4. Make a directory in the required format for the semantic segmentation algorithm data channels:
    os.chdir('./semantic_segmentation_pascalvoc_2022-01-11')
    os.mkdir('train')
    os.mkdir('validation')
    os.mkdir('train_annotation')
    os.mkdir('validation_annotation')

  5. Move the train and validation images and their annotations to the created directories.
    1. For the training set, use the following code:
      import shutil
      
      for name in train_name:
          train_im = name + '.jpg'
          shutil.move(dir_name + '/' + train_im, 'train/' + train_im)
          
          train_annotation = name + '.png'
          shutil.move(dir_name + '/labelturtles/annotations/consolidated-annotation/output/' + train_annotation,
                      'train_annotation/' + train_annotation)

    2. For the validation set, use the following code:
      for name in test_name:
          val_im = name + '.jpg'
          shutil.move(dir_name + '/' + val_im, 'validation/' + val_im)
          
          val_annotation = name + '.png'
          shutil.move(dir_name + '/labelturtles/annotations/consolidated-annotation/output/' + val_annotation,
                      'validation_annotation/' + val_annotation)

  6. Upload the train and validation datasets and their annotation datasets to Amazon S3:
    !aws s3 cp train s3://turtle2022/train/ --recursive
    !aws s3 cp train_annotation s3://turtle2022/train_annotation/ --recursive
    !aws s3 cp validation s3://turtle2022/validation/ --recursive
    !aws s3 cp validation_annotation s3://turtle2022/validation_annotation/ --recursive

SageMaker semantic segmentation model training

In this section, we walk through the steps to train your semantic segmentation model.

Follow the sample notebook and set up data channels

You can follow the instructions in Semantic Segmentation algorithm is now available in Amazon SageMaker to apply the semantic segmentation algorithm to your labeled dataset. This sample notebook shows an end-to-end example introducing the algorithm. In the notebook, you learn how to train and host a semantic segmentation model using the fully convolutional network (FCN) algorithm with the Pascal VOC dataset. Because I don’t plan to train a model on the Pascal VOC dataset, I skipped Step 3 (data preparation) in this notebook. Instead, I directly created train_channel, train_annotation_channel, validation_channel, and validation_annotation_channel using the S3 locations where I stored my images and annotations:

train_channel = 's3://turtle2022/train'
train_annotation_channel = 's3://turtle2022/train_annotation'
validation_channel = 's3://turtle2022/validation'
validation_annotation_channel = 's3://turtle2022/validation_annotation'
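
These locations are then grouped into a dictionary of named channels for training. The following is a minimal sketch; the channel names are the four mandatory channels described earlier, and the sample notebook equivalently wraps each URI in sagemaker.inputs.TrainingInput:

data_channels = {
    "train": train_channel,
    "validation": validation_channel,
    "train_annotation": train_annotation_channel,
    "validation_annotation": validation_annotation_channel,
}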

Adjust hyperparameters for your own dataset in the SageMaker estimator

I followed the notebook and created a SageMaker estimator object (ss_estimator) to train my segmentation model. One thing to customize for the new dataset is in ss_estimator.set_hyperparameters: change num_classes=21 to num_classes=2 (turtle and background), and I also changed epochs=10 to epochs=30 because 10 is only for demo purposes. I then used an ml.p3.2xlarge instance for model training by setting instance_type="ml.p3.2xlarge". The training completed in 8 minutes. The best MIoU (mean Intersection over Union) of 0.846 was achieved at epoch 11, with a pix_acc (the percentage of pixels in the image that are classified correctly) of 0.925, which is a pretty good result for this small dataset.
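
The following sketch shows those changes applied to an estimator like the sample notebook’s. The surrounding setup is an assumption based on that notebook, the output_path bucket location is hypothetical, and num_training_samples assumes the 240-image training split (80% of 300):

import sagemaker

# Container image for the built-in semantic segmentation algorithm in this region
training_image = sagemaker.image_uris.retrieve(
    "semantic-segmentation", sagemaker.Session().boto_region_name
)

ss_estimator = sagemaker.estimator.Estimator(
    training_image,
    sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.p3.2xlarge",         # accelerated instance for training
    output_path="s3://turtle2022/output",  # hypothetical model artifact location
)

ss_estimator.set_hyperparameters(
    backbone="resnet-50",        # as in the sample notebook
    algorithm="fcn",             # fully convolutional network
    use_pretrained_model=True,
    num_classes=2,               # turtle and background
    epochs=30,                   # increased from the demo value of 10
    num_training_samples=240,    # assumes the 80% training split of 300 images
)

ss_estimator.fit(data_channels)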

Model inference results

I hosted the model on a low-cost ml.c5.xlarge instance:

# Attach to the completed training job and deploy the model to an endpoint
training_job_name = 'ss-notebook-demo-2022-02-12-03-37-27-151'
ss_estimator = sagemaker.estimator.Estimator.attach(training_job_name)
ss_predictor = ss_estimator.deploy(initial_instance_count=1, instance_type="ml.c5.xlarge")
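
For ss_predictor.predict to accept raw JPEG bytes and return a class mask you can plot, the predictor’s serializer and deserializer need to be set up, as the sample notebook does. The following is a sketch of that setup; this ImageDeserializer is an assumption modeled on the notebook’s, decoding the returned PNG mask into a NumPy array with Pillow:

import io

import numpy as np
from PIL import Image
from sagemaker.deserializers import BaseDeserializer
from sagemaker.serializers import IdentitySerializer

class ImageDeserializer(BaseDeserializer):
    """Deserialize the PNG mask returned by the endpoint into a NumPy array."""

    ACCEPT = "image/png"

    def deserialize(self, stream, content_type):
        try:
            return np.array(Image.open(io.BytesIO(stream.read())))
        finally:
            stream.close()

# Send raw JPEG bytes; receive the mask as a 2D array of class indexes
ss_predictor.serializer = IdentitySerializer(content_type="image/jpeg")
ss_predictor.deserializer = ImageDeserializer()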

Finally, I prepared a test set of 10 turtle images to see the inference result of the trained segmentation model:

import os

import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import numpy as np

path = "testturtle/"
img_path_list = []

# Collect the test image paths
for file in os.listdir(path):
    if file.endswith(('.jpg', '.png', '.jpeg')):
        img_path_list.append(path + file)

colnum = 5
fig, axs = plt.subplots(2, colnum, figsize=(20, 10))

for i, img_path in enumerate(img_path_list):
    print(img_path)
    img = mpimg.imread(img_path)
    with open(img_path, "rb") as imfile:
        imbytes = imfile.read()
    cls_mask = ss_predictor.predict(imbytes)
    # Show the source image with the predicted mask overlaid (background pixels masked out)
    axs[i // colnum, i % colnum].imshow(img, cmap='gray')
    axs[i // colnum, i % colnum].imshow(np.ma.masked_equal(cls_mask, 0), cmap='jet', alpha=0.8)

plt.show()

The following images show the results.

The segmentation masks of the sea turtles look accurate, and I’m happy with this result from a model trained on a 300-image dataset labeled by Mechanical Turk workers. You can also explore other supported networks, such as the pyramid scene parsing network (PSP) or DeepLab-V3, in the sample notebook with your dataset.
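
Switching networks is a hyperparameter change on the same estimator before training; for example:

# Valid values for the built-in algorithm's network are "fcn" (default), "psp", and "deeplab"
ss_estimator.set_hyperparameters(algorithm="psp")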

Clean up

Delete the endpoint when you’re finished with it to avoid incurring continued costs:

ss_predictor.delete_endpoint()

Conclusion

In this post, I showed how to customize semantic segmentation data labeling and model training using SageMaker. First, you can set up a labeling job with the auto-segmentation tool or use a Mechanical Turk workforce (as well as other options). If you have more than 5,000 objects, you can also use automated data labeling. Then you transform the outputs from your Ground Truth labeling job to the required input formats for SageMaker built-in semantic segmentation training. After that, you can use an accelerated computing instance (such as ml.p2 or ml.p3) to train a semantic segmentation model with the sample notebook and deploy the model to a more cost-effective instance (such as ml.c5.xlarge). Lastly, you can review the inference results on your test dataset with a few lines of code.

Get started with SageMaker semantic segmentation data labeling and model training with your favorite dataset!


About the Author

Kara Yang is a Data Scientist in AWS Professional Services. She is passionate about helping customers achieve their business goals with AWS cloud services. She has helped organizations build ML solutions across multiple industries such as manufacturing, automotive, environmental sustainability and aerospace.
