Building a smart garage door opener with AWS DeepLens and Amazon Rekognition

Many industries, including retail, manufacturing, and healthcare, are adopting IoT-enabled devices and using AI or machine learning (ML) technologies to enable those devices to make human-like decisions without human intervention. You can apply some of these use cases, in which AI/ML technologies power IoT-enabled devices, at home as well.

This post showcases how to use AWS DeepLens, Amazon Rekognition, and other AWS services to recognize a car’s license plate and trigger an IoT-based garage door opener. You could apply this solution to many other use cases, for example, in manufacturing, to help control the flow of robots or packages on a production floor. In the healthcare industry, you could use the solution in a hospital to allow or deny access to staff into restricted areas based on face recognition or reading and validating a unique code on their staff security badge.

Solution overview

The following diagram illustrates the architecture of the solution.

An AWS DeepLens device enables you to run deep learning at the edge. It runs video frames from its camera against an object detection model. When the model detects a car, the device uploads a frame to Amazon S3. Storing a new image in the S3 bucket triggers an AWS Lambda function, which calls Amazon Rekognition to compare the license plate in the image to a list of allowed values in an Amazon DynamoDB table. If the function finds a match, it retrieves the third-party API secrets from AWS Secrets Manager and calls a third-party API to open the garage door.

You may already have an IoT-enabled garage door, and most garage door openers provide some sort of API for opening and closing the door programmatically. For that reason, this post assumes you have an existing IoT-based garage door opener rather than building one from scratch.

This project uses the following AWS services:

  • AWS DeepLens – A fully programmable, deep learning-enabled video camera optimized for Apache MXNet, TensorFlow, and Caffe. You can train your models in Amazon SageMaker, an ML platform to build, train, optimize, and host models, and then deploy them to AWS DeepLens.
  • Amazon S3 – An object storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low cost.
  • Lambda – An event-driven, serverless computing platform that runs code in response to events and automatically manages the computing resources the code requires.
  • Amazon Rekognition – An image recognition deep learning service that detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images.
  • DynamoDB – A fully managed NoSQL database service that supports key-value and document data structures.
  • Secrets Manager – A secrets management service that protects access to your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.

To implement the solution, complete the following steps:

  1. Deploy a sample object detection project
  2. Change the AWS DeepLens inference Lambda function
  3. Register your license plate in DynamoDB
  4. Store third-party API credentials in AWS Secrets Manager
  5. Create a Lambda function
  6. Test the system

Deploying a sample object detection project

To deploy the project, complete the following steps:

  1. Register your AWS DeepLens device.
  2. On the AWS DeepLens console, create a new project.
  3. For Project type, select Use a project template.
  4. For Project templates, select Object detection.
  5. Choose Next.
  6. Name the project car-license-plate-detector.
  7. Choose Create.

After you create the project, you deploy it to the AWS DeepLens device.

  1. Under Projects, choose the project you just created.
  2. Choose Deploy to device.
  3. On the Target device page, choose your registered AWS DeepLens device.
  4. Choose Review.
  5. Review the policy and choose Deploy.

The project takes up to a few minutes to deploy to AWS DeepLens.

  1. On the IAM console, attach the AmazonS3FullAccess managed policy to the AWSDeepLensGreengrassGroupRole role.

You can use the AWS DeepLens output streams to make sure that your project has successfully deployed to the device.

Changing the AWS DeepLens inference Lambda function

After you deploy the sample object detection project into AWS DeepLens, you need to change the inference (edge) Lambda function to upload image frames to Amazon S3. Complete the following steps:

  1. Create an S3 bucket for the images (if you prefer to script this step, see the sketch after this list).
  2. Use the default settings when creating the bucket and make sure to choose the same Region in which you configured your AWS DeepLens device.
  3. On the Lambda console, choose the deeplens-object-detection function.
  4. Remove the function code and replace it with the deeplens_lambda code from the GitHub repo.
  5. Replace bucket_name with your bucket name.
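
If you prefer to script the bucket creation rather than use the console, the following is a minimal boto3 sketch; the Region and bucket name are placeholders, and buckets in us-east-1 must omit the LocationConstraint:

import boto3

region = 'us-west-2'  # placeholder: use the Region of your AWS DeepLens device
bucket_name = 'your-car-frames-bucket'  # placeholder: bucket names must be globally unique

s3 = boto3.client('s3', region_name=region)
if region == 'us-east-1':
    # us-east-1 does not accept a LocationConstraint
    s3.create_bucket(Bucket=bucket_name)
else:
    s3.create_bucket(Bucket=bucket_name,
                     CreateBucketConfiguration={'LocationConstraint': region})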

These changes modify the inference code to upload an image frame to Amazon S3 every 30 seconds when a car is detected.

If a car is detected, a frame is captured and saved to the S3 bucket by calling the push_to_s3 function. See the following code:

if detectedCar:
    rfr = cv2.resize(frame, (672, 380))
    push_to_s3(rfr)
    time.sleep(30)

The function is defined as follows:

def push_to_s3(img):
    try:
        index = 0

        # Build a unique object key from the current date, time, and epoch timestamp
        timestamp = int(time.time())
        now = datetime.datetime.now()
        key = "car_{}_{}_{}_{}_{}_{}.jpg".format(now.month, now.day,
                                                 now.hour, now.minute,
                                                 timestamp, index)

        s3 = boto3.client('s3')

        # Encode the frame as a JPEG and upload it to the S3 bucket
        encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
        _, jpg_data = cv2.imencode('.jpg', img, encode_param)
        response = s3.put_object(ACL='private',
                                 Body=jpg_data.tostring(),
                                 Bucket=bucket_name,
                                 Key=key)

        # Publish the upload status to the AWS IoT topic for monitoring
        client.publish(topic=iot_topic, payload="Response: {}".format(response))
        client.publish(topic=iot_topic, payload="Frame pushed to S3")
    except Exception as e:
        msg = "Pushing to S3 failed: " + str(e)
        client.publish(topic=iot_topic, payload=msg)

  1. Save the Lambda function and publish a new version of the code.

You can now go to your AWS DeepLens project and update the function on the device.

  1. On the AWS DeepLens console, choose your project.
  2. For Version, choose the latest version of your Lambda function.
  3. For Timeout, enter 300.

  1. Choose Save.
  2. Under Projects, choose the project.
  3. Choose Deploy.

The project can take up to a few minutes to deploy.

Registering your license plate in DynamoDB

In this step, you add your license plate number to a DynamoDB table. You can either do this step manually with the AWS CLI commands from the GitHub repo, or use a simple web application hosted on Amazon S3 that uses Amazon API Gateway and Lambda to insert the item into the table. This post uses the AWS CLI to insert the item. You can complete this step from your local computer, but you need to have the AWS CLI installed and configured on your machine. For more information, see What Is the AWS Command Line Interface?

Complete the following steps:

  1. Create a DynamoDB table.
  2. For Table name, enter CarInfo.
  3. For Partition key, enter LicensePlate.
  4. Choose Create.

You can also complete the preceding steps with AWS CLI by entering the following code from your terminal:

aws dynamodb create-table --table-name CarInfo --attribute-definitions AttributeName=LicensePlate,AttributeType=S --key-schema AttributeName=LicensePlate,KeyType=HASH --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

Then insert your own license plate number into the CarInfo table. (Replace the LicensePlate variable with your car license plate number.) See the following code:

aws dynamodb put-item --table-name CarInfo --item '{"LicensePlate": {"S": "YOUR OWN LICENSE PLATE NUMBER"}}' --return-consumed-capacity TOTAL
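
You can confirm that the item was written by reading it back. The following is a minimal boto3 sketch; the license plate value is a placeholder:

import boto3

dynamodb = boto3.client('dynamodb')
response = dynamodb.get_item(
    TableName='CarInfo',
    Key={'LicensePlate': {'S': 'YOUR OWN LICENSE PLATE NUMBER'}}  # placeholder value
)
# The Item key is present only if the plate was stored successfully
print(response.get('Item', 'No matching item found'))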

Storing API credentials in Secrets Manager

In this step, you use Secrets Manager to store third-party API credentials to protect the secrets. Complete the following steps:

  1. On the IAM console, under Roles, choose Create role.
  2. For AWS Service, choose Lambda.
  3. Choose Next: Permissions.
  4. Attach the following managed policies:
    • SecretsManagerReadWrite
    • AmazonS3FullAccess
    • AmazonRekognitionReadOnlyAccess
    • AmazonDynamoDBReadOnlyAccess
    • CloudWatchLogsFullAccess

You use this role later when you create the Lambda function.
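
If you prefer to create the role programmatically, the following is a minimal boto3 sketch of an equivalent role; the role name is a placeholder, and the managed policies match the list above:

import json
import boto3

iam = boto3.client('iam')
role_name = 'License-Plate-Match-Lambda-Role'  # placeholder role name

# Trust policy that lets Lambda assume the role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole"
    }]
}
iam.create_role(RoleName=role_name,
                AssumeRolePolicyDocument=json.dumps(trust_policy))

# Attach the managed policies listed above
for policy in ['SecretsManagerReadWrite', 'AmazonS3FullAccess',
               'AmazonRekognitionReadOnlyAccess', 'AmazonDynamoDBReadOnlyAccess',
               'CloudWatchLogsFullAccess']:
    iam.attach_role_policy(RoleName=role_name,
                           PolicyArn='arn:aws:iam::aws:policy/{}'.format(policy))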

  1. On the Secrets Manager console, choose Store a new secret.
  2. For Select secret type, select Other type of secrets.
  3. Under Secret key/value, enter the key username with your third-party API user name as its value.
  4. Add the key password with your third-party API password as its value.
  5. Leave other fields at their default.

  1. Proceed to store the secrets.
  2. After you create the secret, record the secret ARN, which is passed later as an environment variable for the Lambda function.
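
The following is a minimal sketch of how the cloud Lambda function can retrieve these credentials at runtime, assuming the secret ARN is passed in a SECRET_NAME environment variable and the secret value is the JSON key/value pair you just stored; the retrieval code in the GitHub repo may differ:

import json
import os
import boto3

def get_api_credentials():
    # SECRET_NAME holds the ARN of the secret created above
    secretsmanager = boto3.client('secretsmanager')
    secret = secretsmanager.get_secret_value(SecretId=os.environ['SECRET_NAME'])
    credentials = json.loads(secret['SecretString'])
    return credentials['username'], credentials['password']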

Creating a new Lambda function

Create a new (cloud) Lambda function on the AWS side that calls the Amazon Rekognition API to detect the license plate number and verifies it against the DynamoDB table. If it finds a match, the function calls the third-party API to open the garage door. Complete the following steps:

  1. On the Lambda console, create a new function called License-Plate-Match-cloud.
  2. For Runtime, choose Python 3.7.
  3. Under Permission, choose Use an existing role.
  4. Choose the role you created.
  5. Remove the function code and replace it with the License-Plate-Match-cloud code from the GitHub repo.

Different license plates have different formats, so you may need to adjust the PlateNumber code. The following example code reads the plate number from the second entry (index 1) of the TextDetections list in the Amazon Rekognition response:

plate_detected = False
response = rekognition.detect_labels(Image=image, MaxLabels=20, MinConfidence=50)

# If a license plate is among the detected labels, read its text
for object in response["Labels"]:
    if object["Name"] == "License Plate":
        plate_detected = True
        break

if plate_detected:
    # Adjust the following code based on your license plate format
    PlateNumber = rekognition.detect_text(Image=image)
    confidence_score = PlateNumber['TextDetections'][1]['Confidence']
    # Based on the license plate format, specify which entry in the response to read
    PlateNumber = PlateNumber['TextDetections'][1]['DetectedText']
    PlateNumber = re.sub('[^a-zA-Z0-9 n.]', '', PlateNumber).replace(" ", "")
    print(PlateNumber)
    print(confidence_score)

if confidence_score > confidence_threshold:
    print('Confidence threshold matched')
    match = match_plate("CarInfo", "LicensePlate", PlateNumber)
else:
    print('Confidence threshold did not match')
    return 0

This code uses a confidence score, which is a number between 0 and 100 that indicates the probability that a given prediction is correct. For this post, you use a MinConfidence of 50 when detecting objects and a confidence_threshold of 70 when detecting text from the image, to discard false positive results. The optimum threshold depends on the application. In many cases, you get the best user experience by setting minimum confidence values higher than the default value.
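
The match_plate call in the preceding snippet checks the detected plate against the CarInfo table. The following is a minimal sketch of such a lookup, assuming the table and key names from earlier; the actual helper in the GitHub repo may differ:

import boto3

def match_plate(table_name, key_name, plate_number):
    # Return True if the detected plate exists in the DynamoDB table
    dynamodb = boto3.client('dynamodb')
    response = dynamodb.get_item(
        TableName=table_name,
        Key={key_name: {'S': plate_number}}
    )
    return 'Item' in response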

This post also adds a third-party module called myq.py, which you need to import in lambda_function.py as import myq. The myq.py file can vary depending on which garage door opener you use. The module retrieves the third-party API secrets from Secrets Manager and calls the respective company’s API to open or close the garage door. You can download the module code from the GitHub repo.
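
The following is a purely illustrative sketch of the shape such a helper module can take, using the environment variables described in the next step; the function names, endpoints, headers, and request bodies are assumptions, and the actual myq.py in the GitHub repo and your garage door vendor’s API will differ:

# myq.py (illustrative sketch only, not the module from the repo)
import json
import os

import boto3
import requests  # provided by the Lambda layer created later in this post

def _get_credentials():
    # Retrieve the third-party API user name and password from Secrets Manager
    secret = boto3.client('secretsmanager').get_secret_value(
        SecretId=os.environ['SECRET_NAME'])
    creds = json.loads(secret['SecretString'])
    return creds['username'], creds['password']

def open_garage_door():
    username, password = _get_credentials()
    base_uri = os.environ['BASE_URI']
    session = requests.Session()
    # Hypothetical login call; replace with your vendor's real authentication flow
    login = session.post(base_uri + os.environ['BASE_ENDPOINT'],
                         json={'username': username, 'password': password},
                         headers={'App-Id': os.environ['APP_ID']})  # hypothetical header name
    login.raise_for_status()
    # Hypothetical command call that asks the opener to open the door
    response = session.put(base_uri + os.environ['DEVICE_SET_ENDPOINT'],
                           json={'desired_state': 'open'})
    response.raise_for_status()
    return response.status_code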

  1. Under Environment variables, enter the following keys and values:
    • For the key SECRET_NAME, enter the value of the secret ARN that you created.
    • For the key APP_ID, enter the third-party garage door opener device app ID.
    • For the key DEVICE_LIST_ENDPOINT, enter the third-party garage door opener list endpoint.
    • For the key DEVICE_SET_ENDPOINT, enter the third-party garage door opener set endpoint.
    • For the key BASE_URI, enter the third-party garage door opener base URI.
    • For the key BASE_ENDPOINT, enter the third-party garage door opener base endpoint.
  2. Change the Timeout parameter value to 15 minutes.

You also need to set up a trigger for this function on the S3 bucket that you created. Whenever AWS DeepLens uploads a new image to the S3 bucket, the trigger invokes the function.
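
If you prefer to configure the trigger programmatically instead of through the Lambda console, the following is a minimal boto3 sketch; the bucket name, account ID, Region, and statement ID are placeholders:

import boto3

bucket_name = 'your-car-frames-bucket'  # placeholder: the bucket AWS DeepLens uploads to
function_arn = 'arn:aws:lambda:us-west-2:123456789012:function:License-Plate-Match-cloud'  # placeholder ARN

# Allow Amazon S3 to invoke the function
boto3.client('lambda').add_permission(
    FunctionName='License-Plate-Match-cloud',
    StatementId='s3-invoke-license-plate-match',  # placeholder statement ID
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::{}'.format(bucket_name))

# Invoke the function whenever a new .jpg object is created in the bucket
boto3.client('s3').put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': function_arn,
            'Events': ['s3:ObjectCreated:*'],
            'Filter': {'Key': {'FilterRules': [{'Name': 'suffix', 'Value': '.jpg'}]}}
        }]
    })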

  1. Create a new Lambda layer for the Python requests module, which is used in the myq.py file to serve HTTP requests.

For more information, see AWS Lambda Layers.

This step is required because AWS is removing the vendored version of requests from Botocore. For more information, see Removing the vendored version of requests from Botocore.

  1. To create the requests Lambda layer, enter the following code:
    mkdir -p temp/python && cd temp/python
    pip3 install requests -t .
    cd ..
    zip -r9 ../requests.zip .
    aws lambda publish-layer-version --layer-name requests \
        --description "requests package" \
        --zip-file fileb://../requests.zip \
        --compatible-runtimes python3.7

After you create the layer, add the layer to the License-Plate-Match-cloud function.

  1. On the Lambda console, choose your function.
  2. Choose Layers.
  3. Choose Add a Layer.
  4. From the drop-down menu, choose the requests layer.
  5. Choose Add.
  6. Choose Save.
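
Alternatively, you can attach the layer programmatically. The following is a minimal boto3 sketch; the layer version ARN is a placeholder, and note that this call replaces the function’s entire layer list:

import boto3

lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='License-Plate-Match-cloud',
    # Placeholder: use the LayerVersionArn returned when you published the layer
    Layers=['arn:aws:lambda:us-west-2:123456789012:layer:requests:1'])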

Testing the system

You are now ready to test the system. The following video is an example of the system in action.

Conclusion

This post demonstrated how to use ML at the edge to solve a real-life problem. With the help of AWS DeepLens, you can detect a car object, capture its frame, and upload it to an S3 bucket. You can use Lambda to call Amazon Rekognition to read the license plate and check if it exists in a DynamoDB table, and make a third-party API call to open the garage door.

This project showcases the power of the AWS DeepLens device in introducing developers to ML and IoT. You can build a similar project in a short amount of time. Please share your experiences and any questions in the comments.


About the Authors

Amit Mukherjee is a Partner Solutions Architect with AWS. He provides architectural guidance to help partners achieve success in the cloud. He has special interest in AI and machine learning. In his spare time, he enjoys spending quality time with his family.

Georges Leschener is a Partner Solutions Architect in the Global System Integrator (GSI) team at Amazon Web Services. He works with GSI partners to help migrate customers’ workloads to AWS Cloud, and designs and architects innovative solutions on AWS by applying AWS best practices.