Building a smart garage door opener with AWS DeepLens and Amazon Rekognition
Many industries, including retail, manufacturing, and healthcare, are adopting IoT-enabled devices and using AI or machine learning (ML) technologies to enable those devices to make human-like decisions without human intervention. You can apply many of the same ideas, powering IoT-enabled devices with AI/ML technologies, at home.
This post showcases how to use AWS DeepLens, Amazon Rekognition, and other AWS services to recognize a car’s license plate and trigger an IoT-based garage door opener. You could apply this solution to many other use cases, for example, in manufacturing, to help control the flow of robots or packages on a production floor. In the healthcare industry, you could use the solution in a hospital to allow or deny access to staff into restricted areas based on face recognition or reading and validating a unique code on their staff security badge.
Solution overview
The following diagram illustrates the architecture of the solution.
An AWS DeepLens device enables you to run deep learning at the edge. It captures video frames and runs them against an object detection model. When the model detects a car, it uploads a frame to Amazon S3. Storing a new image in the S3 bucket triggers an AWS Lambda function, which calls Amazon Rekognition to read the license plate and compare it to a list of allowed values in an Amazon DynamoDB table. If the function finds a match, it retrieves third-party API secrets from AWS Secrets Manager and calls a third-party API to open the garage door.
You may already have an IoT-enabled garage door, and most garage door openers provide some sort of API for opening and closing the door programmatically. For that reason, this post assumes you have an existing IoT-based garage door opener rather than building one from scratch.
This project uses the following AWS services:
- AWS DeepLens – A fully programmable, deep learning-enabled video camera optimized for Apache MXNet, TensorFlow, and Caffe. You can train your models in Amazon SageMaker, an ML platform to build, train, optimize, and host your models, and deploy them to AWS DeepLens.
- Amazon S3 – An object storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low cost.
- Lambda – An event-driven, serverless computing platform that runs code in response to events and automatically manages the computing resources the code requires.
- Amazon Rekognition – An image recognition deep learning service that detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images.
- DynamoDB – A fully managed NoSQL database service that supports key-value and document data structures.
- Secrets Manager – A secrets management service that protects access to your applications, services, and IT resources. This service enables you to easily rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle.
To implement the solution, complete the following steps:
- Deploy a sample object detection project
- Change the AWS DeepLens inference Lambda function
- Register your license plate in DynamoDB
- Store third-party API credentials in AWS Secrets Manager
- Create a Lambda function
- Test the system
Deploying a sample object detection project
To deploy the project, complete the following steps:
- Register your AWS DeepLens device.
- On the AWS DeepLens console, create a new project.
- For Project type, select Use a project template.
- For Project templates, select Object detection.
- Choose Next.
- Name the project car-license-plate-detector.
- Choose Create.
After you create the project, you deploy it to the AWS DeepLens device.
- Under Projects, choose the project you just created.
- Choose Deploy to device.
- On the Target device page, choose your registered AWS DeepLens device.
- Choose Review.
- Review the policy and choose Deploy.
The project takes up to a few minutes to deploy to AWS DeepLens.
- On the IAM console, add the AmazonS3FullAccess permissions to AWSDeepLensGreengrassGroupRole.
You can use the AWS DeepLens output streams to verify that your project has deployed successfully to the device.
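One way to check the project stream is to connect to the device over SSH and view it with mplayer. The following is a sketch based on the standard AWS DeepLens viewing procedure; the device IP address is a placeholder, and the stream file name can vary by project:

```
# Connect to the AWS DeepLens device (replace with your device's IP address)
ssh aws_cam@<device-ip-address>

# View the project output stream written by the inference Lambda function
# (assumes the default project stream location on the device)
mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/results.mjpeg
```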
Changing the AWS DeepLens inference Lambda function
After you deploy the sample object detection project into AWS DeepLens, you need to change the inference (edge) Lambda function to upload image frames to Amazon S3. Complete the following steps:
- Create an S3 bucket for the images.
- Use the default settings when creating the bucket and make sure to choose the same Region in which you configured your AWS DeepLens device.
- On the Lambda console, choose the deeplens-object-detection function.
- Remove the function code and replace it with the deeplens_lambda code from the GitHub repo.
- Replace bucket_name with your bucket name.
This step changes the inference code to upload images every 30 seconds to Amazon S3 when a car is detected.
If a car is detected, a frame is captured and saved to the S3 bucket by calling the push_to_s3 function.
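The full inference function is in the GitHub repo; the following is a minimal sketch of the upload helper only, assuming the frame arrives as an OpenCV image array and that bucket_name is set to the bucket you created. The key naming scheme is illustrative, and the 30-second throttling and model loop live in the full function:

```python
import time
import boto3
import cv2

s3 = boto3.client('s3')
bucket_name = 'your-bucket-name'  # replace with the bucket you created

def push_to_s3(frame):
    """Encode a captured frame as JPEG and upload it to Amazon S3."""
    try:
        # Timestamped key so each detected car produces a unique object
        key = 'car_{}.jpg'.format(int(time.time()))
        # Encode the OpenCV frame (a NumPy array) into an in-memory JPEG
        _, jpeg = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
        s3.put_object(Bucket=bucket_name, Key=key, Body=jpeg.tobytes())
    except Exception as e:
        print('Failed to push frame to S3: {}'.format(e))
```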
- Save the Lambda function and publish a new version of the code.
You can now go to your AWS DeepLens project and update the function on the device.
- On the AWS DeepLens console, choose your project.
- For Version, choose the latest version of your Lambda function.
- For Timeout, enter 300.
- Choose Save.
- Under Projects, choose the project.
- Choose Deploy.
The project can take up to a few minutes to deploy.
Registering your license plate in DynamoDB
In this step, you add your license plate number to a DynamoDB table. You can either do this step manually using the AWS CLI commands from the GitHub repo, or you can use a simple web application hosted on Amazon S3 that uses Amazon API Gateway and Lambda to insert the item into the table. This post uses the AWS CLI to insert the item. You can complete this step from your local computer, but you need the AWS CLI installed and configured on your machine. For more information, see What Is the AWS Command Line Interface?
Complete the following steps:
- Create a DynamoDB table.
- For Table name, enter CarInfo.
- For Partition key, enter LicensePlate.
- Choose Create.
You can also complete the preceding steps with the AWS CLI from your terminal.
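The exact commands are in the GitHub repo; the following is a minimal sketch of the table creation, assuming on-demand billing (the repo may use provisioned throughput instead):

```
aws dynamodb create-table \
  --table-name CarInfo \
  --attribute-definitions AttributeName=LicensePlate,AttributeType=S \
  --key-schema AttributeName=LicensePlate,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```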
Then insert your own license plate number into the CarInfo table, replacing the LicensePlate value with your car's license plate number.
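A minimal sketch of the insert follows; ABC1234 is a placeholder for your actual plate number:

```
aws dynamodb put-item \
  --table-name CarInfo \
  --item '{"LicensePlate": {"S": "ABC1234"}}'
```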
Storing API credentials in Secrets Manager
In this step, you use Secrets Manager to store third-party API credentials to protect the secrets. Complete the following steps:
- On the IAM console, under Roles, choose Create role.
- For AWS Service, choose Lambda.
- Choose Next: Permissions.
- Attach the following managed policies:
  - SecretsManagerReadWrite
  - AmazonS3FullAccess
  - AmazonRekognitionReadOnlyAccess
  - AmazonDynamoDBReadOnlyAccess
  - CloudWatchLogsFullAccess
You use this role later when you create the Lambda function.
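If you prefer the AWS CLI, the following sketch creates an equivalent role; the role name garage-door-lambda-role is a placeholder, not the name used in the post:

```
aws iam create-role \
  --role-name garage-door-lambda-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

for policy in SecretsManagerReadWrite AmazonS3FullAccess AmazonRekognitionReadOnlyAccess \
              AmazonDynamoDBReadOnlyAccess CloudWatchLogsFullAccess; do
  aws iam attach-role-policy \
    --role-name garage-door-lambda-role \
    --policy-arn arn:aws:iam::aws:policy/$policy
done
```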
- On the Secrets Manager console, choose Store a new secret.
- For Select secret type, select Other type of secrets.
- Under Secret key/value, enter the key username and the value of your third-party API user name.
- Enter the key password and the value of your third-party API password.
- Leave the other fields at their defaults.
- Proceed to store the secret.
- After you create the secret, record the secret ARN, which is passed later as an environment variable for the Lambda function.
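You can also create the secret with the AWS CLI; the secret name below is a placeholder, and the ARN in the command's output is the value you record for the SECRET_NAME environment variable:

```
aws secretsmanager create-secret \
  --name garage-door-api-credentials \
  --secret-string '{"username": "<3rd party API user name>", "password": "<3rd party API password>"}'
```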
Creating a new Lambda function
Create a new (cloud) Lambda function on the AWS side to call the Amazon Rekognition API to detect the license plate number and verify it with the DynamoDB table. If it finds a match, it calls the third-party API to open the garage door. Complete the following steps:
- On the Lambda console, create a new function called License-Plate-Match-cloud.
- For Runtime, choose Python 3.7.
- Under Permission, choose Use an existing role.
- Choose the role you created.
- Remove the function code and replace it with the License-Plate-Match-cloud code from the GitHub repo.
Different license plates have different formats, so you may need to adjust the PlateNumber logic, which reads the plate number from element id[1] of the Amazon Rekognition response.
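The full matching function is in the GitHub repo; the following is a minimal sketch of the idea, assuming the plate number appears as the second TextDetections entry and that the CarInfo table uses LicensePlate as its partition key (the function name and structure are illustrative, not the repo's exact code):

```python
import boto3

rekognition = boto3.client('rekognition')
dynamodb = boto3.client('dynamodb')

confidence_threshold = 70  # discard low-confidence text detections

def match_license_plate(bucket, key):
    """Read the plate number from an S3 image and check it against the CarInfo table."""
    # Confirm a car is present in the frame before reading any text
    labels = rekognition.detect_labels(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}},
        MinConfidence=50
    )
    if not any(label['Name'] == 'Car' for label in labels['Labels']):
        return False

    # Detect text in the frame and read the plate number
    response = rekognition.detect_text(
        Image={'S3Object': {'Bucket': bucket, 'Name': key}}
    )
    detections = response['TextDetections']
    # Assumption: the plate number is the second detected text element (id[1])
    if len(detections) < 2 or detections[1]['Confidence'] < confidence_threshold:
        return False
    plate_number = detections[1]['DetectedText']

    # Check the plate against the allowed values in DynamoDB
    item = dynamodb.get_item(
        TableName='CarInfo',
        Key={'LicensePlate': {'S': plate_number}}
    )
    return 'Item' in item
```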
This code uses a confidence score, which is a number between 0 and 100 that indicates the probability that a given prediction is correct. For this post, you use a MinConfidence of 50 when detecting objects and a confidence_threshold of 70 when detecting text from an image, to discard false positive results. The optimum threshold depends on the application. In many cases, you get the best user experience by setting minimum confidence values higher than the default.
This post also adds a third-party module called myq.py, which you need to import in lambda_function.py as import myq. This myq.py file can vary depending on which garage door opener you use. The module retrieves third-party API secrets from Secrets Manager and calls the respective company's API to open or close the garage door. You can download the module code from the GitHub repo.
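The myq.py in the repo targets a specific opener's API; the following is a minimal sketch of the pattern it follows, assuming the vendor exposes a login endpoint and a device command endpoint (the endpoint paths, payloads, and SecurityToken field are illustrative assumptions, not a real vendor API). It reads its configuration from the environment variables described in the next step:

```python
import json
import os
import boto3
import requests

def get_credentials():
    """Fetch the third-party API user name and password from Secrets Manager."""
    client = boto3.client('secretsmanager')
    secret = client.get_secret_value(SecretId=os.environ['SECRET_NAME'])
    return json.loads(secret['SecretString'])

def open_garage_door():
    """Authenticate against the vendor API and send an open command (illustrative)."""
    creds = get_credentials()
    base_uri = os.environ['BASE_URI']

    # Hypothetical login call; the real path and payload depend on your opener's API
    login = requests.post(
        base_uri + os.environ['BASE_ENDPOINT'],
        json={'username': creds['username'], 'password': creds['password']},
    )
    login.raise_for_status()
    token = login.json().get('SecurityToken')

    # Hypothetical device command call to open the door
    response = requests.put(
        base_uri + os.environ['DEVICE_SET_ENDPOINT'],
        headers={'SecurityToken': token},
        json={'desired_state': 'open'},
    )
    response.raise_for_status()
```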
- Under Environment variables, enter the following keys and values:
  - For the key SECRET_NAME, enter the ARN of the secret that you created.
  - For the key APP_ID, enter the third-party garage door opener device app ID.
  - For the key DEVICE_LIST_ENDPOINT, enter the third-party garage door opener list endpoint.
  - For the key DEVICE_SET_ENDPOINT, enter the third-party garage door opener set endpoint.
  - For the key BASE_URI, enter the third-party garage door opener base URI.
  - For the key BASE_ENDPOINT, enter the third-party garage door opener base endpoint.
- Change the Timeout parameter value to 15 minutes.
You also need to set up a trigger for this function on the S3 bucket that you created, so that whenever AWS DeepLens uploads a new image to the bucket, the function is invoked.
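You can add the trigger from the Lambda console (Add trigger, S3, your bucket, all object create events), or sketch it with the AWS CLI as follows; the bucket name, account ID, and Region are placeholders:

```
# Allow Amazon S3 to invoke the function
aws lambda add-permission \
  --function-name License-Plate-Match-cloud \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::your-bucket-name

# Invoke the function for every new object created in the bucket
aws s3api put-bucket-notification-configuration \
  --bucket your-bucket-name \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:License-Plate-Match-cloud",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
```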
- Create a new Lambda layer for the Python requests module, which the myq.py file uses to make HTTP requests.
For more information, see AWS Lambda Layers.
This step is required because AWS is removing the vendored version of requests from Botocore. For more information, see Removing the vendored version of requests from Botocore.
- Create the requests Lambda layer by packaging the module and publishing it with the AWS CLI.
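The following is a minimal sketch, assuming a Python 3.7 runtime and a layer named requests (the repo may package it differently):

```
# Package the requests module in the directory layout Lambda layers expect
mkdir -p python
pip install requests -t python/
zip -r requests-layer.zip python

# Publish the layer
aws lambda publish-layer-version \
  --layer-name requests \
  --zip-file fileb://requests-layer.zip \
  --compatible-runtimes python3.7
```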
After you create the layer, add it to the License-Plate-Match-cloud function.
- On the Lambda console, choose your function.
- Choose Layers.
- Choose Add a Layer.
- From the drop-down menu, choose the requests layer.
- Choose Add.
- Choose Save.
Testing the system
You are now ready to test the system. The following video is an example of the system in action.
Conclusion
This post demonstrated how to use ML at the edge to solve a real-life problem. With the help of AWS DeepLens, you can detect a car object, capture its frame, and upload it to an S3 bucket. You can use Lambda to call Amazon Rekognition to read the license plate and check if it exists in a DynamoDB table, and make a third-party API call to open the garage door.
This project showcases the power of the AWS DeepLens device in introducing developers to ML and IoT. You can build a similar project in a short amount of time. Please share your experiences and any questions in the comments.
About the Authors
Amit Mukherjee is a Partner Solutions Architect with AWS. He provides architectural guidance to help partners achieve success in the cloud. He has special interest in AI and machine learning. In his spare time, he enjoys spending quality time with his family.
Georges Leschener is a Partner Solutions Architect in the Global System Integrator (GSI) team at Amazon Web Services. He works with GSI partners to help migrate customers’ workloads to AWS Cloud, and designs and architects innovative solutions on AWS by applying AWS best practices.