Build reusable, serverless inference functions for your Amazon SageMaker models using AWS Lambda layers and containers

In AWS, you can host a trained model in multiple ways, such as via Amazon SageMaker deployment, deploying to an Amazon Elastic Compute Cloud (Amazon EC2) instance (running Flask + NGINX, for example), AWS Fargate, Amazon Elastic Kubernetes Service (Amazon EKS), or AWS Lambda. SageMaker provides convenient model hosting services.
Shared by AWS Machine Learning, June 1, 2021
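For context, a serverless inference function along these lines usually amounts to a small Lambda handler that loads the trained model artifact from the deployment container image or an attached layer and runs predictions per request. The following is a minimal sketch, assuming a scikit-learn model serialized with joblib at a hypothetical path; the handler name, model path, and payload shape are illustrative, not the article's actual implementation.

```python
import json
import joblib

# Hypothetical location of the model artifact packaged in the container
# image or a Lambda layer (not taken from the article).
MODEL_PATH = "/opt/ml/model/model.joblib"

# Load the model once per cold start so warm invocations reuse it.
model = joblib.load(MODEL_PATH)

def handler(event, context):
    """Lambda entry point; expects a JSON body with a 'features' list."""
    body = json.loads(event.get("body", "{}"))
    features = body.get("features", [])
    prediction = model.predict([features]).tolist()
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```

Loading the model at module scope, rather than inside the handler, is the usual way to amortize the artifact load across warm invocations of the same Lambda execution environment.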