Deploy multiple machine learning models for inference on AWS Lambda and Amazon EFS
You can deploy machine learning (ML) models for real-time inference even when they depend on large libraries or pre-trained models. Common use cases include sentiment analysis, image classification, and search applications. These ML jobs typically vary in duration and require instant scaling to meet peak demand. To serve latency-sensitive inference requests without managing servers, you can run the models on AWS Lambda and keep the large model files and libraries on an Amazon EFS file system mounted to the function.
AWS Machine Learning, September 29, 2021
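The sketch below illustrates the general pattern: a Lambda handler that loads pre-trained models from an EFS mount and caches them across warm invocations. The mount path (/mnt/ml/models), model file names, joblib serialization format, and request shape are assumptions for illustration, not code from the original post.

```python
# Minimal sketch: serve multiple models from an EFS mount inside a Lambda function.
# Assumes an EFS access point is mounted at /mnt/ml and models were saved with joblib.
import json
import os

import joblib  # shipped in the deployment package or a Lambda layer

EFS_MODEL_DIR = os.environ.get("MODEL_DIR", "/mnt/ml/models")

# Cache loaded models so EFS is only read on cold start or on the
# first request for a given model name.
_model_cache = {}


def _load_model(name):
    if name not in _model_cache:
        path = os.path.join(EFS_MODEL_DIR, f"{name}.joblib")
        _model_cache[name] = joblib.load(path)
    return _model_cache[name]


def handler(event, context):
    body = json.loads(event.get("body", "{}"))
    model_name = body["model"]      # e.g. "sentiment" or "image-classifier" (hypothetical)
    features = body["features"]     # numeric feature vector

    model = _load_model(model_name)
    prediction = model.predict([features])[0]

    return {
        "statusCode": 200,
        "body": json.dumps({
            "model": model_name,
            "prediction": prediction.tolist() if hasattr(prediction, "tolist") else prediction,
        }),
    }
```

Keeping the models on EFS rather than in the deployment package sidesteps Lambda's package size limits, and the in-memory cache avoids re-reading the file system on every warm invocation.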