Optimizing costs in Amazon Elastic Inference with TensorFlow

Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances and reduce the cost of running deep learning inference by up to 75 percent. The EIPredictor API makes it easy to use Elastic Inference. In this post, we use the EIPredictor and describe a step-by-step example for using … (see the usage sketch after this entry)

Read More
Shared by AWS Machine Learning July 11, 2019
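The post centers on the EIPredictor API from the Elastic Inference-enabled TensorFlow distribution. The package import path, constructor arguments, and SavedModel path below are assumptions for illustration, not a verified listing from the post; treat this as a minimal sketch of the calling pattern.

```python
# Minimal sketch: running a TensorFlow SavedModel through EIPredictor on an
# attached Elastic Inference accelerator. The import path, constructor
# arguments, and model directory are assumptions, not confirmed by the post.
import numpy as np
from ei_for_tf.python.predictor.ei_predictor import EIPredictor  # assumed import path

# Point the predictor at a SavedModel directory and the first attached
# EI accelerator (accelerator_id=0).
eia_predictor = EIPredictor(
    model_dir='/tmp/ssd_resnet50_v1_coco/1/',  # hypothetical SavedModel path
    accelerator_id=0,
)

# The predictor is invoked like tf.contrib.predictor: a dict keyed by the
# SavedModel's input names, returning a dict of output tensors.
image_batch = np.random.randint(0, 255, size=(1, 300, 300, 3), dtype=np.uint8)
outputs = eia_predictor({'inputs': image_batch})
print(list(outputs.keys()))
```

The cost argument in the post comes from pairing a CPU instance with a right-sized accelerator instead of provisioning a full GPU instance for inference.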

Build a custom vocabulary to enhance speech-to-text transcription accuracy with Amazon Transcribe

Amazon Transcribe is a fully managed automatic speech recognition (ASR) service that makes it easy for developers to add speech-to-text capabilities to applications. Depending on your use case, you may have domain-specific terminology that doesn't transcribe properly (e.g., "EBITDA" or "myocardial infarction"). In this post, we will show you how … (see the custom-vocabulary sketch after this entry)

Read More
Shared by AWS Machine Learning July 3, 2019
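A custom vocabulary is created through the Amazon Transcribe API and then referenced when a transcription job starts. The sketch below uses boto3; the vocabulary name, job name, bucket, and phrase list are illustrative placeholders rather than values from the post.

```python
# Sketch: create a custom vocabulary of domain-specific terms and reference it
# in a transcription job. Names, bucket, and media URI are placeholders.
import boto3

transcribe = boto3.client('transcribe')

# Register domain-specific terms so the ASR decoder prefers them.
# Multi-word phrases in a phrase list are joined with hyphens.
transcribe.create_vocabulary(
    VocabularyName='domain-terms',            # hypothetical name
    LanguageCode='en-US',
    Phrases=['EBITDA', 'myocardial-infarction'],
)

# Once the vocabulary reaches the READY state, attach it to a job via Settings.
transcribe.start_transcription_job(
    TranscriptionJobName='earnings-call-001',  # hypothetical name
    LanguageCode='en-US',
    MediaFormat='mp3',
    Media={'MediaFileUri': 's3://my-bucket/earnings-call.mp3'},  # placeholder URI
    Settings={'VocabularyName': 'domain-terms'},
)
```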

Deploying PyTorch inference with MXNet Model Server

Training and inference are crucial components of a machine learning (ML) development cycle. During the training phase, you teach a model to address a specific problem. Through this process, you obtain binary model files ready for use in production. For inference, you can choose among several framework-specific solutions for … (see the handler sketch after this entry)

Read More
Shared by AWS Machine Learning July 2, 2019
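MXNet Model Server (MMS) can serve models from other frameworks through a custom service handler packaged into a model archive. The sketch below is a minimal PyTorch handler following the MMS handle(data, context) contract; the weight file name and model architecture are illustrative assumptions, not the post's code.

```python
# custom_handler.py - minimal sketch of an MMS custom service that runs
# PyTorch inference. The weight file and architecture are illustrative.
import json
import os

import torch
import torchvision.models as models


class PyTorchHandler:
    def __init__(self):
        self.model = None

    def initialize(self, context):
        # MMS exposes the extracted model archive directory via system properties.
        model_dir = context.system_properties.get('model_dir')
        weights_path = os.path.join(model_dir, 'resnet18.pth')  # hypothetical file
        self.model = models.resnet18()
        self.model.load_state_dict(torch.load(weights_path, map_location='cpu'))
        self.model.eval()

    def inference(self, data):
        # 'data' is a list of requests; each carries its payload under 'body'.
        tensor = torch.tensor(json.loads(data[0].get('body')), dtype=torch.float32)
        with torch.no_grad():
            output = self.model(tensor)
        return [output.tolist()]


_service = PyTorchHandler()


def handle(data, context):
    # Entry point referenced in the model archive manifest
    # (e.g. --handler custom_handler:handle).
    if _service.model is None:
        _service.initialize(context)
    if data is None:
        return None
    return _service.inference(data)
```

In the typical workflow, a handler like this is bundled with the weights using the model-archiver CLI and the resulting .mar file is loaded by mxnet-model-server from its model store.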

Support for Apache MXNet 1.4 and Model Server in Amazon SageMaker

Apache MXNet is an open-source deep learning framework used to train and deploy deep neural networks. Data scientists and machine learning (ML) developers love MXNet for its flexibility and efficiency when building deep learning models. Amazon SageMaker is committed to improving the customer experience for all ML frameworks and libraries, including … (see the deployment sketch after this entry)

Read More
Shared by AWS Machine Learning July 1, 2019
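With MXNet 1.4 available in Amazon SageMaker, a trained model artifact can be hosted by pinning framework_version in the SageMaker Python SDK. The sketch below follows the 1.x-era SDK interface current when this post was shared; the S3 path, IAM role, and entry-point script are placeholders.

```python
# Sketch: hosting an MXNet 1.4 model on SageMaker (1.x-era Python SDK).
# The S3 artifact, IAM role, and entry-point script name are placeholders.
import sagemaker
from sagemaker.mxnet import MXNetModel

role = sagemaker.get_execution_role()

model = MXNetModel(
    model_data='s3://my-bucket/mxnet/model.tar.gz',  # placeholder artifact
    role=role,
    entry_point='inference.py',     # hypothetical script defining the serving functions
    framework_version='1.4.1',
    py_version='py3',
)

# Deploy behind a real-time endpoint backed by MXNet Model Server.
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
print(predictor.endpoint)
```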