Deploy multiple serving containers on a single instance using Amazon SageMaker multi-container endpoints

Amazon SageMaker is a fully managed service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning (ML) models built on different frameworks. SageMaker real-time inference endpoints are fully managed and serve predictions with low latency. This post introduces…

Shared by AWS Machine Learning August 17, 2021
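
For reference, a minimal boto3 sketch of the pattern the post covers: one SageMaker model with two serving containers in Direct invocation mode behind a single endpoint, with each request routed to a container by hostname. The image URIs, S3 paths, names, and IAM role below are placeholders, not the post's exact setup.

```python
import boto3

sm = boto3.client("sagemaker")
runtime = boto3.client("sagemaker-runtime")

# Placeholder role, images, and artifacts -- replace with your own.
role_arn = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# One model, two serving containers; Direct mode lets callers target a
# specific container on each invocation.
sm.create_model(
    ModelName="multi-framework-model",
    ExecutionRoleArn=role_arn,
    Containers=[
        {
            "ContainerHostname": "tensorflow-container",
            "Image": "<tensorflow-serving-image-uri>",
            "ModelDataUrl": "s3://my-bucket/tf-model.tar.gz",
        },
        {
            "ContainerHostname": "pytorch-container",
            "Image": "<pytorch-serving-image-uri>",
            "ModelDataUrl": "s3://my-bucket/pt-model.tar.gz",
        },
    ],
    InferenceExecutionConfig={"Mode": "Direct"},
)

# Both containers share the instances behind a single endpoint.
sm.create_endpoint_config(
    EndpointConfigName="multi-container-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "multi-framework-model",
            "InstanceType": "ml.c5.xlarge",
            "InitialInstanceCount": 1,
        }
    ],
)
sm.create_endpoint(
    EndpointName="multi-container-endpoint",
    EndpointConfigName="multi-container-config",
)

# Route a request to one of the containers by its hostname.
response = runtime.invoke_endpoint(
    EndpointName="multi-container-endpoint",
    TargetContainerHostname="tensorflow-container",
    ContentType="application/json",
    Body=b'{"instances": [[1.0, 2.0, 3.0]]}',
)
print(response["Body"].read())
```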

How KM adds value by increasing the learning rate

What is your organisation’s learning rate? What could KM increase this to? And what’s the value of the difference? Over time, with any product or process, the costs come down. The rate of decrease is known as the learning rate. One value proposition for Knowledge Management is to accelerate…

Shared by Nick Milton August 17, 2021
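
For context, the learning rate here is the classic experience-curve figure: the percentage by which unit cost falls each time cumulative output doubles. A small Python sketch of the standard learning-curve formula (Wright's law), with purely illustrative numbers, shows how the value of a faster learning rate could be estimated; the post itself may frame the calculation differently.

```python
import math

def unit_cost(first_unit_cost, units_produced, learning_rate):
    """Wright's learning-curve model: each doubling of cumulative output
    cuts unit cost by `learning_rate` (0.10 means a 10% drop per doubling)."""
    exponent = math.log2(1.0 - learning_rate)
    return first_unit_cost * units_produced ** exponent

# Illustrative numbers only: a 10% vs. a 15% learning rate after 1,000 units.
baseline = unit_cost(100.0, 1000, 0.10)   # ~ 35.0
improved = unit_cost(100.0, 1000, 0.15)   # ~ 19.8
print(f"unit cost at a 10% learning rate: {baseline:.1f}")
print(f"unit cost at a 15% learning rate: {improved:.1f}")
print(f"value of the faster learning rate, per unit: {baseline - improved:.1f}")
```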

Getting started with Amazon SageMaker Feature Store

In a machine learning (ML) journey, one crucial step before building any ML model is to transform your data and design features from it so that it is machine-readable. This step is known as feature engineering. It can include one-hot encoding categorical variables, converting text values…

Shared by AWS Machine Learning August 12, 2021
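
As a small illustration of the feature-engineering step the post describes, the sketch below one-hot encodes a categorical column with pandas and then registers and ingests the result with the SageMaker Python SDK's Feature Store classes. The feature group name, IAM role, and data are hypothetical placeholders.

```python
import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

# Toy data: one-hot encode the categorical column so every value is numeric.
df = pd.DataFrame(
    {
        "customer_id": [1, 2, 3],
        "plan": ["basic", "premium", "basic"],
        "monthly_spend": [20.0, 55.0, 18.5],
    }
)
df = pd.get_dummies(df, columns=["plan"], dtype=int)  # plan_basic, plan_premium
df["event_time"] = float(round(time.time()))  # Feature Store needs an event time

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

feature_group = FeatureGroup(name="customers-features", sagemaker_session=session)
feature_group.load_feature_definitions(data_frame=df)  # infer schema from the frame
feature_group.create(
    s3_uri=f"s3://{session.default_bucket()}/feature-store",
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)

# Creation is asynchronous; wait for the group before ingesting records.
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)
feature_group.ingest(data_frame=df, max_workers=1, wait=True)
```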

Run your TensorFlow job on Amazon SageMaker with a PyCharm IDE

As more machine learning (ML) workloads go into production, many organizations must bring them to market quickly and increase productivity across the ML model development lifecycle. However, that lifecycle is significantly different from an application development lifecycle. This is due in part to the amount…

Shared by AWS Machine Learning August 11, 2021
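
For orientation, a minimal sketch of what launching such a remote training job from an IDE typically looks like with the SageMaker Python SDK's TensorFlow estimator; the script name, role, hyperparameters, and S3 paths below are placeholder assumptions rather than the post's exact configuration.

```python
from sagemaker.tensorflow import TensorFlow

# Placeholder role, script, and data locations -- replace with your own.
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

estimator = TensorFlow(
    entry_point="train.py",          # your local training script
    source_dir="src",                # uploaded to the training container
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    framework_version="2.4.1",
    py_version="py37",
    hyperparameters={"epochs": 10, "batch-size": 64},
)

# Launch the managed training job from the IDE; logs stream back to the console.
estimator.fit({"training": "s3://my-bucket/training-data/"})
```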