Unlock personalized experiences powered by AI using Amazon Personalize and Amazon OpenSearch Service

OpenSearch is a scalable, flexible, and extensible open source software suite for search, analytics, security monitoring, and observability applications, licensed under the Apache 2.0 license. Amazon OpenSearch Service is a fully managed service that makes it straightforward to deploy, scale, and operate OpenSearch in the AWS Cloud. OpenSearch uses… (A minimal query sketch follows this entry.)

Read More
Shared by AWS Machine Learning February 29, 2024
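
The entry above stays at the definitional level. For context, a plain lexical query against an OpenSearch Service domain looks roughly like the sketch below, using the opensearch-py client; the domain endpoint, index, and field names are assumptions, and authentication is omitted. The linked post is about personalizing results like these with Amazon Personalize.

# Minimal OpenSearch query sketch; endpoint, index, and fields are hypothetical.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    use_ssl=True,  # a real domain also needs http_auth or SigV4 signing, omitted here
)

response = client.search(
    index="products",
    body={"query": {"match": {"title": "running shoes"}}, "size": 10},
)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("title"))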

Build a robust text-to-SQL solution generating complex queries, self-correcting, and querying diverse data sources

Structured Query Language (SQL) is a complex language that requires an understanding of databases and metadata. Today, generative AI can enable people without SQL knowledge to query databases. This task, called text-to-SQL, uses natural language processing (NLP) to convert plain-text questions into semantically correct SQL queries. (A prompt-and-validate sketch follows this entry.)

Read More
Shared by AWS Machine Learning February 28, 2024
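
As a rough illustration of the text-to-SQL pattern described above, the sketch below builds a schema-aware prompt, asks a model for SQL, and sanity-checks the result before executing it. The generate_sql function is a hypothetical stand-in for an LLM call (for example via Amazon Bedrock); the schema, data, and question are invented, not taken from the post.

# Text-to-SQL sketch: prompt with the schema, then validate and run the generated SQL.
import sqlite3

SCHEMA = "CREATE TABLE orders (id INTEGER, customer TEXT, total REAL, order_date TEXT)"

def build_prompt(question: str) -> str:
    # Schema-in-context prompting: the model only sees the tables we describe.
    return (
        f"Given the schema:\n{SCHEMA}\n"
        f"Write one SQLite SELECT statement that answers: {question}\n"
        "Return only the SQL."
    )

def generate_sql(prompt: str) -> str:
    # Hypothetical LLM call; hardcoded here so the sketch runs on its own.
    return "SELECT customer, SUM(total) AS spend FROM orders GROUP BY customer ORDER BY spend DESC LIMIT 5"

def answer(question: str) -> list:
    sql = generate_sql(build_prompt(question)).strip().rstrip(";")
    if not sql.lower().startswith("select"):
        raise ValueError("refusing to run non-SELECT statement: " + sql)
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.execute("INSERT INTO orders VALUES (1, 'Ana', 120.0, '2024-02-01')")
    return conn.execute(sql).fetchall()

print(answer("Who are the top five customers by total spend?"))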

Techniques and approaches for monitoring large language models on AWS

Large Language Models (LLMs) have revolutionized the field of natural language processing (NLP), improving tasks such as language translation, text summarization, and sentiment analysis. However, as these models continue to grow in size and complexity, monitoring their performance and behavior has become increasingly challenging. Monitoring the performance and behavior… (A CloudWatch metrics sketch follows this entry.)

Read More
Shared by AWS Machine Learning February 26, 2024
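
The monitoring entry above is conceptual. One concrete pattern is to record per-request latency and output size and publish them as custom Amazon CloudWatch metrics; the sketch below assumes boto3 credentials are configured, and the namespace, metric names, and invoke_fn wrapper are all invented for illustration.

# Sketch: publish simple LLM serving metrics (latency, output tokens) to CloudWatch.
import time
import boto3

cloudwatch = boto3.client("cloudwatch")

def record_llm_call(invoke_fn, prompt: str) -> str:
    start = time.time()
    completion = invoke_fn(prompt)  # invoke_fn wraps whatever model is being monitored
    latency_ms = (time.time() - start) * 1000
    cloudwatch.put_metric_data(
        Namespace="LLMMonitoring",  # hypothetical namespace
        MetricData=[
            {"MetricName": "LatencyMs", "Value": latency_ms, "Unit": "Milliseconds"},
            {"MetricName": "OutputTokens", "Value": float(len(completion.split())), "Unit": "Count"},
        ],
    )
    return completion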

Streamline diarization using AI as an assistive technology: ZOO Digital’s story

ZOO Digital provides end-to-end localization and media services to adapt original TV and movie content to different languages, regions, and cultures. It makes globalization easier for the world’s best content creators. Trusted by the biggest names in entertainment, ZOO Digital delivers high-quality localization and media services at scale, including…

Read More
Shared by AWS Machine Learning February 20, 2024

Run ML inference on unplanned and spiky traffic using Amazon SageMaker multi-model endpoints

Amazon SageMaker multi-model endpoints (MMEs) are a fully managed capability of SageMaker inference that allows you to deploy thousands of models on a single endpoint. Previously, MMEs allocated CPU computing power to models statically, regardless of the model traffic load, using Multi Model Server (MMS) as the model server. (An invocation sketch follows this entry.)

Read More
Shared by AWS Machine Learning February 19, 2024
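
For the multi-model endpoint entry above, the practical detail is that each request names the model artifact to serve via the TargetModel parameter, and SageMaker loads that artifact from the endpoint's S3 prefix on demand. A minimal invocation sketch follows; the endpoint name, artifact name, and payload shape are assumptions.

# Sketch: invoke a SageMaker multi-model endpoint, choosing the model per request.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-mme-endpoint",   # hypothetical MME endpoint
    TargetModel="model-042.tar.gz",   # which artifact under the MME S3 prefix to serve
    ContentType="application/json",
    Body=json.dumps({"inputs": [1.0, 2.0, 3.0]}),
)
print(json.loads(response["Body"].read()))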