Amazon Bedrock Flows is now generally available with enhanced safety and traceability

Today, we are excited to announce the general availability of Amazon Bedrock Flows (previously known as Prompt Flows). With Bedrock Flows, you can quickly build and execute complex generative AI workflows without writing code. Key benefits include simplified generative AI workflow development with an intuitive visual interface and seamless integration…
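
For readers who want to call a published flow programmatically rather than from the visual builder, here is a minimal sketch using the boto3 bedrock-agent-runtime client. The flow ID, alias ID, and input node names are placeholders, and the exact request and response shapes shown are assumptions that should be checked against the current InvokeFlow documentation.

```python
import boto3

# Runtime client for invoking published Bedrock flows.
client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

# Placeholder identifiers: substitute the flow ID and alias ID of your own flow.
response = client.invoke_flow(
    flowIdentifier="FLOW_ID",
    flowAliasIdentifier="FLOW_ALIAS_ID",
    inputs=[
        {
            "content": {"document": "Summarize our Q3 customer feedback."},
            "nodeName": "FlowInputNode",   # assumed default input node name
            "nodeOutputName": "document",
        }
    ],
)

# The response is an event stream; flowOutputEvent entries carry the flow's output.
for event in response["responseStream"]:
    if "flowOutputEvent" in event:
        print(event["flowOutputEvent"]["content"]["document"])
```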

Read More
Shared by AWS Machine Learning November 23, 2024

Improve factual consistency with LLM Debates

In this post, we demonstrate the potential of large language model (LLM) debates using a supervised dataset with ground truth. In this LLM debate, two debater LLMs each take one side of an argument and defend it based on the previous arguments for N (= 3) rounds…
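
To make the round structure concrete, here is a minimal sketch of the debate loop under stated assumptions: `ask_llm` is a hypothetical helper wrapping whatever model endpoint you use (it is not from the post), and a judge model reads the full transcript after the N rounds so its verdict can be compared against the ground truth label.

```python
N_ROUNDS = 3  # each debater argues its side for N (= 3) rounds


def ask_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical helper: call your LLM endpoint (e.g., via Amazon Bedrock)."""
    raise NotImplementedError


def run_debate(question: str, side_a: str, side_b: str) -> str:
    transcript = []
    for rnd in range(1, N_ROUNDS + 1):
        for side in (side_a, side_b):
            history = "\n".join(transcript) or "(no previous arguments)"
            argument = ask_llm(
                system_prompt=f"You are a debater arguing that: {side}",
                user_prompt=(
                    f"Question: {question}\n"
                    f"Previous arguments:\n{history}\n"
                    f"Round {rnd}: defend your side against the previous arguments."
                ),
            )
            transcript.append(f"[Round {rnd}] {side}: {argument}")

    # Judge the debate; compare this verdict to the ground truth label offline.
    return ask_llm(
        system_prompt="You are an impartial judge.",
        user_prompt=(
            f"Question: {question}\nDebate transcript:\n" + "\n".join(transcript)
            + "\nWhich side argued more convincingly? Answer with that side's claim only."
        ),
    )
```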

Read More
Shared by AWS Machine Learning November 23, 2024

Amazon SageMaker Inference now supports G6e instances

As the demand for generative AI continues to grow, developers and enterprises seek more flexible, cost-effective, and powerful accelerators to meet their needs. Today, we are thrilled to announce the availability of G6e instances, powered by NVIDIA’s L40S Tensor Core GPUs, on Amazon SageMaker…
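
As a rough illustration of selecting the new instance family at deployment time, the sketch below uses the SageMaker Python SDK. The container image URI, model artifact path, IAM role, and the specific ml.g6e size are placeholders and assumptions, not recommendations from the post.

```python
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()

# Placeholders: supply your own inference container, model artifact, and IAM role.
model = Model(
    image_uri="<inference-container-image-uri>",
    model_data="s3://<your-bucket>/<model-artifacts>/model.tar.gz",
    role="<sagemaker-execution-role-arn>",
    sagemaker_session=session,
)

# Deploy on a G6e (NVIDIA L40S) instance; the size here is only an example.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g6e.xlarge",
    endpoint_name="g6e-demo-endpoint",
)

print(predictor.endpoint_name)
```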

Read More
Shared by AWS Machine Learning November 23, 2024

Accelerating Mixtral MoE fine-tuning on Amazon SageMaker with QLoRA

Companies of all sizes and across industries are using large language models (LLMs) to develop generative AI applications that provide innovative experiences for customers and employees. However, building or fine-tuning these pre-trained LLMs on extensive datasets demands substantial computational resources and engineering effort…
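
To show what a QLoRA setup typically looks like in code, here is a minimal sketch using the Hugging Face transformers, bitsandbytes, and peft libraries. The hyperparameters and target modules are illustrative assumptions rather than the exact configuration from the post, and in practice the same script would usually be launched as a SageMaker training job.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_ID = "mistralai/Mixtral-8x7B-v0.1"

# 4-bit NF4 quantization: the "Q" in QLoRA keeps the frozen base weights small.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters are the only trainable parameters; the values below are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```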

Read More
Shared by AWS Machine Learning November 23, 2024