Improving traffic evacuations: A case study

Posted by Damien Pierce, Software Engineer, and John Anderson, Senior Research Director, Google Research

Some cities or communities develop an evacuation plan to be used in case of an emergency. There are a number of reasons why city officials might enact their plan, a primary one being a natural…

Shared by Google AI Technology October 16, 2023

Batch calibration: Rethinking calibration for in-context learning and prompt engineering

Posted by Han Zhou, Student Researcher, and Subhrajit Roy, Senior Research Scientist, Google Research

Prompting large language models (LLMs) has become an efficient learning paradigm for adapting LLMs to a new task by conditioning on human-designed instructions. The remarkable in-context learning (ICL) ability of LLMs also leads to efficient…
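As background on the in-context learning setup the post refers to: an LLM is conditioned on an instruction plus a few labeled demonstrations placed directly in the prompt, and then asked to label a new query. A minimal sketch of assembling such a few-shot prompt (the task, examples, and labels below are hypothetical illustrations, not from the post):

```python
# Sketch of few-shot in-context learning (ICL) prompt construction.
# The sentiment task and example texts are made up for illustration.

def build_icl_prompt(instruction, demonstrations, query):
    """Assemble a few-shot prompt: instruction, labeled demos, then the query."""
    lines = [instruction, ""]
    for text, label in demonstrations:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The model is expected to continue the text after the final "Label:".
    lines.append(f"Input: {query}")
    lines.append("Label:")
    return "\n".join(lines)

demos = [
    ("The movie was wonderful.", "positive"),
    ("I wasted two hours.", "negative"),
]
prompt = build_icl_prompt(
    "Classify the sentiment of each input as positive or negative.",
    demos,
    "A delightful surprise.",
)
print(prompt)
```

The post's focus, batch calibration, concerns correcting the biases such prompts induce in the model's output distribution; the sketch above only shows the prompting side of the setup.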

Shared by Google AI Technology October 13, 2023

Developing industrial use cases for physical simulation on future error-corrected quantum computers

Posted by Nicholas Rubin, Senior Research Scientist, and Ryan Babbush, Head of Quantum Algorithms, Quantum AI Team

If you’ve paid attention to the quantum computing space, you’ve heard the claim that in the future, quantum computers will solve certain problems exponentially more efficiently than classical computers can. They have…

Shared by Google AI Technology October 12, 2023

New ways to get inspired with generative AI in Search

We’re testing new ways to get started on something you need to do — like creating an image that can bring an idea to life, or a written draft when you need a starting point…

Originally posted at blog.google/technology/ai/.

Improve performance of Falcon models with Amazon SageMaker

What is the optimal framework and configuration for hosting large language models (LLMs) in text-generation applications built on generative AI? Despite the abundance of options for serving LLMs, this is a hard question to answer due to the size of the models, varying model architectures, performance requirements of applications, and more.

Shared by AWS Machine Learning October 11, 2023

Whisper models for automatic speech recognition now available in Amazon SageMaker JumpStart

Today, we’re excited to announce that the OpenAI Whisper foundation model is available for customers using Amazon SageMaker JumpStart. Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680,000 hours of labelled data, Whisper models demonstrate a strong ability to generalize to many…

Shared by AWS Machine Learning October 10, 2023