Looking back at wildfire research in 2023

Posted by Yi-Fan Chen, Software Engineer, and Carla Bromberg, Program Lead, Google Research

Wildfires are becoming larger and affecting more and more communities around the world, often resulting in large-scale devastation. Just this year, communities have experienced catastrophic wildfires in Greece, Maui, and Canada, to name a few. While…

Shared by Google AI Technology October 25, 2023

Batch calibration: Rethinking calibration for in-context learning and prompt engineering

Posted by Han Zhou, Student Researcher, and Subhrajit Roy, Senior Research Scientist, Google Research

Prompting large language models (LLMs) has become an efficient learning paradigm for adapting LLMs to a new task by conditioning on human-designed instructions. The remarkable in-context learning (ICL) ability of LLMs also leads to efficient…

Shared by Google AI Technology October 13, 2023
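The batch-calibration idea named in the title can be illustrated with a minimal sketch: estimate the contextual bias of an ICL prompt by averaging the model's per-class scores over a batch of test inputs, then subtract that estimate from each example's scores. The function name and toy scores below are illustrative, not taken from the post.

```python
import numpy as np

def batch_calibrate(log_probs):
    """log_probs: (N, C) array of per-example class log-scores from an LLM.

    Subtract the batch-mean score of each class, using the batch itself
    as an estimate of the prompt's contextual bias toward some classes."""
    bias = log_probs.mean(axis=0, keepdims=True)  # (1, C) estimated prior
    return log_probs - bias

# Toy scores where class 0 is systematically favored by the prompt.
scores = np.array([[2.0, 0.5],
                   [1.8, 1.9],
                   [2.1, 0.2]])
calibrated = batch_calibrate(scores)
print(calibrated.argmax(axis=1))  # prints [0 1 0]
```

After calibration the per-class means are zero over the batch, so predictions reflect relative rather than absolute scores.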

Re-weighted gradient descent via distributionally robust optimization

Posted by Ramnath Kumar, Pre-Doctoral Researcher, and Arun Sai Suggala, Research Scientist, Google Research

Deep neural networks (DNNs) have become essential for solving a wide range of tasks, from standard supervised learning (image classification using ViT) to meta-learning. The most commonly used paradigm for learning DNNs is empirical risk minimization (ERM), which…

Shared by Google AI Technology September 28, 2023
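The teaser contrasts ERM, which averages the loss uniformly over examples, with a distributionally robust alternative. A minimal sketch of the re-weighting idea, assuming the common KL-regularized DRO form in which per-example weights are a softmax of the losses (the helper name and temperature are illustrative):

```python
import numpy as np

def rgd_weights(losses, temperature=1.0):
    """Per-example weights from a KL-regularized DRO objective:
    harder examples (higher loss) get exponentially larger weight."""
    z = np.exp((losses - losses.max()) / temperature)  # stable softmax
    return z / z.sum()

losses = np.array([0.1, 0.5, 2.0])
w = rgd_weights(losses)

# The re-weighted loss replaces ERM's uniform average and up-weights
# the hardest example, so it exceeds the plain mean.
weighted_loss = (w * losses).sum()
```

With `temperature → ∞` the weights approach uniform and the objective recovers standard ERM.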

Google Research embarks on effort to map a mouse brain

Posted by Michał Januszewski, Research Scientist, Google Research

The human brain is perhaps the most computationally complex machine in existence, consisting of networks of billions of cells. Researchers currently don’t understand the full picture of how glitches in its network machinery contribute to mental illnesses and other diseases, such…

Shared by Google AI Technology September 26, 2023

Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve unseen tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications…

Shared by Google AI Technology September 21, 2023
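The distillation setup the title describes can be sketched as a multi-task training recipe: a small model is trained to predict both the task label and an LLM-generated rationale, with the two tasks distinguished by an input prefix. The prefixes and example strings below are illustrative assumptions, not from the post.

```python
def training_examples(x, label, rationale):
    """Build the two (input, target) pairs used to train one small
    seq2seq model: predict the label, and reproduce the rationale."""
    return [
        ("[label] " + x, label),          # label-prediction task
        ("[rationale] " + x, rationale),  # rationale-generation task
    ]

pairs = training_examples(
    "Is the review positive? 'Great film.'",
    "positive",
    "The word 'Great' expresses strong approval.",
)
```

The rationale task acts as auxiliary supervision during training; at inference time only the label task is used, so the small model stays cheap to deploy.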

To trust AI, it must be open and transparent. Period.

[SPONSOR OPINION] By Heather Meeker, OSS Capital

Machine learning has been around for a long time. But in late 2022, advancements in deep learning and large language models started to change the game and come into the public eye. And people started thinking, “We love Open Source software,…

Shared by voicesofopensource September 14, 2023