Re-weighted gradient descent via distributionally robust optimization

Posted by Ramnath Kumar, Pre-Doctoral Researcher, and Arun Sai Suggala, Research Scientist, Google Research

Deep neural networks (DNNs) have become essential for solving a wide range of tasks, from standard supervised learning (image classification using ViT) to meta-learning. The most commonly used paradigm for learning DNNs is empirical risk minimization (ERM), which …

Shared by Google AI Technology September 28, 2023
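The excerpt above contrasts ERM's uniform averaging with a re-weighted alternative. As a toy illustration of the general idea (not the algorithm from the post), the sketch below trains a linear model with a softmax-over-losses weighting in the spirit of KL-regularized DRO, so harder examples get larger weight; the data, temperature, and step size are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of loss-based example re-weighting, in the spirit of
# KL-regularized distributionally robust optimization (DRO). This is an
# illustrative stand-in, not the method from the post: weights are a
# softmax over per-example losses, replacing ERM's uniform average.

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))           # features
w_true = np.array([1.0, -2.0, 0.5])    # ground-truth linear model
y = X @ w_true + 0.1 * rng.normal(size=64)

w = np.zeros(3)
lr, temperature = 0.1, 10.0            # temperature controls how sharp the re-weighting is

for _ in range(300):
    residuals = X @ w - y
    losses = 0.5 * residuals**2        # per-example squared loss
    # DRO-style weights: softmax of losses (max-shifted for numerical stability),
    # so high-loss examples receive larger weight than low-loss ones.
    weights = np.exp((losses - losses.max()) / temperature)
    weights /= weights.sum()
    # Weighted gradient step instead of ERM's uniform average over examples.
    w -= lr * (X.T @ (weights * residuals))
```

With a large temperature the weights approach uniform and the update reduces to ordinary ERM gradient descent; smaller temperatures concentrate weight on the hardest examples.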

Google Research embarks on effort to map a mouse brain

Posted by Michał Januszewski, Research Scientist, Google Research

The human brain is perhaps the most computationally complex machine in existence, consisting of networks of billions of cells. Researchers currently don’t understand the full picture of how glitches in its network machinery contribute to mental illnesses and other diseases, such …

Shared by Google AI Technology September 26, 2023

Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes

Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve unseen new tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications …

Shared by Google AI Technology September 21, 2023
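The zero-shot vs. few-shot prompting paradigm the excerpt mentions can be sketched as plain prompt construction. The sentiment task, labels, and prompt wording below are illustrative assumptions, not the actual setup from the post.

```python
# Minimal sketch of zero-shot vs. few-shot prompting: the same task
# instruction, with or without labeled demonstrations prepended.
# Task and wording are hypothetical, for illustration only.

def zero_shot_prompt(review: str) -> str:
    # Zero-shot: task instruction only, no demonstrations.
    return (
        "Classify the sentiment as positive or negative.\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot_prompt(examples: list[tuple[str, str]], review: str) -> str:
    # Few-shot: prepend a handful of labeled demonstrations, so the
    # model can infer the task's input/output format from examples.
    demos = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in examples)
    return (
        "Classify the sentiment as positive or negative.\n"
        f"{demos}\nReview: {review}\nSentiment:"
    )

demos = [("Great film, loved it.", "positive"),
         ("Terrible pacing.", "negative")]
prompt = few_shot_prompt(demos, "A delightful surprise.")
```

Both prompts end at "Sentiment:" so the model's continuation is the predicted label; distillation approaches like the one in the post additionally elicit and train on intermediate rationales.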

To trust AI, it must be open and transparent. Period.

[SPONSOR OPINION] By Heather Meeker, OSS Capital

Machine learning has been around for a long time. But in late 2022, advances in deep learning and large language models started to change the game and come into the public eye. And people started thinking, “We love Open Source software, …

Shared by voicesofopensource September 14, 2023