A decoder-only foundation model for time-series forecasting

Posted by Rajat Sen and Yichen Zhou, Google Research

Time-series forecasting is ubiquitous in domains such as retail, finance, manufacturing, healthcare, and the natural sciences. In retail use cases, for example, it has been observed that improving demand forecasting accuracy can meaningfully reduce inventory costs and increase revenue. Deep …

Shared by Google AI Technology February 2, 2024

MobileDiffusion: Rapid text-to-image generation on-device

Posted by Yang Zhao, Senior Software Engineer, and Tingbo Hou, Senior Staff Software Engineer, Core ML

Text-to-image diffusion models have shown exceptional capabilities in generating high-quality images from text prompts. However, leading models feature billions of parameters and are consequently expensive to run, requiring powerful desktops or servers (e.g., …

Shared by Google AI Technology January 31, 2024

Advancements in machine learning for machine learning

Posted by Phitchaya Mangpo Phothilimthana, Staff Research Scientist, Google DeepMind, and Bryan Perozzi, Senior Staff Research Scientist, Google Research

With the recent and accelerated advances in machine learning (ML), machines can understand natural language, engage in conversations, draw images, create videos, and more. Modern ML models are programmed and …

Shared by Google AI Technology December 15, 2023

Alternating updates for efficient transformers

Posted by Xin Wang, Software Engineer, and Nishanth Dikkala, Research Scientist, Google Research

Contemporary deep learning models have been remarkably successful in many domains, ranging from natural language to computer vision. Transformer neural networks (transformers) are a popular deep learning architecture that today forms the foundation for most tasks …

Shared by Google AI Technology November 7, 2023

Zero-shot adaptive prompting of large language models

Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team

Recent advances in large language models (LLMs) are very promising, as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in …

Shared by Google AI Technology November 2, 2023

Looking back at wildfire research in 2023

Posted by Yi-Fan Chen, Software Engineer, and Carla Bromberg, Program Lead, Google Research

Wildfires are becoming larger and affecting more and more communities around the world, often resulting in large-scale devastation. Just this year, communities have experienced catastrophic wildfires in Greece, Maui, and Canada, to name a few. While …

Shared by Google AI Technology October 25, 2023