Scaling multimodal understanding to long videos

Posted by Isaac Noble, Software Engineer, Google Research, and Anelia Angelova, Research Scientist, Google DeepMind. When building machine learning models for real-life applications, we need to consider inputs from multiple modalities in order to capture various aspects of the world around us. For example, audio, video, and text all…

Shared by Google AI Technology November 14, 2023

An opportunity agenda for AI

Today we’re sharing an AI Opportunity Agenda to provide concrete policy recommendations to help AI benefit as many people as possible. Original source: blog.google/technology/ai/

Enabling large-scale health studies for the research community

Posted by Chintan Ghate, Software Engineer, and Diana Mincu, Research Engineer, Google Research. As consumer technologies like fitness trackers and mobile phones become more widely used for health-related data collection, so does the opportunity to leverage these data pathways to study and advance our understanding of medical conditions. We…

Shared by Google AI Technology November 10, 2023

UNDP KM strategy

Linked below is an excellent video on the 2022 KM strategy from the UN Development Programme (UNDP). Good to see the focus on culture and networks. Original source: nickmilton.com

Alternating updates for efficient transformers

Posted by Xin Wang, Software Engineer, and Nishanth Dikkala, Research Scientist, Google Research. Contemporary deep learning models have been remarkably successful in many domains, ranging from natural language to computer vision. Transformer neural networks (transformers) are a popular deep learning architecture that today comprises the foundation for most tasks…

Shared by Google AI Technology November 7, 2023

Zero-shot adaptive prompting of large language models

Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team. Recent advances in large language models (LLMs) are very promising, as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in…

Shared by Google AI Technology November 2, 2023
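
The last excerpt above contrasts few-shot and zero-shot setups. As a minimal illustrative sketch (not taken from the post; the sentiment-classification task and wording below are hypothetical, chosen only to show the two setups side by side), the difference is simply whether worked demonstrations are included in the prompt:

# Zero-shot: the model receives only an instruction and the input to solve.
zero_shot_prompt = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery barely lasts two hours.\n"
    "Sentiment:"
)

# Few-shot: the same instruction, preceded by a handful of worked examples
# (demonstrations) the model can imitate in context.
few_shot_prompt = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: I loved the crisp display and the fast setup.\n"
    "Sentiment: positive\n"
    "Review: The hinge broke within a week.\n"
    "Sentiment: negative\n"
    "Review: The battery barely lasts two hours.\n"
    "Sentiment:"
)

print(zero_shot_prompt)
print(few_shot_prompt)

In both cases the prompt string is all the model sees; no gradient updates or task-specific fine-tuning are involved, which is what the excerpt means by problem-solving "without explicit training on these tasks."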