Zero-shot adaptive prompting of large language models

Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team. Recent advances in large language models (LLMs) are very promising, as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in…

Read More
Shared by Google AI Technology November 2, 2023

Lesson Learning in NATO (video)

Here’s a great video from NATO about their lessons learned capability. View Original Source (nickmilton.com)

Supporting benchmarks for AI safety with MLCommons

Posted by Anoop Sinha, Technology and Society, and Marian Croak, Google Research, Responsible AI and Human Centered Technology team. Standard benchmarks are agreed-upon ways of measuring important product qualities, and they exist in many fields. Some standard benchmarks measure safety: for example, when a car manufacturer touts a…

Read More
Shared by Google AI Technology October 26, 2023

Looking back at wildfire research in 2023

Posted by Yi-Fan Chen, Software Engineer, and Carla Bromberg, Program Lead, Google Research. Wildfires are becoming larger and affecting more and more communities around the world, often resulting in large-scale devastation. Just this year, communities have experienced catastrophic wildfires in Greece, Maui, and Canada, to name a few. While…

Read More
Shared by Google AI Technology October 25, 2023

Nerdearla reflects on openness and inclusivity

Last month, OSI affiliate sysarmy organized the 10th edition of Nerdearla, one of the largest Open Source conferences in Latin America, bringing together a community of 10,000+ participants in Buenos Aires and 25,000+ online. Nerdearla is 100% free for attendees both online and in-person, relying solely on the companies…

Read More
Shared by voicesofopensource October 25, 2023

Batch calibration: Rethinking calibration for in-context learning and prompt engineering

Posted by Han Zhou, Student Researcher, and Subhrajit Roy, Senior Research Scientist, Google Research. Prompting large language models (LLMs) has become an efficient learning paradigm for adapting LLMs to a new task by conditioning on human-designed instructions. The remarkable in-context learning (ICL) ability of LLMs also leads to efficient…

Read More
Shared by Google AI Technology October 13, 2023