Overcoming leakage on error-corrected quantum processors

Posted by Kevin Miao and Matt McEwen, Research Scientists, Quantum AI Team. The qubits that make up Google quantum devices are delicate and noisy, so it’s necessary to incorporate error correction procedures that identify and account for qubit errors on the way to building a useful quantum computer. Two…

Shared by Google AI Technology November 9, 2023

Boosting sustainable solutions from Sweden

Today, we’re announcing the Swedish recipients of the Google.org Impact Challenge: Tech for Social Good, who will receive technical support and 3 million euros in funding for char… Originally published at blog.google/technology/ai/.

Alternating updates for efficient transformers

Posted by Xin Wang, Software Engineer, and Nishanth Dikkala, Research Scientist, Google Research. Contemporary deep learning models have been remarkably successful in many domains, ranging from natural language to computer vision. Transformer neural networks (transformers) are a popular deep learning architecture that today comprise the foundation for most tasks…

Shared by Google AI Technology November 7, 2023

Zero-shot adaptive prompting of large language models

Posted by Xingchen Wan, Student Researcher, and Ruoxi Sun, Research Scientist, Cloud AI Team. Recent advances in large language models (LLMs) are very promising, as reflected in their capability for general problem-solving in few-shot and zero-shot setups, even without explicit training on these tasks. This is impressive because in…

Shared by Google AI Technology November 2, 2023

Supporting benchmarks for AI safety with MLCommons

Posted by Anoop Sinha, Technology and Society, and Marian Croak, Google Research, Responsible AI and Human Centered Technology team. Standard benchmarks are agreed-upon ways of measuring important product qualities, and they exist in many fields. Some standard benchmarks measure safety: for example, when a car manufacturer touts a…

Shared by Google AI Technology October 26, 2023

Spoken question answering and speech continuation using a spectrogram-powered LLM

Posted by Eliya Nachmani, Research Scientist, and Alon Levkovitch, Student Researcher, Google Research. The goal of natural language processing (NLP) is to develop computational models that can understand and generate natural language. By capturing the statistical patterns and structures of text-based natural language, language models can predict and generate…

Shared by Google AI Technology October 26, 2023