MLGO: A Machine Learning Framework for Compiler Optimization

Posted by Yundi Qian, Software Engineer, Google Research, and Mircea Trofin, Software Engineer, Google Core. The question of how to compile faster and smaller code arose together with the birth of modern computers. Better code optimization can significantly reduce the operational cost of large datacenter applications. The size of …

Shared by Google AI Technology July 6, 2022

Why facilitation is crucial to the success of KM processes

As long as Knowledge Management involves face-to-face or virtual interactions between teams and individuals, facilitation has a key role to play. Good facilitation is essential to effective face-to-face KM processes. Effectively identifying and exchanging knowledge in a meeting requires high-quality interactions between people. These interactions need to be …

Shared by Nick Milton July 4, 2022

Identifying Disfluencies in Natural Speech

Posted by Dan Walker and Dan Liebling, Software Engineers, Google Research. People don’t write in the same way that they speak. Written language is controlled and deliberate, whereas transcripts of spontaneous speech (like interviews) are hard to read because speech is disorganized and less fluent. One aspect that makes …

Shared by Google AI Technology June 30, 2022

Mahima Pushkarna is making data easier to understand

Five years ago, information designer Mahima Pushkarna joined Google to make data easier to understand. As a senior interaction designer on the People + AI Research (PAIR) team, she designed Data Cards to help everyone better understand the contexts of the data they are using. The Data Cards Playbook …

Shared by Google AI Technology June 30, 2022

Hyperparameter optimization for fine-tuning pre-trained transformer models from Hugging Face

Large attention-based transformer models have achieved massive gains in natural language processing (NLP). However, training these gigantic networks from scratch requires a tremendous amount of data and compute. For smaller NLP datasets, a simple yet effective strategy is to use a pre-trained transformer, usually trained in an unsupervised fashion …

Shared by AWS Machine Learning June 29, 2022
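
The excerpt above only hints at the fine-tuning recipe the AWS post covers. As a minimal sketch of the general idea, not the post's actual code, the snippet below fine-tunes a pre-trained Hugging Face transformer on a small text-classification dataset and runs a hyperparameter search over learning rate and batch size using the Transformers Trainer's built-in hyperparameter_search API with an Optuna backend; the bert-base-uncased checkpoint, the imdb dataset, the subset sizes, and the search space are all illustrative assumptions.

```python
# Minimal sketch: fine-tune a pre-trained transformer on a small dataset and
# tune hyperparameters with Trainer.hyperparameter_search (requires `optuna`).
# Model, dataset, and search space are assumptions, not taken from the article.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"   # assumed pre-trained checkpoint
dataset = load_dataset("imdb")     # assumed small text-classification dataset
tokenizer = AutoTokenizer.from_pretrained(model_name)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

encoded = dataset.map(tokenize, batched=True)

def model_init():
    # A fresh copy of the pre-trained weights for every trial, so each
    # hyperparameter setting starts from the same point.
    return AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

args = TrainingArguments(output_dir="hpo_output", evaluation_strategy="epoch")
trainer = Trainer(
    model_init=model_init,
    args=args,
    train_dataset=encoded["train"].shuffle(seed=0).select(range(2000)),
    eval_dataset=encoded["test"].shuffle(seed=0).select(range(500)),
)

# Search learning rate and batch size; the default objective is the eval loss.
best_run = trainer.hyperparameter_search(
    hp_space=lambda trial: {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "per_device_train_batch_size": trial.suggest_categorical(
            "per_device_train_batch_size", [8, 16, 32]),
    },
    n_trials=10,
    direction="minimize",
    backend="optuna",
)
print(best_run)
```

Running the sketch prints the best trial's hyperparameters; in practice the search space, subset sizes, and number of trials would be chosen to fit the available compute budget.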