Distilling step-by-step: Outperforming larger language models with less training data and smaller model sizes
Posted by Cheng-Yu Hsieh, Student Researcher, and Chen-Yu Lee, Research Scientist, Cloud AI Team

Large language models (LLMs) have enabled a new data-efficient learning paradigm wherein they can be used to solve new, unseen tasks via zero-shot or few-shot prompting. However, LLMs are challenging to deploy for real-world applications.
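As a brief illustration (not from the post itself; the task and exemplars are hypothetical), few-shot prompting amounts to prepending a handful of input/output exemplars to the new query before sending it to the LLM. The model call is omitted; only the prompt construction is sketched:

```python
# Minimal sketch of few-shot prompting. Only prompt construction is shown;
# the LLM call itself is omitted. Task and exemplars are hypothetical.
def build_few_shot_prompt(examples, query):
    """Concatenate (input, output) exemplars, then append the unseen query."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {query}\nOutput:")  # model completes from here
    return "\n\n".join(blocks)

# A toy sentiment task: two labeled exemplars, one unseen query.
examples = [
    ("The movie was a delight", "positive"),
    ("A tedious, joyless slog", "negative"),
]
prompt = build_few_shot_prompt(examples, "An instant classic")
print(prompt)
```

Zero-shot prompting is the degenerate case with an empty exemplar list: the model sees only a task description and the query.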
September 21, 2023