Evaluating Syntactic Abilities of Language Models
Posted by Jason Wei, AI Resident, and Dan Garrette, Research Scientist, Google Research

In recent years, pre-trained language models such as BERT and GPT-3 have seen widespread use in natural language processing (NLP). By training on large volumes of text, language models acquire broad knowledge about the world and achieve strong performance on various NLP benchmarks…