Published: May 19, 2020
Not on the current edition
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
May 2020
Assess

In the previous edition of the Radar we featured BERT, a key milestone in the NLP landscape. Last year, Baidu released ERNIE 2.0 (Enhanced Representation through kNowledge IntEgration), which outperformed BERT on seven GLUE language-understanding tasks and on all nine Chinese NLP tasks. ERNIE, like BERT, provides unsupervised pretrained language models that can be fine-tuned by adding output layers to create state-of-the-art models for a variety of NLP tasks. ERNIE differs from traditional pretraining methods in that it is a continual pretraining framework: instead of training with a small number of pretraining objectives, it can incrementally introduce a large variety of pretraining tasks to help the model efficiently learn language representations. We're pretty excited about the advancements in NLP and are looking forward to experimenting with ERNIE on our projects.
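As a rough sketch of what the fine-tuning workflow described above can look like, the snippet below adds a classification head to a pretrained ERNIE 2.0 encoder and trains it on a GLUE task. It assumes the Hugging Face transformers and datasets libraries and a community-published checkpoint (nghuyong/ernie-2.0-base-en); the model name, task and hyperparameters are illustrative assumptions, not part of the blip.

    # Minimal fine-tuning sketch: pretrained ERNIE 2.0 encoder + task-specific output layer.
    # The checkpoint name and training settings below are assumptions for illustration.
    from transformers import (
        AutoModelForSequenceClassification,
        AutoTokenizer,
        Trainer,
        TrainingArguments,
    )
    from datasets import load_dataset

    model_name = "nghuyong/ernie-2.0-base-en"  # assumed community-published checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    # The classification head added here is the "output layer" the blip describes;
    # num_labels depends on the downstream task (2 for binary sentiment).
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

    # One of the GLUE language-understanding tasks (SST-2 sentiment classification).
    dataset = load_dataset("glue", "sst2")

    def tokenize(batch):
        return tokenizer(batch["sentence"], truncation=True,
                         padding="max_length", max_length=128)

    encoded = dataset.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ernie-sst2",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=encoded["train"],
        eval_dataset=encoded["validation"],
    )
    trainer.train()

The pretrained encoder stays the same across tasks; only the small output layer and the fine-tuning data change, which is what makes this family of models so broadly reusable.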
