
Transfer learning for NLP

Last updated: May 19, 2020
NOT ON THE CURRENT EDITION
This blip is not on the current edition of the Radar. If it was on one of the last few editions, it is likely that it is still relevant. If the blip is older, it might no longer be relevant and our assessment might be different today. Unfortunately, we simply don't have the bandwidth to continuously review blips from previous editions of the Radar.
May 2020
Trial

We had this technique in Assess previously. Innovation in the NLP landscape continues at a great pace, and we're able to leverage these innovations in our projects thanks to ubiquitous transfer learning for NLP. Scores on the GLUE benchmark (a suite of language-understanding tasks) have improved dramatically over the past couple of years, with the average moving from 70.0 at launch to some of the leaders crossing 90.0 as of April 2020. Many of our NLP projects make significant progress by starting from pretrained models such as ELMo, BERT and ERNIE and then fine-tuning them for the needs of the project.
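As an illustration of this workflow, here is a minimal sketch of fine-tuning a pretrained BERT model for text classification with the open source Hugging Face transformers library and PyTorch. The model name, toy data and hyperparameters are placeholder assumptions for the sketch, not recommendations from the Radar.

```python
# Sketch: fine-tune a pretrained BERT encoder for binary text classification.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical toy data; in a real project this comes from your own corpus.
texts = ["great service, would recommend", "terrible experience, never again"]
labels = torch.tensor([1, 0])

# Load pretrained weights; a fresh classification head is added on top of the encoder.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize the texts into padded tensors.
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Fine-tune: a few passes over the task data are often enough because the
# encoder already carries general language knowledge from pretraining.
optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    optimizer.zero_grad()
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
```

The key design point is that only a small classification head is trained from scratch; the pretrained encoder is merely adjusted with a low learning rate, which is what keeps the data and compute requirements modest.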

Apr 2019
Assess

Transfer learning has been quite effective within the field of computer vision, reducing the time needed to train a model by reusing existing models. Those of us who work in machine learning are excited that the same techniques can now be applied to natural language processing (NLP), following the publication of ULMFiT along with open source pretrained models and code examples. We think transfer learning for NLP will significantly reduce the effort to create systems dealing with text classification.
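For a sense of what this looks like in practice, below is a minimal sketch of a ULMFiT-style classifier using the fastai library (v2 API assumed): a classifier is built on top of an AWD_LSTM language model pretrained on Wikipedia and then fine-tuned on labelled data. The toy DataFrame and hyperparameters are assumptions for the sketch; the full ULMFiT recipe also fine-tunes the language model on domain text before training the classifier.

```python
# Sketch: transfer learning for text classification with fastai (ULMFiT-style).
import pandas as pd
from fastai.text.all import TextDataLoaders, text_classifier_learner, AWD_LSTM, accuracy

# Hypothetical labelled data; replace with your own corpus.
df = pd.DataFrame({
    "text": ["great service, would recommend", "terrible experience, never again"],
    "label": ["positive", "negative"],
})

# Build dataloaders and a classifier on top of the pretrained AWD_LSTM encoder.
dls = TextDataLoaders.from_df(df, text_col="text", label_col="label", valid_pct=0.5)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)

# fine_tune first trains the new classification head, then unfreezes and
# trains the whole network at a lower learning rate.
learn.fine_tune(4, 1e-2)
```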

Published: Apr 24, 2019
