Papers Read on AI

NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework


Pretrained language models have become the standard approach for many NLP tasks due to their strong performance, but they are very expensive to train. We propose TLM, a simple and efficient learning framework that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses the task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language-modeling objective from scratch.
2021: Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang
https://arxiv.org/pdf/2111.04130v1.pdf
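
The abstract describes the core TLM recipe: use the labeled task data as queries to retrieve a small, task-relevant slice of a general corpus, then train a single model from scratch on both the supervised task objective and a language-modeling objective. The sketch below is a minimal illustration of that recipe, not the authors' implementation: it assumes BM25 retrieval via the rank_bm25 package, a toy bag-of-words encoder with two heads, a simplified stand-in for the masked-LM loss, and a hypothetical weight `rho` on the LM term; the paper uses a Transformer encoder and much larger retrieved sets.

```python
import torch
import torch.nn as nn
from rank_bm25 import BM25Okapi  # assumed retrieval backend, not the authors' code

# --- Step 1: use labeled task data as queries to retrieve a tiny subset
#     of a large general corpus (illustrative toy corpus below). ---
general_corpus = [
    "the market rallied after strong earnings reports",
    "a new species of frog was discovered in the rainforest",
    "the team won the championship after a dramatic final",
    "central banks raised interest rates to curb inflation",
]
task_texts = ["stocks fell as inflation data surprised investors"]
task_labels = torch.tensor([0])  # hypothetical label: 0 = finance

bm25 = BM25Okapi([doc.split() for doc in general_corpus])
retrieved = bm25.get_top_n(task_texts[0].split(), general_corpus, n=2)

# --- Step 2: build a tiny vocabulary and encode task + retrieved texts. ---
all_texts = task_texts + retrieved
vocab = {w: i + 1 for i, w in enumerate(sorted({w for t in all_texts for w in t.split()}))}
MAX_LEN = 8

def encode(text):
    ids = [vocab[w] for w in text.split()][:MAX_LEN]
    return ids + [0] * (MAX_LEN - len(ids))  # 0 = padding id

task_ids = torch.tensor([encode(t) for t in task_texts])
lm_ids = torch.tensor([encode(t) for t in all_texts])

# --- Step 3: one shared encoder with two heads (task head + LM head). ---
class TinyTLM(nn.Module):
    def __init__(self, vocab_size, dim=32, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.lm_head = nn.Linear(dim, vocab_size)    # language-modeling head
        self.cls_head = nn.Linear(dim, num_classes)  # task (classification) head

    def forward(self, token_ids):
        h = self.embed(token_ids).mean(dim=1)        # crude pooled representation
        return self.lm_head(h), self.cls_head(h)

model = TinyTLM(vocab_size=len(vocab) + 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
rho = 1.0  # hypothetical weight on the LM term

# --- Step 4: jointly optimize the task loss and the LM loss from scratch. ---
for step in range(100):
    # Task objective on the labeled task data.
    _, cls_logits = model(task_ids)
    task_loss = nn.functional.cross_entropy(cls_logits, task_labels)

    # Crude stand-in for the LM objective: predict each sequence's final
    # token from the preceding tokens (pad targets are ignored).
    context, target = lm_ids[:, :-1], lm_ids[:, -1]
    lm_logits, _ = model(context)
    lm_loss = nn.functional.cross_entropy(lm_logits, target, ignore_index=0)

    loss = task_loss + rho * lm_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Everything here (corpus, labels, model size, loss weighting) is illustrative only; the point is the shape of the pipeline, retrieval followed by a single joint training run with no pretrained checkpoint.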

Papers Read on AI, by Rob

3.7 (3 ratings)