Daily Paper Cast

MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion


🤗 Upvotes: 13 | cs.CL

Authors:

Xintong Hao, Ke Shen, Chenggang Li

Title:

MAGA: MAssive Genre-Audience Reformulation to Pretraining Corpus Expansion

Arxiv:

http://arxiv.org/abs/2502.04235v1

Abstract:

Despite the remarkable capabilities of large language models across various tasks, their continued scaling faces a critical challenge: the scarcity of high-quality pretraining data. While model architectures continue to evolve, natural language data struggles to scale up. To tackle this bottleneck, we propose the MAssive Genre-Audience (MAGA) reformulation method, which systematically synthesizes diverse, contextually rich pretraining data from existing corpora. This work makes three main contributions: (1) We propose the MAGA reformulation method, a lightweight and scalable approach to pretraining corpus expansion, and build the 770B-token MAGACorpus. (2) We evaluate MAGACorpus under different data-budget scaling strategies, demonstrating consistent improvements across model sizes (134M-13B) and establishing the necessity of large-scale synthetic data for pretraining next-generation language models. (3) Through comprehensive analysis, we investigate the impact of prompt engineering on synthetic-training collapse and reveal limitations of conventional collapse-detection metrics based on validation loss. Our work shows that MAGA can substantially expand training datasets while maintaining quality, offering a reliable pathway for scaling models beyond data limitations.
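The abstract stays at a high level, so here is a minimal sketch of what genre-audience reformulation could look like in practice. This is an illustration, not the paper's pipeline: the GENRES and AUDIENCES lists, the prompt template, the pairs_per_doc parameter, and the llm_generate callable are all hypothetical stand-ins.

```python
# Hypothetical sketch of genre-audience reformulation (not the authors' code).
# For each source document, sample (genre, audience) pairs and ask an LLM to
# rewrite the document for that pair, multiplying the corpus size.

import itertools
import random

# Illustrative lists; the paper's actual genre/audience taxonomy is not shown here.
GENRES = ["textbook chapter", "news explainer", "dialogue", "technical blog post"]
AUDIENCES = ["middle-school students", "domain experts", "casual readers"]

REFORMULATION_PROMPT = (
    "Rewrite the following document as a {genre} aimed at {audience}. "
    "Preserve the factual content; adapt tone, structure, and vocabulary.\n\n"
    "Document:\n{document}"
)

def reformulate_corpus(documents, llm_generate, pairs_per_doc=3, seed=0):
    """Expand a corpus by reformulating each document for several sampled
    genre-audience pairs. `llm_generate` is any text-in, text-out callable
    (e.g., a wrapper around an LLM API)."""
    rng = random.Random(seed)
    all_pairs = list(itertools.product(GENRES, AUDIENCES))
    expanded = []
    for doc in documents:
        # Sample distinct pairs so each rewrite targets a different register.
        for genre, audience in rng.sample(all_pairs, pairs_per_doc):
            prompt = REFORMULATION_PROMPT.format(
                genre=genre, audience=audience, document=doc
            )
            expanded.append(llm_generate(prompt))
    return expanded
```

With three pairs per document, the expanded set is roughly three times the source corpus (four times if the originals are kept), which is the kind of multiplier a 770B-token expansion implies; the actual factor used in the paper may differ.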

Daily Paper Cast, by Jingwen Liang and Gengyu Wang