HuggingFace Daily AI Paper Digest

2025.01.29 | RL generalizes better while SFT yields stable outputs; FP4 quantization cuts training cost while preserving accuracy.


The 8 papers in this episode:

[00:26] 🧠 SFT Memorizes, RL Generalizes: A Comparative Study of Foundation Model Post-training

[01:07] ⚡ Optimizing Large Language Model Training Using FP4 Quantization

[01:47] 📚 Over-Tokenized Transformer: Vocabulary is Generally Worth Scaling

[02:30] 🧠 Open Problems in Mechanistic Interpretability

[03:14] 🌐 DiffSplat: Repurposing Image Diffusion Models for Scalable Gaussian Splat Generation

[03:58] 🔍 Low-Rank Adapters Meet Neural Architecture Search for LLM Compression

[04:41] 🌐 IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding

[05:27] 📚 Histoires Morales: A French Dataset for Assessing Moral Alignment

【Follow Us】

You can also find us on the platform below for more beyond the podcast:

Xiaohongshu: AI速递


HuggingFace Daily AI Paper Digest, by duan