
The 8 papers in this episode:
[00:28] 💡 Vision-Language-Vision Auto-Encoder: Scalable Knowledge Distillation from Diffusion Models
[01:27] 🤖 EXAONE 4.0: Unified Large Language Models Integrating Non-reasoning and Reasoning Modes
[02:24] ⚖ Scaling Laws for Optimal Data Mixtures
[03:12] 🔬 Can Multimodal Foundation Models Understand Schematic Diagrams? An Empirical Study on Information-Seeking QA over Scientific Papers
[03:58] 🤝 AgentsNet: Coordination and Collaborative Reasoning in Multi-Agent LLMs
[04:50] 🦠 LLMalMorph: On The Feasibility of Generating Variant Malware using Large-Language-Models
[05:38] 🤖 OpenCodeReasoning-II: A Simple Test Time Scaling Approach via Self-Critique
[06:25] 🧠 Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs
【Follow Us】
You can also find us on the following platform for more information beyond the podcast:
Xiaohongshu (小红书): AI速递