
The 5 papers in this episode:
[00:47] TOP1 (🔥169) | 💡 Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning
[02:55] TOP2 (🔥130) | 🧠 Beyond the 80/20 Rule: High-Entropy Minority Tokens Drive Effective Reinforcement Learning for LLM Reasoning
[05:06] TOP3 (🔥115) | 🧠 ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models
[07:27] TOP4 (🔥89) | 🧠 AlphaOne: Reasoning Models Thinking Slow and Fast at Test Time
[09:46] TOP5 (🔥75) | 🤖 SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics
【Follow Us】
You can also find us on the following platform for more content beyond the podcast:
Xiaohongshu: AI速递