Large Language Model (LLM) Talk

Mixture-of-Recursions (MoR)



Mixture-of-Recursions (MoR) is a unified framework built on a Recursive Transformer architecture, designed to enhance the efficiency of large language models. It achieves this by combining three core paradigms: parameter sharing (reusing shared layers across recursion steps), adaptive computation (dynamically assigning different processing depths to individual tokens via lightweight routers), and efficient Key-Value (KV) caching (selectively storing or sharing KV pairs). This integrated approach enables MoR to deliver large-model quality with significantly reduced computational and memory overhead, improving efficiency for both training and inference.
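The interplay of the three paradigms can be sketched with a toy example. The code below is illustrative only, not the paper's implementation: `shared_block` stands in for one Transformer layer whose weights are reused at every recursion step, the router score is assumed to be a scalar in [0, 1) per token, and the "KV cache" simply records which tokens are still active at each step (MoR only stores KV pairs for tokens routed to that depth).

```python
# Toy sketch of Mixture-of-Recursions routing (illustrative assumptions, not the paper's code).
# One shared block is applied up to max_r times; a lightweight router score
# decides, per token, how many recursion steps it receives.

def shared_block(h):
    # Stand-in for a shared Transformer layer: the same weights reused each step.
    return h * 0.9 + 0.1

def mixture_of_recursions(tokens, scores, max_r=3):
    """Apply shared_block depth(token) times, where depth comes from the router."""
    # Hypothetical router: map a score in [0, 1) to a depth in {1, ..., max_r}.
    depths = [1 + min(int(s * max_r), max_r - 1) for s in scores]
    states = list(tokens)
    kv_cache = {step: [] for step in range(max_r)}  # which tokens cache KV per step
    for step in range(max_r):
        for i, d in enumerate(depths):
            if step < d:                        # token still active at this depth
                states[i] = shared_block(states[i])
                kv_cache[step].append(i)        # store KV only for active tokens
    return states, depths, kv_cache

states, depths, kv_cache = mixture_of_recursions(
    tokens=[0.0, 0.0, 0.0], scores=[0.0, 0.5, 0.99])
print(depths)        # [1, 2, 3]: each token gets its own processing depth
print(kv_cache[2])   # [2]: only the deepest-routed token caches KV at step 2
```

Note how the per-step KV cache shrinks as shallow-routed tokens exit: this is the source of MoR's memory savings at inference time, on top of the parameter savings from reusing one block across all steps.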


Large Language Model (LLM) Talk, by AI-Talk


