Large Language Model (LLM) Talk

Mixture of Experts (MoE)

Mixture of Experts (MoE) models combine multiple sub-models, or experts, each handling a different part of the input space, with a router (gating mechanism) deciding which experts process each input. Training divides the data so that experts specialize while the router learns to direct inputs to them. Only a subset of the parameters is activated for any given input (sparse activation), and techniques such as load balancing and expert capacity limits keep training stable. MoE models can be built by upcycling a dense model or by sparse splitting. While MoEs offer faster pretraining and inference, they also present training challenges such as imbalanced routing and high resource requirements, which can be mitigated with regularization and specialized routing algorithms.
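
As a rough sketch of the routing and sparse activation described above, the following minimal PyTorch layer routes each token to its top-k experts via a softmax gate and adds a simplified load-balancing penalty. The layer sizes, the top_k value, and the form of the penalty are illustrative assumptions, not the specific setup discussed in the episode.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Sketch of a sparsely activated MoE layer with a softmax router (illustrative sizes)."""
    def __init__(self, d_model=64, d_hidden=128, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):                              # x: (num_tokens, d_model)
        probs = F.softmax(self.router(x), dim=-1)      # (num_tokens, num_experts)
        topk_probs, topk_idx = probs.topk(self.top_k, dim=-1)
        topk_probs = topk_probs / topk_probs.sum(dim=-1, keepdim=True)  # renormalize gate weights

        out = torch.zeros_like(x)
        for slot in range(self.top_k):                 # only the selected experts run per token
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += topk_probs[mask, slot].unsqueeze(-1) * expert(x[mask])

        # Simplified load-balancing penalty: pushes average routing mass toward uniform use of experts.
        usage = probs.mean(dim=0)
        aux_loss = (usage * len(self.experts)).pow(2).mean()
        return out, aux_loss

tokens = torch.randn(16, 64)      # 16 tokens with d_model=64
layer = TopKMoE()
y, aux = layer(tokens)
print(y.shape, aux.item())        # torch.Size([16, 64]); the penalty equals 1.0 at perfect balance

In a production MoE the per-token loop would be replaced by a batched dispatch with a per-expert capacity limit, which is where the load-balancing and expert-capacity techniques mentioned above come into play.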


Large Language Model (LLM) Talk, by AI-Talk