Large Language Model (LLM) Talk

LLM Training


Training large language models (LLMs) is challenging because of the enormous GPU memory and long training times required. Several parallelism paradigms enable training a single model across many GPUs, and various architectural and memory-saving designs make it possible to train very large neural networks. For a fixed compute budget, model size and the number of training tokens should be scaled in equal proportion: every doubling of model size calls for a doubling of training tokens. By this measure, current large language models are significantly under-trained. Techniques such as data parallelism, model parallelism, pipeline parallelism, and tensor parallelism distribute the training workload across devices, while CPU offloading, activation recomputation, mixed-precision training, and compression reduce memory use; illustrative sketches of the scaling rule and of the memory-saving techniques follow below.
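
The equal-scaling relationship between parameters and tokens can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative rather than taken from the episode; it assumes the commonly cited compute-optimal heuristic of roughly 20 training tokens per parameter and the standard approximation of about 6 FLOPs per parameter per training token.

```python
# Illustrative back-of-the-envelope for compute-optimal ("Chinchilla"-style)
# scaling: parameters and training tokens grow in equal proportion.
# Assumed heuristics: ~20 tokens per parameter, ~6 FLOPs per parameter per token.

def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a given model size."""
    return tokens_per_param * n_params

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

if __name__ == "__main__":
    for n_params in (1e9, 2e9, 70e9):  # 1B, 2B, 70B parameters
        tokens = compute_optimal_tokens(n_params)
        flops = training_flops(n_params, tokens)
        print(f"{n_params / 1e9:6.0f}B params -> ~{tokens / 1e9:8.0f}B tokens, ~{flops:.2e} FLOPs")
```

Note that doubling model size from 1B to 2B parameters doubles the token budget from roughly 20B to 40B tokens, matching the rule quoted above.

Two of the memory-saving techniques mentioned, mixed-precision training and activation recomputation, can likewise be sketched in a few lines of PyTorch. This is a minimal, generic training step, not code from the episode: the model, loss, and hyperparameters are placeholders, and data, tensor, or pipeline parallelism would wrap this same step using PyTorch's distributed utilities.

```python
# Minimal sketch of a memory-saving training step, assuming PyTorch is installed.
# Combines mixed precision (autocast + gradient scaling) with activation
# recomputation (torch.utils.checkpoint). Model and shapes are placeholders.
import torch
from torch.utils.checkpoint import checkpoint

device = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # no-op on CPU

def train_step(batch: torch.Tensor, targets: torch.Tensor) -> float:
    batch, targets = batch.to(device), targets.to(device)
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type=device, dtype=amp_dtype):
        # Activations inside the checkpointed call are recomputed during the
        # backward pass instead of stored, trading extra compute for less memory.
        output = checkpoint(model, batch, use_reentrant=False)
        loss = torch.nn.functional.mse_loss(output, targets)
    scaler.scale(loss).backward()  # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

if __name__ == "__main__":
    x, y = torch.randn(8, 1024), torch.randn(8, 1024)
    print(train_step(x, y))
```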


Large Language Model (LLM) Talk, by AI-Talk

Rating: 4 (4 ratings)


More shows like Large Language Model (LLM) Talk

Super Data Science: ML & AI Podcast with Jon Krohn, by Jon Krohn (303 listeners)
NVIDIA AI Podcast, by NVIDIA (341 listeners)
The Daily, by The New York Times (112,539 listeners)
Learning English from the News, by BBC Radio (266 listeners)
Thinking in English, by Thomas Wilkinson (111 listeners)
AI Agents: Top Trend of 2025, by AIAgentStore.ai (3 listeners)