
In early 2024 Andrej Karpathy stood up the llm.c repo to train GPT-2 (124M), which took the equivalent of 45 minutes on 8xH100 GPUs to reach 3.28 cross-entropy loss. By January 2025, collaborators on modded-nanogpt had brought that time down to 3 minutes. It sat near 3 minutes until July 2025, with a large swath of optimizations already applied: RoPE, value embeddings, reduce-scatter grad updates, Muon, QK Norm, ReLU^2, a custom FP8 head, skip connections, flex attention, short-long windows, attention window warmup, linear LR cooldown, and more. Yet in the last 3 months the record has fallen by another 20%, to 2 minutes and 20 seconds.
Many of the improvements behind that last 20% have not yet been published outside the modded-nanogpt repo. This post summarizes them. Not everything will generalize to larger scales, but there are core concepts that I believe are promising. Improvements [...]
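The techniques named above are not spelled out in this summary, so here is a minimal PyTorch sketch (not taken from the modded-nanogpt code; function names and shapes are illustrative assumptions) of two of them: QK Norm, which RMS-normalizes queries and keys before the attention dot product, and the ReLU^2 activation used in the MLP.

```python
# Illustrative sketch only -- not the modded-nanogpt implementation.
import torch
import torch.nn.functional as F

def rms_norm(x, eps=1e-6):
    # Normalize the last dimension to unit RMS (no learned gain, for brevity).
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

def attention_with_qk_norm(q, k, v):
    # q, k, v: (batch, heads, seq, head_dim); normalize q and k before attention.
    return F.scaled_dot_product_attention(rms_norm(q), rms_norm(k), v, is_causal=True)

def relu_squared_mlp(x, w_in, w_out):
    # ReLU^2: square the ReLU output before the down-projection.
    return F.relu(x @ w_in).square() @ w_out
```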
---
Outline:
(02:02) ML Improvements
(02:06) #1: Document Alignment
(03:26) #2: Dynamic Attention Window Management by Layer
(04:50) #3 Heterogeneous Batch Sizes
(05:31) #4 Backout: Enabling a model to back out context for predictions
(06:18) #5 Polar Express
(06:36) #6 Smear Module
(07:20) #7 Sparse Attention Gate
(07:58) #8 More Bfloat16
(08:36) #9 Softmax Skip Gate
(09:04) #10 Drop MLP Layer
(09:19) Engineering Improvements
(09:23) #1 Flash Attention 3
(09:50) #2 Parameter reshaping for shared reduce scatter
(11:15) #3 Async Data Fetch and Index
(11:44) #4 Vectorized Optimizer Step
(12:03) #5 Triton Kernel for Symmetric Matmul
(12:41) #6 Resize Lambda Parameters
(13:36) Takeaways from the Journey
The original text contained 4 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
