WAP: Weekly AI Papers

DeepSeek V3


This episode covers DeepSeek-V3, a 671B-parameter Mixture-of-Experts large language model. It discusses the model's architecture, including Multi-Head Latent Attention and an innovative auxiliary-loss-free load balancing strategy for DeepSeekMoE, and describes the training process: pre-training on 14.8 trillion tokens followed by post-training with supervised fine-tuning and reinforcement learning.
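The auxiliary-loss-free strategy mentioned above can be sketched roughly as follows: instead of adding a balancing loss term, each expert carries a bias that is added to its routing score only when selecting the top-k experts, and the bias is nudged after each step, down for overloaded experts and up for underloaded ones. The snippet below is a toy illustration of that idea, not the paper's implementation; the expert count, update rate, and the synthetic skew on expert 0 are all hypothetical choices for demonstration.

```python
import random

def topk_with_bias(scores, bias, k):
    """Select top-k experts by biased score. The bias steers routing only;
    gate weights (not shown) would still come from the raw scores."""
    ranked = sorted(range(len(scores)),
                    key=lambda i: scores[i] + bias[i], reverse=True)
    return ranked[:k]

def update_bias(bias, load, mean_load, gamma=0.01):
    """Nudge each expert's bias: down if overloaded, up if underloaded."""
    return [b - gamma if l > mean_load else b + gamma
            for b, l in zip(bias, load)]

# Toy run (hypothetical numbers): 8 experts, top-2 routing,
# 1000 tokens per step, with expert 0 artificially favored.
random.seed(0)
n_experts, k = 8, 2
bias = [0.0] * n_experts
for step in range(100):
    load = [0] * n_experts
    for _ in range(1000):
        scores = [random.random() for _ in range(n_experts)]
        scores[0] += 0.5  # skew that would dominate without balancing
        for e in topk_with_bias(scores, bias, k):
            load[e] += 1
    bias = update_bias(bias, load, sum(load) / n_experts)
print(load)
```

After the biases converge, per-expert loads hover near the uniform value (2000 tokens / 8 experts = 250 per step) despite the built-in skew, which is the balancing effect the paper achieves without an auxiliary loss term.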


paper: https://github.com/deepseek-ai/DeepSeek-V3/blob/main/DeepSeek_V3.pdf




WAP: Weekly AI Papers, by Ankit Sharma