AI: post transformers

DeepSeek-V3: A Technical Report



This paper introduces DeepSeek-V3, a large Mixture-of-Experts (MoE) model designed to advance open-source language model capabilities with improved training efficiency and performance. The document details its architecture, including an auxiliary-loss-free load-balancing strategy and a Multi-Token Prediction (MTP) objective that predicts multiple future tokens per position for better data efficiency. It further explains the infrastructure and optimizations that enable cost-effective training, such as efficient communication protocols and a low-precision training framework based on FP8. Finally, the paper outlines DeepSeek-V3's pre-training and post-training processes, including long-context extension and knowledge distillation from the DeepSeek-R1 series, along with comprehensive evaluations across benchmarks that demonstrate strong performance, particularly in coding and mathematics.
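To make the auxiliary-loss-free load-balancing idea concrete, here is a minimal sketch of how a per-expert bias can steer top-k routing without an auxiliary loss: the bias shifts which experts are selected, while gating weights still come from the raw affinities, and after each step the bias is nudged up for underloaded experts and down for overloaded ones. The shapes, hyperparameters (num_experts, top_k, gamma), and helper names below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of auxiliary-loss-free load balancing (names/values are assumptions).
import numpy as np

num_experts, top_k, gamma = 8, 2, 0.001
bias = np.zeros(num_experts)  # per-expert bias used only for expert selection

def route(affinities, bias, top_k):
    """Pick top-k experts by (affinity + bias); gate weights use raw affinities."""
    chosen = np.argsort(affinities + bias)[-top_k:]        # biased selection
    gates = affinities[chosen] / affinities[chosen].sum()  # unbiased gating weights
    return chosen, gates

def update_bias(bias, expert_load, gamma):
    """After a step, raise bias for underloaded experts, lower it for overloaded ones."""
    return bias + gamma * np.sign(expert_load.mean() - expert_load)

# Toy usage: route a batch of tokens, count per-expert load, then adjust the bias.
tokens = np.random.rand(32, num_experts)  # stand-in per-token expert affinities
load = np.zeros(num_experts)
for affinities in tokens:
    chosen, _ = route(affinities, bias, top_k)
    load[chosen] += 1
bias = update_bias(bias, load, gamma)
```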


Source: https://arxiv.org/pdf/2412.19437


AI: post transformers, by mcgrof