A May 24, 2025 paper from UT Austin introduces the Anchored Diffusion Language Model (ADLM), a new approach to discrete language modeling that addresses limitations of both autoregressive (AR) models and standard Diffusion Language Models (DLMs). AR models such as GPT-3 generate text strictly left to right, which can limit planning and complex reasoning, while existing DLMs, which generate by iterative masked-token prediction, lag behind AR models in quality. ADLM augments the DLM framework with anchor tokens (semantically important words) that guide the denoising process through a two-component architecture: an anchor network, which is trained to predict these key tokens early, and a denoising network, which predicts the remaining tokens conditioned on them.

Training is formalized through the Anchored Negative Evidence Lower Bound (ANELBO) objective. By committing to anchors first, ADLM reduces sample complexity and improves both likelihood modeling (better perplexity) and the quality of generated text. The paper also extends anchoring to AR models via Anchored Chain-of-Thought (ACoT) fine-tuning, which improves performance on math and logical reasoning benchmarks.
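The source gives the exact derivation; as a rough schematic (my notation, treating the anchor sequence $\tilde{x}$ as a latent variable marginalized inside each per-token term of the usual masked-diffusion NELBO), an anchored bound of this kind follows from Jensen's inequality:

$$
-\log p_\theta\!\left(x_0^i \mid x_t\right)
= -\log \sum_{\tilde{x}} p_{\mathrm{den}}\!\left(x_0^i \mid \tilde{x}, x_t\right) p_{\mathrm{anc}}\!\left(\tilde{x} \mid x_t\right)
\;\le\; -\,\mathbb{E}_{\tilde{x} \sim p_{\mathrm{anc}}(\cdot \mid x_t)}\!\left[\log p_{\mathrm{den}}\!\left(x_0^i \mid \tilde{x}, x_t\right)\right].
$$

Summing over masked positions and applying the standard diffusion time weighting then yields an ANELBO-style objective that trains the anchor network $p_{\mathrm{anc}}$ and the denoising network $p_{\mathrm{den}}$ jointly; the paper's actual ANELBO may differ in detail.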
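To make the two-network design concrete, here is a minimal PyTorch sketch of one training step, assuming the standard absorbing-state ([MASK]) masked-diffusion setup. Everything here (names like TokenTransformer and adlm_train_step, the precomputed anchor_pos mask, and the loss weighting) is an illustrative assumption, not the paper's code; how anchors are selected is abstracted away.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, MASK_ID, DIM = 32000, 31999, 512  # hypothetical vocabulary; last id = [MASK]

class TokenTransformer(nn.Module):
    """Tiny stand-in backbone used for both the anchor and denoising networks."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):                               # ids: (B, L) token ids
        return self.head(self.encoder(self.embed(ids)))   # logits: (B, L, VOCAB)

anchor_net = TokenTransformer()    # predicts the anchor tokens from the noisy input
denoise_net = TokenTransformer()   # predicts everything else, given the anchors

def adlm_train_step(x0, anchor_pos, t_weight=1.0):
    """One schematic training step.
    x0:         (B, L) clean token ids
    anchor_pos: (B, L) bool mask marking which positions are anchors
    t_weight:   scalar diffusion-time weight for the sampled noise level"""
    # Forward (masking) process: independently replace tokens with [MASK].
    mask = torch.rand_like(x0, dtype=torch.float) < 0.5   # fixed rate for brevity
    xt = torch.where(mask, torch.full_like(x0, MASK_ID), x0)

    # 1) Anchor network learns to recover the anchor tokens first.
    anchor_logits = anchor_net(xt)
    anchor_loss = F.cross_entropy(anchor_logits[anchor_pos], x0[anchor_pos])

    # 2) Teacher-force the denoiser: reveal the true anchors so it learns to
    #    denoise *given* anchors (at sampling time, predicted anchors are used).
    xt_anchored = torch.where(anchor_pos, x0, xt)

    # 3) Denoising network predicts the remaining masked tokens.
    denoise_logits = denoise_net(xt_anchored)
    denoise_loss = F.cross_entropy(denoise_logits[mask], x0[mask])

    # ANELBO-style combination (schematic, not the paper's exact weighting).
    return t_weight * (anchor_loss + denoise_loss)
```

The intuition behind the sample-complexity claim is visible in step 3: conditioning on a few committed anchors sharply restricts the set of plausible completions the denoiser must distinguish among.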
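Likewise, a hedged sketch of anchored generation: the anchor network commits to key tokens up front, and the denoising network iteratively unmasks the rest. The confidence-based unmasking rule is a common DLM heuristic assumed here, not necessarily the paper's sampler.

```python
@torch.no_grad()
def adlm_sample(length=64, steps=8, num_anchors=8):
    """Schematic anchored generation: commit to anchors early, then unmask."""
    xt = torch.full((1, length), MASK_ID)

    # Anchor network commits to its most confident tokens up front; these
    # stay fixed and guide every subsequent denoising step.
    conf, tokens = anchor_net(xt).softmax(-1).max(-1)     # both (1, L)
    top = conf.topk(num_anchors, dim=-1).indices
    xt.scatter_(1, top, tokens.gather(1, top))

    # Iterative unmasking, conditioned on the fixed anchors; unmasking the
    # most confident half each step is one common heuristic, not the paper's.
    for _ in range(steps):
        remaining = int((xt == MASK_ID).sum())
        if remaining == 0:
            break
        conf, tokens = denoise_net(xt).softmax(-1).max(-1)
        conf = conf.masked_fill(xt != MASK_ID, -1.0)      # only fill masked slots
        top = conf.topk(max(1, remaining // 2), dim=-1).indices
        xt.scatter_(1, top, tokens.gather(1, top))

    # Fill anything still masked after the fixed step budget.
    tokens = denoise_net(xt).argmax(-1)
    return torch.where(xt == MASK_ID, tokens, xt)
```

A real sampler would add batching, temperature, and a proper noise schedule; this only illustrates the anchor-then-denoise ordering.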
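For ACoT, the paper's exact fine-tuning format is not reproduced here; the sketch below only illustrates the general idea under an assumed data format, with anchor tokens (key intermediate quantities) emitted before the full chain of thought.

```python
def format_acot_example(question, anchors, reasoning, answer):
    """Hypothetical ACoT training string: the model is fine-tuned to emit
    its anchor tokens *before* the chain of thought (format assumed)."""
    return (
        f"Question: {question}\n"
        f"Anchors: {' '.join(anchors)}\n"
        f"Reasoning: {reasoning}\n"
        f"Answer: {answer}"
    )

print(format_acot_example(
    "If 3 pens cost $4.50, how much do 7 pens cost?",
    ["$1.50", "7", "$10.50"],          # key intermediate quantities
    "Each pen costs 4.50 / 3 = 1.50, so 7 pens cost 7 * 1.50 = 10.50.",
    "$10.50"))
```

Source: https://arxiv.org/pdf/2505.18456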