AI: post transformers

NeurIPS 2025: FlashBias: Fast Computation of Attention with Bias



The source introduces FlashBias, an algorithm that significantly accelerates the Transformer attention mechanism when an additive bias term is incorporated. Current fast-attention methods, such as those optimized for attention masks, cannot handle bias because bias terms are generally dense and continuous rather than sparse. FlashBias overcomes this limitation by exploiting the observation that attention bias matrices exhibit an inherent low-rank structure. The technique uses several decomposition methods, including exact, SVD, and neural decomposition, to represent the dense bias matrix in a much smaller, compressible form. Experiments show substantial time and memory savings when applying FlashBias across demanding models such as Large Language Models, Vision Transformers, and AlphaFold 3. This approach provides crucial efficiency for training and inference, especially for tasks involving dynamic or complex prior knowledge.
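The core idea can be illustrated with a small NumPy sketch (an illustrative reconstruction, not the paper's implementation): if the bias matrix B factors as U V^T with small rank r, the factors can be folded into augmented query and key matrices so that a single matmul produces Q K^T / sqrt(d) + B without ever materializing the dense n-by-n bias. All array names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 8, 16, 3  # sequence length, head dim, bias rank (illustrative)

Q = rng.standard_normal((n, d))
K = rng.standard_normal((n, d))
# A dense additive bias that happens to have exact rank-r structure.
B = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))

# Truncated SVD recovers compact factors: B ~= U @ V.T.
u, s, vt = np.linalg.svd(B)
U = u[:, :r] * np.sqrt(s[:r])
V = vt[:r].T * np.sqrt(s[:r])

# Fold the scaled queries and the bias factors into augmented matrices.
# One matmul now yields Q K^T / sqrt(d) + B implicitly, which is what
# lets a fused FlashAttention-style kernel skip the dense bias entirely.
Q_aug = np.concatenate([Q / np.sqrt(d), U], axis=1)
K_aug = np.concatenate([K, V], axis=1)

scores_naive = Q @ K.T / np.sqrt(d) + B
scores_fast = Q_aug @ K_aug.T
print(np.allclose(scores_naive, scores_fast))  # True
```

For a truly low-rank bias the identity is exact; for real biases the paper's SVD and neural decompositions give a compressed approximation at a chosen rank.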


Source:

https://openreview.net/pdf?id=7L4NvUtZY3


By mcgrof