Large Language Model (LLM) Talk

Linear Transformers



Linear Transformers address a core computational limitation of standard Transformers: self-attention scales quadratically, O(n^2), with input sequence length. Linear Transformers aim for linear complexity, O(n), which makes much longer sequences practical. They achieve this through techniques such as low-rank approximations, local attention, or kernelized attention; examples include Linformer (low-rank projections), Longformer (sliding-window attention), and Performer (kernelized attention). Efficient attention, a form of linear attention, interprets the keys as template attention maps and aggregates the values into global context vectors, in contrast to dot-product attention, which synthesizes a pixel-wise attention map for every query position. This makes attention far more resource-efficient in domains with large inputs or tight compute and memory constraints.
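To make the kernelized-attention idea concrete, here is a minimal NumPy sketch (an illustration, not code from the episode or from any of the models named above). It factors the query-key similarity through a feature map phi(x) = elu(x) + 1, a choice popularized by the "Transformers are RNNs" line of linear-attention work; the key-value summary phi(K)^T V is computed once and reused for every query, so cost grows linearly with sequence length instead of quadratically.

```python
import numpy as np

def elu_feature_map(x):
    # phi(x) = elu(x) + 1: a positive feature map commonly used in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized (linear) attention in O(n * d^2) time.

    Q, K: (n, d) query/key matrices; V: (n, d_v) value matrix.
    Standard softmax attention forms an (n, n) matrix, costing O(n^2 * d);
    here the similarity is factored as phi(Q) phi(K)^T, so the small
    (d, d_v) summary phi(K)^T V is computed once and shared by all queries.
    """
    Qf = elu_feature_map(Q)                        # (n, d)
    Kf = elu_feature_map(K)                        # (n, d)
    kv = Kf.T @ V                                  # (d, d_v) global context vectors
    z = Qf @ Kf.sum(axis=0, keepdims=True).T       # (n, 1) per-query normalizer
    return (Qf @ kv) / (z + eps)                   # (n, d_v)

# toy usage
rng = np.random.default_rng(0)
n, d = 1024, 64
Q, K, V = rng.normal(size=(3, n, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (1024, 64)
```

For causal (autoregressive) decoding, the single global summary is replaced by a running prefix sum of phi(K)^T V, which is what gives linear attention its RNN-like recurrent form.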

By AI-Talk
