


In this episode, we explore the fascinating world of Transformers. Imagine it's the early days of AI, with RNNs and LSTMs doing the heavy lifting but struggling with long-range dependencies like forgetful grandparents. Enter the Transformer model, a revolutionary architecture introduced in 2017 by Google's "Attention Is All You Need" paper. Transformers handle long-range dependencies and process data in parallel, making them incredibly efficient. We'll break down their key components, like self-attention, positional encoding, and multi-head attention, and show how they transformed the AI landscape. Tune in to discover why Transformers are the shiny new sports car of AI models.
Connect with Emily Laird on LinkedIn
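As a companion to the episode, here is a minimal NumPy sketch of the scaled dot-product self-attention mechanism the episode describes. The sequence length, embedding size, and random projection weights below are illustrative assumptions standing in for learned parameters, not values from the paper.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of embeddings.

    X: (seq_len, d_model) array of token embeddings.
    The random W_q, W_k, W_v matrices are illustrative stand-ins
    for the learned query/key/value projections.
    """
    d_model = X.shape[1]
    rng = np.random.default_rng(0)
    W_q = rng.normal(size=(d_model, d_model))
    W_k = rng.normal(size=(d_model, d_model))
    W_v = rng.normal(size=(d_model, d_model))

    Q, K, V = X @ W_q, X @ W_k, X @ W_v

    # Every token attends to every other token at once; this parallel,
    # all-pairs comparison is how Transformers capture long-range
    # dependencies without stepping through the sequence like an RNN.
    scores = Q @ K.T / np.sqrt(d_model)

    # Softmax over each row turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 5 tokens with 8-dimensional embeddings.
X = np.random.default_rng(1).normal(size=(5, 8))
print(self_attention(X).shape)  # (5, 8)
```

Multi-head attention, also mentioned in the episode, runs several smaller copies of this computation in parallel and concatenates their outputs, letting each head specialize in a different kind of relationship between tokens.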
By Emily Laird
