


In this episode, we explore the fascinating world of Transformers. Imagine the early days of modern AI: RNNs and LSTMs are doing the heavy lifting, but they struggle with long-range dependencies, like forgetful grandparents losing the thread of a story. Enter the Transformer, a revolutionary architecture introduced in Google's 2017 paper "Attention Is All You Need." Transformers handle long-range dependencies and process entire sequences in parallel, making them remarkably efficient. We'll break down their key components, including self-attention, positional encoding, and multi-head attention, and show how they transformed the AI landscape. Tune in to discover why Transformers are the shiny new sports car of AI models.
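For listeners who want to see the mechanics before pressing play, here is a minimal sketch of two of the ideas named above, scaled dot-product self-attention and sinusoidal positional encoding, written in plain NumPy. The sequence length, model width, random weight matrices, and the single-head simplification (the episode also covers multi-head attention) are illustrative assumptions, not anything specified in the episode.

```python
# A minimal, self-contained sketch: sinusoidal positional encoding
# plus single-head scaled dot-product self-attention. All sizes and
# weights are toy assumptions for illustration.
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position signal, as in 'Attention Is All You Need'."""
    pos = np.arange(seq_len)[:, None]        # (seq_len, 1)
    i = np.arange(d_model)[None, :]          # (1, d_model)
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    # Even dimensions use sine, odd dimensions use cosine.
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over one sequence."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])  # all-pairs token similarity
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)  # softmax over keys
    return weights @ v                         # context-weighted mix of values

# Toy run: 5 tokens, model width 8.
rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 8): every token now carries context from every other
```

Because the score matrix compares every token with every other token in one step, distant words influence each other as directly as neighbors do, which is the "long-range dependencies in parallel" point the episode makes.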
Connect with Emily Laird on LinkedIn
By Emily Laird · 4.6 (2,020 ratings)
