Hey guys, in this episode I explain most of what I know about Transformers. I talk about the architecture, the attention formula, encoder, decoder, self-supervised learning, positional encoding, tokenization, inductive bias, Vision-Transformers, receptive fields...
It was the most technical episode I've recorded so far, and I hope you like it! By the way, it's worth listening to this episode with the Transformers paper in hand.
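Since the episode walks through the attention formula, here is a minimal NumPy sketch of scaled dot-product attention as defined in the Transformers paper (Attention(Q, K, V) = softmax(QKᵀ/√d_k)V). The matrix shapes and random inputs are just illustrative.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted average of the values

# toy example: 3 tokens, head dimension d_k = 4 (illustrative values)
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # each token gets a new d_k-dimensional representation
```

Each output row is a convex combination of the value vectors, with weights determined by how well that token's query matches every key.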
Paper Transformers: https://arxiv.org/pdf/1706.03762.pdf
Link of OpenAI explaining Next Token Prediction: https://www.linkedin.com/posts/zainhas_the-most-clearest-and-crisp-explanation-ugcPost-7132561633280692224-63AX?utm_source=share&utm_medium=member_desktop
Instagram: https://www.instagram.com/podcast.lifewithai/
Linkedin: https://www.linkedin.com/company/life-with-ai