

In this episode of Generative AI 101, take an insider’s tour of a large language model (LLM). Discover how each component, from the transformer architecture and positional encoding to the multi-head attention layers and feed-forward neural networks, contributes to generating intelligent, coherent text. We’ll also explore tokenization and resource-management techniques such as mixed-precision training and model parallelism. Join us for a fascinating look at the complex, finely tuned process that powers modern AI, turning raw text into human-like responses.
Connect with Emily Laird on LinkedIn
By Emily Laird · 4.6 (2020 ratings)