In this episode of Generative AI 101, go on an insider's tour of a large language model (LLM). Discover how each component, from the transformer architecture and positional encoding to the multi-head attention layers and feed-forward neural networks, contributes to creating intelligent, coherent text. We'll explore tokenization and resource management techniques like mixed-precision training and model parallelism. Join us for a fascinating look at the complex, finely tuned process that powers modern AI, turning raw text into human-like responses.
Connect with Emily Laird on LinkedIn
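As a companion to the positional encoding mentioned in the episode, here is a minimal sketch of the sinusoidal scheme from the original transformer paper ("Attention Is All You Need"). This is an illustrative NumPy implementation, not code from the podcast; the function name and parameters are our own.

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings: one d_model-dim vector per position.

    Transformers process tokens in parallel, so these vectors are added to
    token embeddings to tell the model where each token sits in the sequence.
    """
    positions = np.arange(seq_len)[:, np.newaxis]   # shape (seq_len, 1)
    dims = np.arange(d_model)[np.newaxis, :]        # shape (1, d_model)
    # Each pair of dimensions shares a frequency 1 / 10000^(2i / d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])  # even dimensions use sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])  # odd dimensions use cosine
    return pe

pe = positional_encoding(seq_len=50, d_model=16)
print(pe.shape)  # (50, 16)
```

Because the encoding uses fixed sines and cosines, every value stays in [-1, 1] and nearby positions get similar vectors, which is what lets attention layers reason about token order.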