If you’re a premium subscriber
Add the private feed to your podcast app at https://add.lennysreads.com
You’ve probably heard terms like LLM, transformer, and hallucination, but do you really know what they mean?
In this episode, I walk through 20 of the most common AI terms with dead-simple explanations you can actually understand (and use).
In this episode, you’ll learn:
• What a “model” actually is
• The difference between pre-training, fine-tuning, and RLHF
• What transformers are—and why they changed everything
• How prompt engineering and RAG improve model outputs
• What AGI and ASI really mean
• The difference between LLMs, GenAI, and GPT
• Why models hallucinate (and how to prevent it)
• What synthetic data is—and why it matters
• How vibe coding works and what agents can actually do
• What MCP, inference, and tokens are in plain English
Referenced
• A complete guide on RLHF
• AGI vs ASI
• Andrej Karpathy on LLMs
• Andrej Karpathy on vibe coding
• Anthropic’s guide on building effective agents
• Anthropic’s guide to reducing hallucinations
• Fine-tuning vs RAG vs prompt engineering
• Guide to model context protocol (MCP)
• How LLMs work
• How fine-tuning works
• How top models tokenize words
• How training and pre-training work
• Ilya Sutskever on AGI
• Ilya Sutskever on next-word prediction
• Lenny’s Podcast on prompt engineering
• Make product management fun again with AI agents
• RLHF explainer
• Sam Altman on synthetic data
• Technical deep dive on transformers
• What are transformers?
Subscribe: YouTube | Apple | Spotify
Follow Lenny: Twitter/X | LinkedIn | Podcast
About
Welcome to Lenny’s Reads, where every week you’ll find a fresh audio version of my newsletter about building product, driving growth, and accelerating your career, read to you by the soothing voice of Lennybot.