Alec also talks through some changing recommendations in the AI space. While RAG (Retrieval-Augmented Generation) has been the go-to approach, there's a shift happening: more MVPs are starting to recommend fine-tuning models again. Alec breaks down the differences between the two methods, when each makes sense, and how the barriers to entry are getting lower thanks to new tools and samples.
He also previews some upcoming talks, including submissions to Global AI Bootcamp and content for Azure Universe. There's also a quick mention of open-source quickstart projects, LinkedIn Learning courses, and conferences such as Brian Gorman's May conference (formerly Sci-Fi DevCon).
📌 Topics in this episode:
If you're interested in AI, LLMs, or just trying things out for the first time, there's something here for you.