
In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the difference between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.
Takeaways:
Who are they?
View Scott Hanselman on LinkedIn
View Mark Russinovich on LinkedIn
Watch Scott and Mark Learn on YouTube
Listen to other episodes at scottandmarklearn.to
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Hosted on Acast. See acast.com/privacy for more information.
5.0 • 1,515 ratings