


In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the difference between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.
Takeaways:
Who are they?
View Scott Hanselman on LinkedIn
View Mark Russinovich on LinkedIn
Watch Scott and Mark Learn on YouTube
Listen to other episodes at scottandmarklearn.to
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Hosted on Acast. See acast.com/privacy for more information.
