In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the difference between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.
Takeaways:
Who are they?
View Scott Hanselman on LinkedIn
View Mark Russinovich on LinkedIn
Watch Scott and Mark Learn on YouTube
Listen to other episodes at scottandmarklearn.to
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Hosted on Acast. See acast.com/privacy for more information.
By Microsoft