


This month’s episode of In Case You Missed It gives us reasons to be cautiously optimistic about the future of large language models (LLMs), with guests discussing what to do about recent reports that found AI agents blackmailed human users when threatened, the importance of post-training LLMs, and the training we have available for data and AI engineers to create robust, secure, and useful AI. Jon Krohn includes clips from his interviews with Akshay Agrawal (Episode 911), Julien Launay (Episode 913), Michelle Yi (Episode 915), and Kirill Eremenko (Episode 917).
Additional materials: www.superdatascience.com/920
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.
By Jon Krohn
4.6 · 295 ratings
