
This month’s episode of In Case You Missed It gives us reasons to be cautiously optimistic about the future of large language models (LLMs), with guests discussing what to do about recent reports that found AI agents blackmailed human users when threatened, the importance of post-training LLMs, and the training we have available for data and AI engineers to create robust, secure, and useful AI. Jon Krohn includes clips from his interviews with Akshay Agrawal (Episode 911), Julien Launay (Episode 913), Michelle Yi (Episode 915), and Kirill Eremenko (Episode 917).
Additional materials: www.superdatascience.com/920
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.
By Jon Krohn · 4.6 (295 ratings)