
Greg Michaelson speaks to Jon Krohn about the latest developments at Zerve, an operating system for developing and delivering data and AI products, including a revolutionary feature that lets users run multiple parts of a program’s code simultaneously at no extra cost. You’ll also hear why LLMs might spell trouble for SaaS companies, Greg’s ‘good-cop, bad-cop’ routine that improves LLM responses, and how RAG (retrieval-augmented generation) can be deployed to create even more powerful AI applications.
Additional materials: www.superdatascience.com/879
This episode is brought to you by Trainium2, the latest AI chip from AWS and by the Dell AI Factory with NVIDIA.
Interested in sponsoring a SuperDataScience Podcast episode? Email [email protected] for sponsorship information.
In this episode you will learn: