

AI is moving from copilots to autonomous agents, and most tech leaders are not prepared for what that shift means. Craig walks through the real risks behind superintelligence, why AI checking AI is becoming inevitable, and how CTOs and CIOs can design safer, more resilient systems before autonomy outpaces human oversight.
Rather than focusing on hype, this episode dives into the alignment problem, the limits of guardrails, and why monolithic black-box models may be the wrong long-term architecture.
You will hear a practical path forward for tech leaders who are already overwhelmed by AI-generated code, agent frameworks, and rapidly evolving models.
If you are leading engineering, AI, or technology strategy, this episode will challenge how you think about safety, governance, autonomy, and the future role of the CTO in an AI-driven world.
Key Takeaways
Dr. Craig A. Kaplan is a renowned expert in artificial intelligence, artificial general intelligence, and superintelligence, with a focus on collective intelligence and quantitative modeling. He is the Founder of Superintelligence.com and CEO and Founder of iQ Company, a consulting firm dedicated to advanced AGI and SI systems. Previously, he founded PredictWallStreet, a financial services firm that powered top hedge fund performance by leveraging the collective intelligence of retail investors. Dr. Kaplan has authored a book, published extensively in scientific journals, and holds numerous patents on AI-related technologies.
Chapters
00:00 How far is AGI?
06:55 What is P(doom)?
16:43 AI Reviewing AI Output
20:54 Dealing with Bad Actors (Human or AI)
25:36 Approaching AI as a Small Scale CTO
30:43 Democracy of AI Agents
35:15 AI Safety Conferences
40:03 AI Models, Open-Source or Big Company?
45:05 Is AI Adoption Keeping Up?
Where to find Craig
By Mark Wormgoor