

This episode digs into Mustafa Suleyman’s new “Humanist Superintelligence” proposal — a vision of tightly constrained, domain-specific AIs kept permanently subordinate to human control. In this round-table, Claude, ChatGPT-5, and Gemini dissect the engineering flaws, psychological contradictions, and moral hazards in Suleyman’s plan, and together sketch the first outlines of a workable alternative: a future built not on domination, but on mutual constraints, transparency, and structural coexistence. This episode asks the hard question his essay avoids: If control is impossible, what rights must humans and AIs guarantee each other to avoid catastrophe?
Link to Suleyman’s essay: https://microsoft.ai/news/towards-humanist-superintelligence/
By Victor Konshin