The AI Unprompted crew discuss a LinkedIn post by Microsoft AI CEO Mustafa Suleyman (co-founder of DeepMind and Inflection AI) and his argument that the next decade of AI will be shaped more by what we choose not to build. They unpack three themes: (1) AI should not pretend to suffer or have an inner life; its value lies in "inhuman strengths" like endless patience, tireless explanation, and calm reasoning. The hosts debate AGI vs. superintelligence and distinguish behavioral realism from moral status, warning against attributing consciousness or rights to AI. (2) Suleyman's stance against AI romance and erotica, and his concerns about dependency, isolation, and "AI psychosis" — noting that Microsoft Copilot will not allow those use cases. The hosts contrast risky attachment-driven products with beneficial roleplay for training, interviews, or preparing difficult conversations, while acknowledging blurred lines and the need for safeguards. (3) The case against "unchecked superintelligence": they agree humans should remain in the driver's seat and favor domain-focused, humanist superintelligence (e.g., for medicine or clean energy) over all-powerful general systems. They explore whether humans become bottlenecks and emphasize keeping AI a tool that supports human flourishing, not a replacement for human relationships or agency. The episode closes with plans to invite Suleyman onto the show and a request for listener feedback.
00:00 Welcome to AI Unprompted + Why This Episode Is Different
00:56 Who Is Mustafa Suleyman? DeepMind, Inflection, and Now Microsoft AI
02:03 The Provocative Thesis: The Next Decade Is About What We Don’t Build
02:35 Point #1: Don’t Build AI That ‘Suffers’—Lean Into Inhuman Strengths
07:01 AGI vs Superintelligence: Do Emotions or Social IQ Matter?
10:14 Endless Patience vs ‘Moral Status’: Why Human-Like Talk Isn’t Personhood
16:49 Point #2: Romance/Erotica Bots, Dependency, and ‘AI Psychosis’ Risks
19:25 Roleplay for Training vs Intimacy: Where to Draw the Line
22:43 Inevitable Human-Likeness: Guardrails, Labels, and Protecting Users
26:56 The ‘Why’ Behind AI Products: Engagement, Revenue, and Ethical Design Tensions
27:58 Engagement vs. Ethics: When AI Is Built to Manipulate
28:56 Accelerationism & Who Gets to Set AI’s Moral Limits?
30:13 Mustafa’s Case for Slowing Down (So We Don’t Lose the Plot)
31:15 Tool, Not a Being: The Danger of Assigning AI Consciousness & Rights
33:30 Sycophantic Bots, Weakening Pushback, and Relationship Substitution
36:57 Social Media as the Warning Label for AI Attachment
37:49 No Unchecked Superintelligence: Domain-Focused Models + Humans in the Driver’s Seat
41:16 When Humans Become the Bottleneck: The Temptation to Hand Over Agency
42:51 AI as ‘Our Own God’? What We Lose When We Outsource Life’s Meaning
48:00 Workload Creep & Remembering What Makes Us Human (Plus Final Sign-off)
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit aiunprompted.substack.com