
This week on Net Society, we’re joined by special guest Jeremy Nixon to dig into what today’s AI boom actually is and what it is not. The conversation opens with Jeremy’s path through early autonomy and self-driving, and why that era made it impossible to dismiss machine intelligence as hype. From there, the group zooms out into bigger questions about intelligence itself, contrasting “alien” intelligence with collective intelligence, and treating LLMs less like minds and more like powerful simulators. The episode then moves into creativity, measurement, and the real constraint on progress, which is not generating ideas but selecting and validating them. In the second half, the discussion turns to how LLMs were built, why major labs and incumbents made different bets, and what that says about institutional risk and ambition. The episode closes with a sharp look at AI apocalypse culture, the moral frameworks that grew around it, and how open models, game theory, and product reality collide with the temptation to turn AI into a new kind of religion.
Mentioned in the episode
Special Guest Jeremy Nixon https://x.com/JvNixon
AGI House https://x.com/agihousesf
Thiel on Progress and Stagnation https://www.lesswrong.com/posts/Xqcorq5EyJBpZcCrN/thiel-on-progress-and-stagnation
Show & Hosts
Net Society: https://x.com/net__society
Aaron Wright: https://x.com/awrigh01
Chris F: https://x.com/ChrisF_0x
Derek Edwards: https://x.com/derekedws
Priyanka Desai: https://x.com/pridesai
Production & Marketing
Editor: https://x.com/0xFnkl
Social: https://x.com/v_kirra
