In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.
00:00 Introduction
01:55 Followup to my MLST reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Naval Ravikant
46:57 Pascal's Scam
54:10 Martin Casado and SB 1047
01:12:26 Final Thoughts
Links referenced in the episode:
* Eliezer Yudkowsky’s interview on the Logan Bartlett Show. Highly recommended: https://www.youtube.com/watch?v=_8q9bjNHeSo
* Double Crux, the core rationalist technique I use when I’m “debating”: https://www.lesswrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-mutual-understanding
* The problem with arguing “by definition”, a classic LessWrong post: https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition
Twitter people referenced:
* Amjad Masad: https://x.com/amasad
* Eliezer Yudkowsky: https://x.com/esyudkowsky
* Helen Toner: https://x.com/hlntnr
* Roon: https://x.com/tszzl
* Lee Cronin: https://x.com/leecronin
* Naval Ravikant: https://x.com/naval
* Geoffrey Miller: https://x.com/primalpoly
* Martin Casado: https://x.com/martin_casado
* Yoshua Bengio: https://x.com/yoshua_bengio
* Your boy: https://x.com/liron
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates. Thanks for watching.
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?
Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!
00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Optimization
01:23:57 Collective Intelligence
01:25:54 The Argument Against LLMs' Reasoning
01:49:29 The Swiss Cheese Metaphor for AI Knowledge
02:02:37 Final Thoughts
Original source: https://www.youtube.com/watch?v=nO6sDk6vO0g
Follow Machine Learning Street Talk: https://www.youtube.com/@MachineLearningStreetTalk
Zvi Mowshowitz's authoritative GPT-o1 post: https://thezvi.wordpress.com/2024/09/16/gpt-4o1/
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day.
Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour to see where he stands on AI extinction risk.
00:00 Introduction
04:30 Defining AI vs. non-AI
20:43 AI and Language Mastery
29:37 AI's Potential for Manipulation
31:30 Information is Connection?
37:48 AI and Job Displacement
48:22 Consciousness vs. Intelligence
52:02 The Alignment Problem
59:33 Final Thoughts
Source podcast: https://www.youtube.com/watch?v=78YN1e8UXdM
Follow Yuval Noah Harari: x.com/harari_yuval
Follow Steven Bartlett, host of Diary of a CEO: x.com/StevenBartlett
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂
The full episode is called “AI: The Future of Education?”
While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.
Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!
This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!
00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12:24 The Challenge of Defining Human Values for AI
18:04 Risks of Superintelligent AI and Potential Solutions
33:41 The Case for Narrow AI
45:21 The Concept of Utopia
48:33 AI's Utility Function and Human Values
55:48 Challenges in AI Safety Research
01:05:23 Breeding Program Proposal
01:14:05 The Reality of AI Regulation
01:18:04 Concluding Thoughts
01:23:19 Celebration of Life
This episode on For Humanity’s channel: https://www.youtube.com/watch?v=KcjLCZcBFoQ
For Humanity on YouTube: https://www.youtube.com/@ForHumanityPodcast
For Humanity on X: https://x.com/ForHumanityPod
Buy Roman’s new book: https://www.amazon.com/Unexplainable-Unpredictable-Uncontrollable-Artificial-Intelligence/dp/103257626X
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness because the complexities of the human brain are beyond mathematical modeling.
Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.
He’s also a devout Christian, which makes our clash of perspectives funnier.
00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI
12:56 Complex Systems and Thermodynamics
33:40 Transhumanism and Genetic Engineering
47:48 Materialism
49:35 Transhumanism as Neo-Paganism
01:02:38 AI in Warfare
01:11:55 Is This Science?
01:25:46 Conclusion
Source podcast: https://www.youtube.com/watch?v=xrlT1LQSyNU
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.
Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve and the often misleading promises made by companies and researchers.
I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintelligent god.
00:00 Introduction
01:21 Arvind’s Perspective on AI
02:07 Debating AI's Compute and Performance
03:59 Synthetic Data vs. Real Data
05:59 The Role of Compute in AI Advancement
07:30 Challenges in AI Predictions
26:30 AI in Organizations and Tacit Knowledge
33:32 The Future of AI: Exponential Growth or Plateau?
36:26 Relevance of Benchmarks
39:02 AGI
40:59 Historical Predictions
46:28 OpenAI vs. Anthropic
52:13 Regulating AI
56:12 AI as a Weapon
01:02:43 Sci-Fi
01:07:28 Conclusion
Original source: https://www.youtube.com/watch?v=8CvjVAyB4O4
Follow Arvind Narayanan: x.com/random_walker
Follow Harry Stebbings: x.com/HarryStebbings
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.
Bret gets off to a promising start, saying that AI risk should be “top of mind” and that AI poses “five existential threats”. But his analysis is shallow and ad hoc, and it ends with him dismissing regulation as a tool to save our species from a recognized existential threat.
I believe we can raise the level of AI doom discourse by calling out these kinds of basic flaws in popular media on the subject.
00:00 Introduction
02:02 Existential Threats from AI
03:32 The Paperclip Problem
04:53 Moral Implications of Ending Suffering
06:31 Inner vs. Outer Alignment
08:41 AI as a Tool for Malicious Actors
10:31 Attack vs. Defense in AI
18:12 The Event Horizon of AI
21:42 Is Language More Prime Than Intelligence?
38:38 AI and the Danger of Echo Chambers
46:59 AI Regulation
51:03 Mechanistic Interpretability
56:52 Final Thoughts
Original source: https://www.youtube.com/watch?v=_cFu-b5lTMU
Follow Bret Weinstein: x.com/BretWeinstein
Follow Steven Bartlett: x.com/StevenBartlett
Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching.
California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.
Today’s debate:
Holly Elmore, Executive Director of PauseAI US, representing the Pro-SB 1047 side
Greg Tanaka, Palo Alto City Councilmember, representing the Anti-SB 1047 side
Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.
Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz
Links
Greg mentioned that the "Supporters & Opponents" tab on this page lists organizations that registered their support or opposition. The vast majority of organizations listed there registered opposition to the bill: https://digitaldemocracy.calmatters.org/bills/ca_202320240sb1047
Holly mentioned surveys of California voters showing popular support for the bill:
1. Center for AI Safety survey shows 77% support: https://drive.google.com/file/d/1wmvstgKo0kozd3tShPagDr1k0uAuzdDM/view
2. Future of Life Institute survey shows 59% support: https://futureoflife.org/ai-policy/poll-shows-popularity-of-ca-sb1047/
Follow Holly: x.com/ilex_ulmus
Follow Greg: x.com/GregTanaka
Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for watching.
Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.
I challenge David's optimistic stance that superintelligent AI will inherently align with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.
00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:57 Prophecy and Faith-Based Assertions
22:47 AI Coexistence with Humanity
35:17 Does Curiosity Make AI Value Humans?
38:56 Instrumental Convergence and AI's Goals
46:14 The Fermi Paradox and AI's Expansion
51:51 The Future of Human and AI Coexistence
01:04:56 Concluding Thoughts
Join the conversation on DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of extinction. Thanks for listening.