The Retort AI Podcast
By Thomas Krendl Gilbert and Nathan Lambert
4.7 (99 ratings)
The podcast currently has 34 episodes available.
Tom and Nate catch up on recent events (before the OpenAI o1 release) and opportunities in transparency/policy. We recap the legendary scam of "Matt from the IT department," why disclosing the outcomes of a process is not enough, and more. This is a great episode on understanding why the process a technology was birthed from is just as important as the outcome!
Some links:
* Nathan's post on Model Specs for regulation https://www.interconnects.ai/p/a-post-training-approach-to-ai-regulation
* Nathan's post on inference spend https://www.interconnects.ai/p/openai-strawberry-and-inference-scaling-laws
Send your questions to mail at retortai dot com
Tom and Nate catch up on core themes of AI after a somewhat unintended summer break. We discuss the moral groundings and philosophy of what we're building, our travels, The Anxious Generation, AGI obsessions, an update on AI Ethics vs. AI Safety, and plenty more in between.
As always, contact us at [email protected]
Some links we mention in the episode:
* The Emotional Dog and its Rational Tail https://motherjones.com/wp-content/uploads/emotional_dog_and_rational_tail.pdf
* The Anxious Generation https://www.amazon.com/Anxious-Generation-Rewiring-Childhood-Epidemic/dp/0593655036
* Shadow Lake Lodge https://shadowlakelodge.com/
* Recent Dwarkesh Podcast https://www.dwarkeshpatel.com/p/joe-carlsmith
Tom and Nate catch up on the rapidly evolving (and political) space of AI regulation. We cover CA SB 1047, recent policing of data scraping, presidential appointees, antitrust intention vs. implementation, FLOP thresholds, and everything else touching the future of large ML models.
Nate's internet cut out, so this episode ends a little abruptly. Reach out with any questions to mail at retortai.com
Some links:
- Night Comes to the Cumberlands https://en.wikipedia.org/wiki/Night_Comes_to_the_Cumberlands
- Hillbilly Elegy https://en.wikipedia.org/wiki/Hillbilly_Elegy
- wired piece on data https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/
- nate's recent piece on AI regulation https://www.interconnects.ai/p/sb-1047-and-open-weights
00:00 Intro
01:19 Training Data and the Media
03:43 Norms, Power, and the Limits of Regulation
08:52 OpenAI's Business Model
12:33 Antitrust: The Essential Tool for Governing AI
17:11 Users as Afterthoughts
20:07 Depoliticizing AI
26:14 "Breaking Bad" & the AI Parallel
28:11 The "Little Tech" Agenda
31:03 Reframing the Narrative of Big Tech
32:20 "The Lean Startup" & AI's Uncertainty
Tom and Nate revisit one of their old ideas -- AI through the lens of public health infrastructure, and especially alignment. Sorry about Tom's glitchy audio; I figured out after the fact that he was talking into the microphone at the wrong angle. Regardless, here are some links for this week. Links:
- Data foundry for AI https://scale.com/blog/scale-ai-series-f
- Information piece on Scale AI ($) https://www.theinformation.com/articles/why-a-14-billion-startup-is-now-hiring-phds-to-train-ai-from-their-living-rooms?shared=168f685a864ca709
- ChatGPT compounding math: https://chatgpt.com/share/2c19a357-acb2-441d-8203-946b74ce785c
contact us at mail at retortai dot com
00:00 Intro
00:39 Chicago's Tech Scene and "The Bear"
01:22 AI and Public Health: A New Framework
08:17 Lessons for AI from Sanitation Infrastructure
12:58 The Mental Health Impact of Generative AI
23:28 Aligning AI with Diverse Societal Values
27:06 Power Dynamics in AI's Development
33:02 The Need for a Neutral AI Research Body (NAIRR)
36:57 New Regulations for a New Era of AI
41:05 Outro: Join the Conversation
Tom and Nate caught up last week (sorry for the editing delay) on the big two views of the AI future: Apple Intelligence and Situational Awareness (nationalistic AI doom prevention). One of our best episodes. Here are the links:
* The Kekulé Problem https://en.wikipedia.org/wiki/The_Kekul%C3%A9_Problem
* Truth and Method https://en.wikipedia.org/wiki/Truth_and_Method
* Situational Awareness https://situational-awareness.ai/
00:00 A Hypothetical Life: From Germany to AGI
01:20 Leopold Aschenbrenner: Situational Awareness and Extrapolation
02:01 The Retort: Apple vs. Doomsday AI
03:40 Credentials and Social Choice Theory
05:14 Dissecting "Situational Awareness": Hype vs. Reality
07:16 The Limits of Language Models: Are They Really Intelligent?
11:04 Apple's Vision: AI for Consumers, Not Conquerors
13:53 Silicon Valley Myopia and the Geopolitics of AI
18:25 Beyond Benchmarks: The Scientist vs. The Engineer
22:04 What is Intelligence? The Narrowness of Human Fixation
24:32 A Growing Disrespect for Language?
27:40 The Power of Talking to Language Models
32:50 Language: Representation or Revelation?
38:54 The Future of Meaning: Will AI Obliterate Art?
45:32 A Vision for AI as Public Infrastructure
Tom and Nate catch up on many recent AI policy happenings. California's "anti open source" 1047 bill, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy.
00:00 Murky waters in AI policy
00:33 The Senate AI Roadmap
05:14 The Executive Branch Takes the Lead
08:33 California's Senate AI Bill
22:22 OpenAI's Two Audiences
28:53 The Problem with OpenAI Model Spec
39:50 A New World of AI Regulation
A bunch of links...
Data and society whitepaper: https://static1.squarespace.com/static/66465fcd83d1881b974fe099/t/664b866c9524f174acd7931c/1716225644575/24.05.18+-+AI+Shadow+Report+V4.pdf
https://senateshadowreport.com/
California bill
https://www.hyperdimensional.co/p/california-senate-passes-sb-1047
https://legiscan.com/CA/text/SB1047/id/2999979
Data walls
https://www.interconnects.ai/p/the-data-wall
Interconnects Merch
https://interconnects.myshopify.com/
Tom and Nate discuss two major OpenAI happenings in the last week. The popular one: the chat assistant, and what it reveals about OpenAI's worldview. We pair this with discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html
This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.
00:00 Guess the Donkey Kong Character
00:50 OpenAI's New AI Girlfriend
07:08 OpenAI's Business Model and Responsible AI
08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
12:48 OpenAI and the Mystery Box
19:10 The Blurring Boundaries of Intimacy and Technology
22:05 Rousseau's Discourse on Inequality and the Impact of Technology
26:16 OpenAI's Model Spec and Its Objectives
30:10 The Unintelligibility of "Benefiting Humanity"
37:01 The Chain of Command and the Paradox of AI Love
45:46 The Form and Content of OpenAI's Model Spec
48:51 The Future of AI and Societal Disruptions
Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence.
Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520
00:00 Introduction and Cryptozoologists
02:00 DC and the National AI Research Resource (NAIRR)
05:34 The Three Legs of the AI World: Silicon Valley, New York, and DC
11:00 The AI Safety vs. Ethics Debate
13:42 The Rise of the Third Entity: The Government's Role in AI
19:42 New York's Influence and the Power of Narrative
29:36 Silicon Valley's Insularity and the Need for Regulation
36:50 The Amazon Antitrust Paradox and the Shifting Landscape
48:20 The Energy Conundrum and the Need for Policy Solutions
56:34 Conclusion: Finding Common Ground and Building a Better Future for AI
Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week.
Links:
Dwarkesh interview with Zuck https://www.dwarkeshpatel.com/p/mark-zuckerberg
Capuchin monkey https://en.wikipedia.org/wiki/Capuchin_monkey
00:00 Introductions & advice from a wolf
00:45 Llama 3
07:15 Resources and investment required for large language models
14:10 What it means to be a leader in the rapidly evolving AI landscape
22:07 How much of AI progress is driven by stories vs resources
29:41 Critiquing the concept of Artificial General Intelligence (AGI)
38:10 Misappropriation of the term AGI by tech leaders
42:09 The future of open models and AI development
Tom and Nate catch up after a few weeks off the pod. We discuss what it means for the pace and size of open model releases to keep growing. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
00:00 Introduction
01:16 Recent developments in open model releases
04:21 Tom's experience viewing the total solar eclipse
09:38 The Three-Body Problem book and Netflix
14:06 The Gartner Hype Cycle
22:51 Infrastructure constraints on scaling AI
28:47 Metaphors and narratives around AI risk
34:43 Rethinking AI risk as public health problems
37:37 The "one-way door" nature of releasing open model weights
44:04 The relationship between the AI ecosystem and the models
48:24 Wrapping up the discussion in the "trough of disillusionment"
We've got some links for you again:
- Gartner hype cycle https://en.wikipedia.org/wiki/Gartner_hype_cycle
- MSFT Supercomputer https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
- Safety is about systems https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
- Earth day history https://www.earthday.org/history/
- For our loyal listeners http://tudorsbiscuitworld.com/