By Nathan Lambert
Full post:
https://www.interconnects.ai/p/olmo-2-and-building-language-model-training
OLMo 2 demo: https://playground.allenai.org/
OLMo 2 artifacts: https://huggingface.co/collections/allenai/olmo-2-674117b93ab84e98afc72edc
Chapters
00:00 Building AI Teams
06:35 OLMo 2
Figures
Fig 1, pretrain plot: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain.webp
Fig 2, pretrain table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/pretrain-table.webp
Fig 3, post-train table: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/olmo2/postrain-table.webp
Original post: https://www.interconnects.ai/p/tulu-3
Chapters
00:00 History
05:44 Technical details sneak peek
Figures
Fig 1, results: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/tulu3-img/results.webp
Fig 2, overview: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/tulu3-img/overview.webp
Fig 3, preferences: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/tulu3-img/preferences.webp
Fig 4, RLVR: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/tulu3-img/rlvr.webp
Original post: https://www.interconnects.ai/p/scaling-realities
Original post: https://www.interconnects.ai/p/saving-the-nairr
Chapters
05:26 Do we need an AI research resource or an LM research resource?
08:59 Policy roundups
Tim Dettmers does not need an introduction for most people building open-source AI. If you are part of that minority, you’re in for a treat. Tim is the lead developer behind most of the open-source tools for quantization: QLoRA, bitsandbytes, 4- and 8-bit inference, and plenty more. He recently finished his Ph.D. at the University of Washington, is now a researcher at the Allen Institute for AI, and is starting as a professor at Carnegie Mellon University in the fall of 2025.
Tim is a joy to talk to. He thinks independently on all the AI issues of today, bringing new perspectives that challenge the status quo. At the same time, he’s sincere and very helpful to work with, working hard to uplift those around him and the academic community. There’s a reason he’s so loved in the open-source AI community.
Find more about Tim on his Twitter or Google Scholar. He also has a great blog where he talks about things like which GPUs to buy and which grad school to choose.
Listen on Apple Podcasts, Spotify, YouTube, and wherever you get your podcasts. For other Interconnects interviews, go here.
Show Notes
Companies, people, projects, research papers, and other key named entities mentioned in the transcript:
* QLoRA
* bitsandbytes
* Llama 3
* Apple Intelligence
* SWE Bench
* RewardBench
* Claude (AI assistant by Anthropic)
* Transformers (Hugging Face library)
* Gemma (Google's open weight language model)
* Notebook LM
* LangChain
* LangGraph
* Weights & Biases
* Blackwell (NVIDIA GPU architecture)
* Perplexity
* Branch Train Merge (research paper)
* "ResNets do iterative refinement on features" (research paper)
* CIFAR-10 and CIFAR-100 (computer vision datasets)
* Lottery Ticket Hypothesis (research paper)
* OpenAI O1
* TRL (Transformer Reinforcement Learning) by Hugging Face
* Tim's work on quantization
Timestamps
* [00:00:00] Introduction and background on Tim Dettmers
* [00:01:53] Future of open source AI models
* [00:09:44] SWE Bench and evaluating AI systems
* [00:13:33] Using AI for coding, writing, and thinking
* [00:16:09] Academic research with limited compute
* [00:32:13] Economic impact of AI
* [00:36:49] User experience with different AI models
* [00:39:42] O1 models and reasoning in AI
* [00:46:27] Instruction tuning vs. RLHF and synthetic data
* [00:51:16] Model merging and optimization landscapes
* [00:55:08] Knowledge distillation and optimization dynamics
* [01:01:55] State-space models and transformer dominance
* [01:06:00] Definition and future of AI agents
* [01:09:20] The limit of quantization
Transcript and full details: https://www.interconnects.ai/p/tim-dettmers
Get Interconnects (https://www.interconnects.ai/)...
... on YouTube: https://www.youtube.com/@interconnects
... on Twitter: https://x.com/interconnectsai
... on Linkedin: https://www.linkedin.com/company/interconnects-ai
... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv
... on Apple Podcasts: https://podcasts.apple.com/us/podcast/interconnects/id1719552353
Andrew Carr is co-founder and chief scientist at Cartwheel, where he is building text-to-motion AI models and products for gaming, film, and other creative endeavors. We discuss how to keep generative AI fun and expansive: niche but powerful use cases, AI poetry, AI devices like Meta Ray-Bans, generalization to new domains like robotics, and building successful AI research cultures.
Andrew is one of my most well-read friends on the directions AI is going, so it is great to bring him in for an official conversation. He spent time at OpenAI working on Codex and at Gretel AI, and he is an editor of the TLDR AI Newsletter.
Listen on Apple Podcasts, Spotify, YouTube, and wherever you get your podcasts. For other Interconnects interviews, go here.
Show Notes
Named entities and papers mentioned in the podcast transcript:
* Codex and GitHub Copilot
* Gretel AI
* TLDR AI Newsletter
* Claude Computer Use
* Blender 3D simulator
* Common Sense Machines
* HuggingFace Simulate, Unity, Godot
* Runway ML
* Mark Chen, OpenAI Frontiers Team Lead
* Meta’s Lingua, Spirit LM, torchtitan and torchchat
* Self-Rewarding Language Models paper
* Meta Movie Gen paper
Timestamps
* [00:00] Introduction to Andrew and Cartwheel
* [07:00] Differences between Cartwheel and robotic foundation models
* [13:33] Claude computer use
* [18:45] Supervision and creativity in AI-generated content
* [23:26] Adept AI and challenges in building AI agents
* [30:56] Successful AI research culture at OpenAI and elsewhere
* [38:00] Keeping up with AI research
* [44:36] Meta Ray-Ban smart glasses and AI assistants
* [51:17] Meta's strategy with Llama and open source AI
Transcript & Full Show Notes: https://www.interconnects.ai/p/interviewing-andrew-carr
Full post:
https://www.interconnects.ai/p/why-i-build-open-language-models
How Claude's computer use works, and where OpenAI, Anthropic, and Google each have a lead over the others.
Original post: https://www.interconnects.ai/p/claudes-agency
Chapters
00:00 Claude's agentic future and the current state of the frontier models
04:43 The state of the frontier models
04:49 1. Anthropic has the best model we are accustomed to using
05:27 Google has the best small & cheap model for building automation and basic AI engineering
08:07 OpenAI has the best model for reasoning, but we don’t know how to use it
09:12 All of the laboratories have much larger models they’re figuring out how to release (and use)
10:42 Who wins?
Figures
Fig 1, Sonnet New Benchmarks: https://substackcdn.com/image/fetch/w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5d2e63ff-ac9f-4f8e-9749-9ef2b9b25b6c_1290x1290.png
Fig 2, Sonnet Old Benchmarks: https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4bccbd4d-f1c8-4a38-a474-69a3df8a4448_2048x1763.png
Get Interconnects (https://www.interconnects.ai/)...
... on YouTube: https://www.youtube.com/@interconnects
... on Twitter: https://x.com/interconnectsai
... on Linkedin: https://www.linkedin.com/company/interconnects-ai
... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv
... on Apple Podcasts: https://podcasts.apple.com/us/podcast/interconnects/id1719552353
Arvind Narayanan is a leading voice disambiguating what AI does and does not do. His work with Sayash Kapoor at AI Snake Oil is one of the few beacons of reason in an AI media ecosystem with quite a few bad apples. Arvind is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. You can learn more about Arvind and his work on his website, X, or Google Scholar.
This episode is all about figuring out what current LLMs do and don’t do. We cover AGI, agents, scaling laws, autonomous scientists, and past failings of AI (i.e., approaches that came before generative AI took off). We also briefly touch on how all of this informs AI policy, and how academics can decide what to work on to generate better outcomes for technology.
Transcript and full show notes: https://www.interconnects.ai/p/interviewing-arvind-narayanan
Chapters
* [00:00:00] Introduction
* [00:01:54] Balancing being an AI critic while recognizing AI's potential
* [00:04:57] Challenges in AI policy discussions
* [00:08:47] Open source foundation models and their risks
* [00:15:35] Personal use cases for generative AI
* [00:22:19] CORE-Bench and evaluating AI scientists
* [00:25:35] Agents and artificial general intelligence (AGI)
* [00:33:12] Scaling laws and AI progress
* [00:37:41] Applications of AI outside of tech
* [00:39:10] Career lessons in technology and AI research
* [00:41:33] Privacy concerns and AI
* [00:47:06] Legal threats and responsible research communication
* [00:50:01] Balancing scientific research and public distribution
Get Interconnects (https://www.interconnects.ai/podcast)...
... on YouTube: https://www.youtube.com/@interconnects
... on Twitter: https://x.com/interconnectsai
... on Linkedin: https://www.linkedin.com/company/interconnects-ai
... on Spotify: https://open.spotify.com/show/2UE6s7wZC4kiXYOnWRuxGv
Read the full post here: https://www.interconnects.ai/p/building-on-evaluation-quicksand
Chapters
00:00 Building on evaluation quicksand
01:26 The causes of closed evaluation silos
06:35 The challenge facing open evaluation tools
10:47 Frontiers in evaluation
11:32 New types of synthetic data contamination
13:57 Building harder evaluations
Figures
Fig 1: https://huggingface.co/datasets/natolambert/interconnects-figures/resolve/main/manual/openai-predictions.webp