Training Data
By Sequoia Capital
The podcast currently has 22 episodes available.
Can GenAI allow us to connect our imagination to what we see on our screens? Decart’s Dean Leitersdorf believes it can.
In this episode, Dean Leitersdorf breaks down how Decart is pushing the boundaries of compute in order to create AI-generated consumer experiences, from fully playable video games to immersive worlds. From achieving real-time video inference on existing hardware to building a fully vertically integrated stack, Dean explains why solving fundamental limitations rather than specific problems could lead to the next trillion-dollar company.
Hosted by: Sonya Huang and Shaun Maguire, Sequoia Capital
00:00 Introduction
03:22 About Oasis
05:25 Solving a problem vs overcoming a limitation
08:42 The role of game engines
11:15 How video real-time inference works
14:10 World model vs pixel representation
17:17 Vertical integration
34:20 Building a moat
41:35 The future of consumer entertainment
43:17 Rapid fire questions
Years before co-founding Glean, Arvind was an early Google employee who helped design the search algorithm. Today, Glean is building search and work assistants inside the enterprise, which is arguably an even harder problem. One reason enterprise search is so difficult is that each individual at a company has different permissions and access to different documents and information, so every search needs to be fully personalized. Solving this difficult ingestion and ranking problem also cracks a key problem for AI: feeding the right context into LLMs to make them useful in the enterprise. Arvind and his team are harnessing generative AI to synthesize, make connections, and turbocharge knowledge work. Hear Arvind’s vision for the kind of work we’ll do when work AI assistants reach their potential.
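To make the permissions point concrete, here is a minimal sketch of permission-aware retrieval feeding an LLM prompt. This is not Glean’s actual architecture or API; the Doc class, the scoring, and every name here are illustrative assumptions.

```python
# A minimal sketch of permission-aware retrieval-augmented generation.
# NOT Glean's actual system; all names and the scoring are illustrative.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    allowed_users: set[str]   # ACL: which users may see this document
    score: float = 0.0        # relevance score, filled in per query

def retrieve(query: str, corpus: list[Doc], user: str, k: int = 3) -> list[Doc]:
    # Enforce permissions FIRST, so ranking never sees forbidden documents.
    visible = [d for d in corpus if user in d.allowed_users]
    # Toy relevance: query-term overlap (a real system would use embeddings).
    terms = set(query.lower().split())
    for d in visible:
        d.score = len(terms & set(d.text.lower().split()))
    return sorted(visible, key=lambda d: d.score, reverse=True)[:k]

def build_prompt(query: str, docs: list[Doc]) -> str:
    # Only the querying user's permitted context ever reaches the LLM.
    context = "\n".join(f"- {d.text}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Filtering on the ACL before ranking is the design point: the model can never be prompted with content the searcher is not allowed to see.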
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
00:00 - Introduction
08:35 - Search rankings
11:30 - Retrieval-Augmented Generation
15:52 - Where enterprise search meets RAG
19:13 - How is Glean changing work?
26:08 - Agentic reasoning
31:18 - Act 2: application platform
33:36 - Developers building on Glean
35:54 - 5 years into the future
38:48 - Advice for founders
In recent years there’s been an influx of theoretical physicists into the leading AI labs. Do they have unique capabilities suited to studying large models or is it just herd behavior? To find out, we talked to our former AI Fellow (and now OpenAI researcher) Dan Roberts.
Roberts, co-author of The Principles of Deep Learning Theory, is at the forefront of research that applies the tools of theoretical physics to another type of large, complex system: deep neural networks. Dan believes that DNNs, and eventually LLMs, are interpretable in the same way a large collection of atoms is: at the system level. He also thinks the current emphasis on scaling laws will balance out with new ideas and architectures over time, as the economics of scaling approach their asymptote.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
AI Math Olympiad: Dan is on the prize committee
NotebookLM from Google Labs has become the breakout viral AI product of the year. The feature that catapulted it to viral fame is Audio Overview, which generates eerily realistic two-host podcast audio from any input you upload: a written doc, an audio or video file, even a PDF. But to describe NotebookLM as a “podcast generator” is to vastly undersell it. The real magic of the product is in offering multimodal ways to explore your own content, with context that’s surprisingly additive. A 200-page training manual can be synthesized into digestible chapters, turned into a 10-minute podcast, or both, and shared with the sales team, to cite one example. Raiza Martin and Jason Spielman join us to discuss how the magic happens, and what’s next for source-grounded AI.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
All of us as consumers have felt the magic of ChatGPT—but also the occasional errors and hallucinations that make off-the-shelf language models problematic for business use cases with no tolerance for errors. Case in point: A model deployed to help create a summary for this episode stated that Sridhar Ramaswamy previously led PyTorch at Meta. He did not. He spent years running Google’s ads business and now serves as CEO of Snowflake, which he describes as the data cloud for the AI era.
Ramaswamy discusses how smart systems design helped Snowflake build reliable “talk-to-your-data” applications that reach over 90% accuracy, compared to around 45% for solutions built directly on off-the-shelf LLMs. He describes Snowflake’s commitment to making reliable AI simple for its customers, turning complex software engineering projects into straightforward tasks.
Finally, he stresses that even as frontier models progress, there is significant value to be unlocked from current models by applying them more effectively across various domains.
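As one illustration of what that kind of systems design can mean in practice, here is a generic sketch of hardening a text-to-SQL pipeline with validation, a dry run, and error-driven retries. This is not how Snowflake’s Cortex Analyst works internally; generate_sql stands in for an LLM call, and all names are hypothetical.

```python
# A generic sketch of guarding text-to-SQL with validation, a dry run,
# and error-driven retries. NOT Cortex Analyst's internals; generate_sql
# is a hypothetical stand-in for an LLM call.
import sqlite3

def safe_sql_answer(generate_sql, question: str, conn: sqlite3.Connection,
                    max_attempts: int = 3):
    """generate_sql(question, last_error) -> SQL text, e.g. from an LLM."""
    last_error = None
    for _ in range(max_attempts):
        sql = generate_sql(question, last_error)
        # Guardrail: only read-only queries are ever executed.
        if not sql.lstrip().lower().startswith("select"):
            last_error = "only SELECT statements are allowed"
            continue
        try:
            # EXPLAIN is a cheap dry run that catches syntax and schema
            # errors without running the full query.
            conn.execute(f"EXPLAIN {sql}")
            return conn.execute(sql).fetchall()
        except sqlite3.Error as err:
            last_error = str(err)   # feed the error back into the retry
    return None                      # refuse to answer rather than guess
```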
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
Cortex Analyst: Snowflake’s talk-to-your-data API
Document AI: Snowflake feature that extracts structured information from documents
Combining LLMs with AlphaGo-style deep reinforcement learning has been a holy grail for many leading AI labs, and with o1 (aka Strawberry) we are seeing the most general merging of the two modes to date. o1 is admittedly better at math than essay writing, but it has already achieved SOTA on a number of math, coding and reasoning benchmarks.
Deep RL legend and now OpenAI researcher Noam Brown and teammates Ilge Akkaya and Hunter Lightman discuss the ah-ha moments on the way to the release of o1, how it uses chains of thought and backtracking to think through problems, the discovery of strong test-time compute scaling laws and what to expect as the model gets better.
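The test-time compute idea they describe can be caricatured in a few lines: sample many candidate chains of thought and let a verifier pick the best one. The sketch below is purely schematic and not OpenAI’s method; propose() and verify() are hypothetical stand-ins for model calls.

```python
# A toy caricature of test-time compute scaling: sample many candidate
# chains of thought, keep the one a verifier scores highest.
# Schematic only; NOT OpenAI's actual method.
def solve(problem: str, propose, verify, samples: int = 16):
    # propose(problem) -> (chain_of_thought, answer), stand-in for a model
    # verify(problem, candidate) -> score in [0, 1], stand-in for a verifier
    candidates = [propose(problem) for _ in range(samples)]
    return max(candidates, key=lambda c: verify(problem, c))

# Doubling `samples` doubles inference-time compute; the scaling-law
# observation is that accuracy keeps improving as this budget grows.
```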
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
00:00 - Introduction
01:33 - Conviction in o1
04:24 - How o1 works
05:04 - What is reasoning?
07:02 - Lessons from gameplay
09:14 - Generation vs verification
10:31 - What is surprising about o1 so far
11:37 - The trough of disillusionment
14:03 - Applying deep RL
14:45 - o1’s AlphaGo moment?
17:38 - A-ha moments
21:10 - Why is o1 good at STEM?
24:10 - Capabilities vs usefulness
25:29 - Defining AGI
26:13 - The importance of reasoning
28:39 - Chain of thought
30:41 - Implication of inference-time scaling laws
35:10 - Bottlenecks to scaling test-time compute
38:46 - Biggest misunderstanding about o1?
41:13 - o1-mini
42:15 - How should founders think about o1?
Adding code to LLM training data is a known method of improving a model’s reasoning skills. But wouldn’t math, the basis of all reasoning, be even better? Up until recently, there just wasn’t enough usable data that describes mathematics to make this feasible.
A few years ago, Vlad Tenev (also founder of Robinhood) and Tudor Achim noticed the rise of a community around an esoteric programming language called Lean that was gaining traction among mathematicians. The combination of that and the past decade’s rise of autoregressive models capable of fast, flexible learning convinced them the time was now, and they founded Harmonic. Their mission is both lofty (mathematical superintelligence) and eminently practical (verifying all safety-critical software).
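For a taste of why Lean matters here, below are two toy theorems in Lean 4; the checker accepts a proof only if every step is airtight, which is what makes formal proofs usable as verified training data. (The theorem names are fresh, to avoid clashing with Lean’s own lemmas.)

```lean
-- n + 0 = n holds by definition, since Nat.add recurses on its second argument.
theorem my_add_zero (n : Nat) : n + 0 = n := rfl

-- 0 + n is not definitional; it needs induction on n.
theorem my_zero_add (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```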
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
00:00 - Introduction
01:42 - Math is reasoning
06:16 - Studying with the world's greatest living mathematician
10:18 - What does the math community think of AI math?
15:11 - Recursive self-improvement
18:31 - What is Lean?
21:05 - Why now?
22:46 - Synthetic data is the fuel for the model
27:29 - How fast will your model get better?
29:45 - Exploring the frontiers of human knowledge
34:11 - Lightning round
AI researcher Jim Fan has had a charmed career. He was OpenAI’s first intern before he did his PhD at Stanford with “godmother of AI,” Fei-Fei Li. He graduated into a research scientist position at Nvidia and now leads its Embodied AI “GEAR” group. The lab’s current work spans foundation models for humanoid robots to agents for virtual worlds.
Jim describes a three-pronged data strategy for robotics, combining internet-scale data, simulation data and real world robot data. He believes that in the next few years it will be possible to create a “foundation agent” that can generalize across skills, embodiments and realities—both physical and virtual. He also supports Jensen Huang’s idea that “Everything that moves will eventually be autonomous.”
Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital
00:00 Introduction
01:35 Jim’s journey to embodied intelligence
04:53 The GEAR Group
07:32 Three kinds of data for robotics
10:32 A GPT-3 moment for robotics
16:05 Choosing the humanoid robot form factor
19:37 Specialized generalists
21:59 GR00T gets its own chip
23:35 Eureka and Isaac Sim
25:23 Why now for robotics?
28:53 Exploring virtual worlds
36:28 Implications for games
39:13 Is the virtual world in service of the physical world?
42:10 Alternative architectures to Transformers
44:15 Lightning round
There’s a new archetype in Silicon Valley: the AI researcher turned founder. Instead of tinkering in a garage, they write papers that earn them the right to collaborate with cutting-edge labs until they break out and start their own companies.
This is the story of wunderkind Eric Steinberger, the founder and CEO of Magic.dev. Eric came to programming through his obsession with AI and caught the attention of DeepMind researchers while still in high school. In 2022 he realized that AGI was closer than he had previously thought, and he started Magic to automate the software engineering necessary to get there. Among his counterintuitive ideas: that it is still worth training proprietary large models, that value will not accrue in the application layer, and that the best agents will manage themselves. Eric also talks about Magic’s recent 100M token context window model and the HashHop eval they’re open-sourcing.
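Magic has open-sourced the real HashHop eval; the sketch below is only our hedged reading of the core idea, with an invented prompt format. The context is stuffed with random hash pairs, and the model must follow a chain of hops by pure recall: the hashes are incompressible, so there are no semantic shortcuts.

```python
# A hedged sketch of a HashHop-style long-context eval. The gist only;
# the real eval is Magic's open-source release, and this format is invented.
import secrets, random

def make_hashhop_prompt(num_pairs: int = 1000, hops: int = 2):
    new_hash = lambda: secrets.token_hex(8)        # random, meaningless keys
    chain = [new_hash() for _ in range(hops + 1)]  # the chain to follow
    pairs = list(zip(chain, chain[1:]))            # its consecutive links
    pairs += [(new_hash(), new_hash()) for _ in range(num_pairs - hops)]
    random.shuffle(pairs)                          # ordering must not help
    context = "\n".join(f"{a} -> {b}" for a, b in pairs)
    question = f"Start at {chain[0]} and follow {hops} hops. What is the final value?"
    return context + "\n\n" + question, chain[-1]

prompt, expected = make_hashhop_prompt()  # grade the model against `expected`
```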
Hosted by: Sonya Huang, Sequoia Capital
00:00 - Introduction
01:39 - Vienna-born wunderkind
04:56 - Working with Noam Brown
08:00 - “I can do two things. I cannot do three.”
10:37 - AGI to-do list
13:27 - Advice for young researchers
20:35 - Reading every paper voraciously
23:06 - The army of Noams
26:46 - The leaps still needed in research
29:59 - What is Magic?
36:12 - Competing against the 800-pound gorillas
38:21 - Ideal team size for researchers
40:10 - AI that feels like a colleague
44:30 - Lightning round
47:50 - Bonus round: 200M token context announcement
On Training Data, we learn from innovators pushing forward the frontier of AI’s capabilities. Today we’re bringing you something different: the story of a company currently implementing AI at scale in the enterprise, and how it grew from a bootstrapped idea in the pre-AI era into a giant with a $150 billion market cap.
It’s the Season 2 premiere of Sequoia’s other podcast, Crucible Moments, where we hear from the founders and leaders of some legendary companies about the crossroads and inflection points that shaped their journeys. In this episode, you’ll hear from Fred Luddy and Frank Slootman about building and scaling ServiceNow. Listen to Crucible Moments wherever you get your podcasts or go to:
Spotify: https://open.spotify.com/show/40bWCUSan0boCn0GZJNpPn
Apple: https://podcasts.apple.com/us/podcast/crucible-moments/id1705282398
Hosted by: Roelof Botha, Sequoia Capital
Transcript: https://www.sequoiacap.com/podcast/crucible-moments-servicenow/