Longtime machine-learning researcher and University of Washington Professor Emeritus Pedro Domingos joins a16z General Partner Martin Casado to discuss the state of artificial intelligence, whether we're really on a path toward AGI, and the value of expressing unpopular opinions. It's an insightful discussion as we head into an era of mainstream AI adoption and ask big questions about how to ramp up progress and diversify research directions.
Here's an excerpt of Pedro sharing his thoughts on the increasing cost of frontier models and whether that's the right direction:
"if you believe the scaling laws hold and the scaling laws will take us to human-level intelligence, then, hey, it's worth a lot of investment. That's one part, but that may be wrong. The other part, however, is that to do that, we need exploding amounts of compute.
"If if I had to predict what's going to happen, it's that we do not need a trillion dollars to reach AGI at all. So if you spend a trillion dollars reaching AGI, this is a very bad investment."
Learn more:
The Master Algorithm
2040: A Silicon Valley Satire
The Economic Case for Generative AI and Foundation Models
Follow everyone on X:
Pedro Domingos
Martin Casado
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of AI + a16z, General Partner Anjney Midha shares his perspective on the recent collection of Nobel Prizes awarded to AI researchers in both Physics and Chemistry. He talks through how early work on neural networks in the 1980s spurred continuous advancement in the field — even through the "AI winter" — which resulted in today's extremely useful AI technologies.
Here's a sample of the discussion, in response to a question about whether we will see more high-quality research emerge from sources beyond large universities and commercial labs:
"It can be easy to conclude that the most impactful AI research still requires resources beyond the reach of most individuals or small teams. And that open source contributions, while valuable, are unlikely to match the breakthroughs from well-funded labs. I've even heard heard some dismissive folks call it cute, and undermine the value of those.
"But on the other hand, I think that you could argue that open source and individual contributions are becoming increasingly more important in AI development. I think that the democratization of AI will lead probably to more diverse and innovative applications. And I think, in particular, the reason we should expect an explosion in home scientists — folks who aren't necessarily affiliated with a top-tier academic, or for that matter, industry lab — is that as open source models get more and more accessible, the rate limiter really is on the creativity of somebody who's willing to apply the power of that model's computational ability to a novel domain. And there are just a ton of domains and combinatorial intersections of different disciplines.
"Our blind spot for traditional academia [is that] it's not particularly rewarding to veer off the publish-or-perish conference circuit. And if you're at a large industry lab and you're not contributing directly to the next model release, it's not that clear how you get rewarded. And so being an independent actually frees you up from the incentive misstructure, I think, of some of the larger labs. And if you get to leverage the millions of dollars that the Llama team spent on pre-training, applying it to data sets that nobody else has perused before, it results in pretty big breakthroughs."
Learn more:
They trained artificial neural networks using physics
They cracked the code for proteins’ amazing structures
Notable AI models by year
Follow on X:
Anjney Midha
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of AI + a16z, General Partner Anjney Midha explains the forces that lead to GPU shortages and price spikes, and how the firm mitigates these concerns for portfolio companies by supplying them with the GPUs they need through a program called Oxygen. The TL;DR version of the problem is that competition for GPU access favors large incumbents who can afford to outbid startups and commit to long contracts; when startups do buy or rent in bulk, they can be stuck with lots of GPUs and — absent training runs or ample customer demand for inference workloads — nothing to do with them.
Here is an excerpt of Anjney explaining how training versus inference workloads affect what level of resources a company needs at any given time:
"It comes down to whether the customer that's using them . . . has a use that can really optimize the efficiency of those chips. As an example, if you happen to be an image model company or a video model company and you put a long-term contract on H100s this year, and you trained and put out a really good model and a product that a lot of people want to use, even though you're not training on the best and latest cluster next year, that's OK. Because you can essentially swap out your training workloads for your inference workloads on those H100s.
"The H100s are actually incredibly powerful chips that you can run really good inference workloads on. So as long as you have customers who want to run inference of your model on your infrastructure, then you can just redirect that capacity to them and then buy new [Nvidia] Blackwells for your training runs.
"Who it becomes really tricky for is people who bought a bunch, don't have demand from their customers for inference, and therefore are stuck doing training runs on that last-generation hardware. That's a tough place to be."
Learn more:
Navigating the High Cost of GPU Compute
Chasing Silicon: The Race for GPUs
Remaking the UI for AI
Follow on X:
Anjney Midha
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of AI + a16z, Bowen Peng and Jeffrey Quesnelle of Nous Research join a16z General Partner Anjney Midha to discuss their mission to keep open source AI research alive and activate the community of independent builders. The focus is on a recent project called DisTrO, which demonstrates that it's possible to train AI models across the public internet much faster than previously thought. Nous is also behind a number of other successful open source AI projects, including the popular Hermes family of "neutral" and guardrail-free language models.
Here's an excerpt of Jeffrey explaining how DisTrO was inspired by the possibility that major open source AI providers could turn their efforts back inward:
"What if we don't get Llama 4? That's like an actual existential threat because the closed providers will continue to get better and we would be dead in the water, in a sense.
"So we asked, 'Is there any real reason we can't make Llama 4 ourselves?' And there is a real reason, which is that we don't have 20,000 H100s. . . . God willing and the creek don't rise, maybe we will one day, but we don't have that right now.
"So we said, 'But what do we have?' We have a giant activated community who's passionate about wanting to do this and would be willing to contribute their GPUs, their power, to it, if only they could . . . but we don't have the ability to activate that willingness into actual action. . . . The only way people are connected is over the internet, and so anything that isn't sharing over the internet is not gonna work.
"And so that was the initial premise: What if we don't get Llama 4? And then, what do we have that we could use to create Llama 4? And, if we can't, what are the technical problems that, if only we slayed that one technical problem, the dam of our community can now flow and actually solve the problem?"
Learn more:
DisTrO paper
Nous Research
Nous Research GitHub
Follow everyone on X:
Bowen Peng
Jeffrey Quesnelle
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of AI + a16z, Ambience cofounder and chief scientist Nikhil Buduma joins Derrick Harris to discuss the nuances of using AI models to build vertical applications (including in his space, health care), and why industry acumen is at least as important as technical expertise. Nikhil also shares his experience of having a first-row seat to key advances in AI — including the transformer architecture — which not only allowed his company to be an early adopter, but also gave him insight into the types of problems that AI could solve in the future.
Here's an excerpt of Nikhil explaining the importance of understanding your buyer:
"If you believe that the most valuable companies are going to fall out of some level of vertical integration between the app layer and the model layer, [that] this next generation of incredibly valuable companies is going to be built by founders who've spent years just obsessively becoming experts in an industry, I would recommend that someone actually know how to map out the most valuable use cases and have a clear story for how those use cases have synergistic, compounding value when you solve those problems increasingly in concert together.
"I think the founding team is going to have to have the right ML chops to actually build out the right live learning loops, build out the ML ops loops to measure and to close the gap on model quality for those use cases. [But] the model is actually just one part of solving the problem.
"You actually need to be thoughtful about the product, the design, the delivery competencies to make sure that what you build is integrated with the right sources of the enterprise data that fits into the right workflows in the right way. And you're going to have to invest heavily in the change management to make sure that customers realize the full value of what they're buying from you. That's all actually way more important than people realize."
Learn more:
Fundamentals of Deep Learning
Follow everyone on X:
Nikhil Buduma
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of AI + a16z, a16z General Partner Jennifer Li joins MotherDuck Cofounder and CEO Jordan Tigani to discuss DuckDB's spiking popularity as the era of big data wanes, as well as the applicability of SQL-based systems for AI workloads and the prospect of text-to-SQL for analyzing data.
Here's an excerpt of Jordan discussing an early win when it comes to applying generative AI to data analysis:
"Everybody forgets syntax for various SQL calls. And it's just like in coding. So there's some people that memorize . . . all of the code base, and so they don't need auto-complete. They don't need any copilot. . . . They don't need an ID; they can just type in Notepad. But for the rest of us, I think these tools are super useful. And I think we have seen that these tools have already changed how people are interacting with their data, how they're writing their SQL queries.
"One of the things that we've done . . . is we focused on improving the experience of writing queries. Something we found is actually really useful is when somebody runs a query and there's an error, we basically feed the line of the error into GPT 4 and ask it to fix it. And it turns out to be really good.
". . . It's a great way of letting you stay in the flow of writing your queries and having true interactivity."
Learn more:
Small Data SF conference
DuckDB
Follow everyone on X:
Jordan Tigani
Jennifer Li
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of the AI + a16z podcast, Black Forest Labs founders Robin Rombach, Andreas Blattmann, and Patrick Esser sit down with a16z General Partner Anjney Midha to discuss their journey from PhD researchers to Stability AI, and now to launching their own company building state-of-the-art image and video models. They also delve into the topic of openness in AI, explaining the benefits of releasing open models and sharing research findings with the field.
Learn more:
Flux
Keep the code to AI open, say two entrepreneurs
Follow everyone on X:
Robin Rombach
Andreas Blattmann
Patrick Esser
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode, a16z General Partner Vijay Pande walks us through the past two decades of applying software engineering to the life sciences — from the Folding@Home project that he launched, through AlphaFold and more. He also discusses the major opportunities for AI to transform medicine and health care, as well as some pitfalls that founders in that space need to watch out for.
Here's an excerpt of Vijay discussing how AlphaFold and other projects revolutionized biology research not just because of their algorithms, but because of how they introduced software engineering into the field:
"I think the key thing about AlphaFold that really got people excited was not just the AI part, because people have been using machine learning. And so that part was there. I think it was how fast, at least to me, an engineering approach could make a big jump in this field. Because this was a field largely addressed by academics, and academics would have a lab of maybe 20 [or] 30 people — some of the bigger ones, maybe slightly bigger. And of that, these are graduate students working on their PhDs. It's very different than having a team of professional programmers and engineers going after the problem.
"And so that jump in team ability, plus the technology, I think was very critical for the jump in results. And also, finally, I think having a company like Google say, 'You know, this is a problem we're excited about and we're interested in,' and that AI and biology is something that is an area of great interest to them . . . was a huge flag to plant."
Learn more:
a16z Bio + Health
Folding@Home
AlphaFold
Raising Health podcast
Follow everyone on X:
Vijay Pande
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of the AI + a16z podcast, a16z General Partner Anjney Midha speaks with PromptFoo founder and CEO Ian Webster about the importance of red-teaming for AI safety and security, and how bringing those capabilities to more organizations will lead to safer, more predictable generative AI applications. They also delve into lessons they learned about this during their time together as early large language model adopters at Discord, and why attempts to regulate AI should focus on applications and use cases rather than models themselves.
Here's an excerpt of Ian laying out his take on AI governance:
"The reason why I think that the future of AI safety is open source is that I think there's been a lot of high-level discussion about what AI safety is, and some of the existential threats, and all of these scenarios. But what I'm really hoping to do is focus the conversation on the here and now. Like, what are the harms and the safety and security issues that we see in the wild right now with AI? And the reality is that there's a very large set of practical security considerations that we should be thinking about.
"And the reason why I think that open source is really important here is because you have the large AI labs, which have the resources to employ specialized red teams and start to find these problems, but there are only, let's say, five big AI labs that are doing this. And the rest of us are left in the dark. So I think that it's not acceptable to just have safety in the domain of the foundation model labs, because I don't think that's an effective way to solve the real problems that we see today.
"So my stance here is that we really need open source solutions that are available to all developers and all companies and enterprises to identify and eliminate a lot of these real safety issues."
Learn more:
Securing the Black Box: OpenAI, Anthropic, and GDM Discuss
Security Founders Talk Shop About Generative AI
California's Senate Bill 1047: What You Need to Know
Follow everyone on X:
Ian Webster
Anjney Midha
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
In this episode of the AI + a16z podcast, Command Zero cofounder and CTO Dean de Beer joins a16z's Joel de la Garza and Derrick Harris to discuss the benefits of training large language models on security data, as well as the myriad factors product teams need to consider when building on LLMs.
Here's an excerpt of Dean discussing the challenges and concerns around scaling up LLMs:
"Scaling out infrastructure has a lot of limitations: the APIs you're using, tokens, inbound and outbound, the cost associated with that — the nuances of the models, if you will. And not all models are created equal, and they oftentimes are very good for specific use cases and they might not be appropriate for your use case, which is why we tend to use a lot of different models for our use cases . . .
"So your use cases will heavily determine the models that you're going to use. Very quickly, you'll find that you'll be spending more time on the adjacent technologies or infrastructure. So, memory management for models. How do you go beyond the context window for a model? How do you maintain the context of the data, when given back to the model? How do you do entity extraction so that the model understands that there are certain entities that it needs to prioritize when looking at new data? How do you leverage semantic search as something to augment the capabilities of the model and the data that you're ingesting?
"That's where we have found that we spend a lot more of our time today than on the models themselves. We have found a good combination of models that run our use cases; we augment them with those adjacent technologies."
Learn more:
The Cuckoo's Egg
1995 Citigroup hack
Follow everyone on social media:
Dean de Beer
Joel de la Garza
Derrick Harris
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.