unSILOed with Greg LaBlanc

497. Spotting The Difference Between AI Innovation and AI Snake Oil feat. Arvind Narayanan



Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable and which are the creations of hype for the benefit of the industry or the company making outsized claims?

Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Greg and Arvind discuss common misconceptions about AI, emphasizing how its capabilities are often overestimated and why it matters to distinguish predictive from generative AI. Arvind points out the ethical and practical issues of deploying AI in fields like criminal justice and HR. The two also explore the challenges of regulation, the historical context of technological hype, and the role academia can play in shaping AI's future. Finally, Arvind reflects on his previous work on Bitcoin and cryptocurrency technologies and shares insights into the complexities and future of AI and blockchain.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:

  • Deep Learning
  • Generative Artificial Intelligence
  • AISnakeOil.com | Newsletter
  • Bitcoin and Cryptocurrency Technologies | Princeton/Coursera Course

Guest Profile:

  • Faculty Profile at Princeton University
  • LinkedIn Profile
  • Wikipedia Page

His Work:

  • Amazon Author Page
  • AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
  • Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction
  • Fairness and Machine Learning: Limitations and Opportunities
  • Google Scholar Page

Episode Quotes:

What can the AI community learn from medicine about testing?

28:51: Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that the [medical] community internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, you build once and apply it to a lot of different things, right? That has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.

Replacing broken systems with AI can escalate the problem

08:36: Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse because it's got this veneer of objectivity and accuracy.

Foundation models lower costs and could shift AI research back to academia

27:22: The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive. Then, building foundation models themselves—especially if it's going to be the case that scaling is going to run out—we don't need to look for AI advances by building 1 billion models and 10 billion models; we can take the existing foundation models for granted and build on top of them. Then, I would expect that a lot of research might move back to academia. Especially the kind of research that might involve offbeat ideas. 

unSILOed with Greg LaBlanc, by Greg La Blanc

4.6 (59 ratings)

More shows like unSILOed with Greg LaBlanc:

  • EconTalk by Russ Roberts (4,200 listeners)
  • Making Sense with Sam Harris by Sam Harris (26,338 listeners)
  • a16z Podcast by Andreessen Horowitz (995 listeners)
  • The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch by Harry Stebbings (513 listeners)
  • Conversations with Tyler by Mercatus Center at George Mason University (2,382 listeners)
  • Odd Lots by Bloomberg (1,733 listeners)
  • Invest Like the Best with Patrick O'Shaughnessy by Colossus | Investing & Business Podcasts (2,291 listeners)
  • The Joe Walker Podcast by Joe Walker (120 listeners)
  • GZERO World with Ian Bremmer by GZERO Media (744 listeners)
  • Eye On The Market by Michael Cembalest (269 listeners)
  • Infinite Loops by Jim O'Shaughnessy (172 listeners)
  • Dwarkesh Podcast by Dwarkesh Patel (324 listeners)
  • Catalyst with Shayle Kann by Latitude Media (253 listeners)
  • In Good Company with Nicolai Tangen by Norges Bank Investment Management (168 listeners)
  • "Econ 102" with Noah Smith and Erik Torenberg by Turpentine (138 listeners)