unSILOed with Greg LaBlanc

497. Spotting The Difference Between AI Innovation and AI Snake Oil feat. Arvind Narayanan


Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable, and which are hype manufactured for the benefit of the industry or the companies making outsized claims?

Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Greg and Arvind discuss common misconceptions about AI technology, emphasizing how its capabilities are overestimated and why it matters to distinguish predictive from generative AI. Arvind points out the ethical and practical issues of deploying AI in fields like criminal justice and HR, and the two explore the challenges of regulation, the historical context of technological hype, and the role academia can play in shaping AI's future. Arvind also reflects on his previous work on Bitcoin and cryptocurrency technologies and shares insights into the complexities and future of AI and blockchain.

*unSILOed Podcast is produced by University FM.*

Show Links:

Recommended Resources:

  • Deep Learning
  • Generative Artificial Intelligence
  • AISnakeOil.com | Newsletter
  • Bitcoin and Cryptocurrency Technologies | Princeton/Coursera Course

Guest Profile:

  • Faculty Profile at Princeton University
  • LinkedIn Profile
  • Wikipedia Page

His Work:

  • Amazon Author Page
  • AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
  • Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction
  • Fairness and Machine Learning: Limitations and Opportunities
  • Google Scholar Page

Episode Quotes:

What can the AI community learn from medicine about testing?

28:51: Let's talk about what we can learn from medicine and what maybe we shouldn't take from them. I think that [the medical] community internalized a long time ago that the hard part of innovation is not the building, but the testing. And the AI community needs to learn that. Traditionally, in machine learning, the building was the hard part, and everybody would evaluate on the same few sets of benchmarks. And that was okay because they were mostly solving toy problems as they were building up the complexities of these technologies. Now, we're building AI systems that need to do things in the real world. And the building, especially with foundation models, [where] you build once and apply it to a lot of different things, has gotten a lot easier—not necessarily easier in terms of technical skills, but in terms of the relative amount of investment you need to put into that, as opposed to the testing—because now you have to test foundation models in a legal setting, medical setting, [and] hundreds of other settings. So that, I think, is one big lesson.

Replacing broken systems with AI can escalate the problem

08:36: Just because one system is broken doesn't mean that we should replace it with another broken system instead of trying to do the hard work of thinking about how to fix the system. And fixing it with AI is not even working because, in the hiring scenario, what's happening is that candidates are now turning to AI to apply to hundreds of positions at once. And it's clearly not solving the problem; it's only escalating the arms race. And it might be true that human decision-makers are biased; they're not very accurate. But at least, when you have a human in the loop, you're forced to confront this shittiness of the situation, right? You can't put this moral distance between yourself and what's going on, and I think that's one way in which AI could make it worse because it's got this veneer of objectivity and accuracy.

Foundation models lower costs and could shift AI research back to academia

27:22: The rise of foundation models has meant that they've kind of now become a layer on top of which you can build other things, and that is much, much less expensive than building foundation models themselves. Especially if it's going to be the case that scaling is going to run out, we don't need to look for AI advances by building 1 billion models and 10 billion models; we can take the existing foundation models for granted and build on top of them. Then, I would expect that a lot of research might move back to academia, especially the kind of research that might involve offbeat ideas.

