London Futurists

Against pausing AI research, with Pedro Domingos


Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?

Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to lots of very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?

Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".

That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.

Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Topics addressed in this episode include:

*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressures, e.g. genes and memes working at cross purposes
*) The “genie problem” (or “King Midas problem”) of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: what level of risk is acceptable, and what might the actual level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
