“Takeoff speeds presentation at Anthropic” by Tom Davidson


Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

This is a lightly edited transcript of a presentation that I (Tom Davidson) gave at Anthropic in September 2023. See also the video recording or the slides.

None of the content necessarily reflects the views of Anthropic or anyone who works there.

Summary:

  • Software progress – improvements in pre-training algorithms, data quality, prompting strategies, tooling, scaffolding, and all other sources of AI progress other than compute – has been a major driver of recent AI progress. I'd guess it has driven about half of total progress over the last 5 years.
  • When we have “AGI” (=AI that could fully automate AI R&D), the pace of software progress might increase dramatically (e.g. by a factor of ten).
  • Bottlenecks might prevent this – e.g. diminishing returns to finding software innovations, retraining new AI models from scratch, or [...]

---

Outline:

(02:19) Intro

(03:19) Software improvements have been a significant fraction of recent AI progress

(03:40) Efficiency improvements in pre-training algorithms are a significant driver of AI progress

(04:23) Post-training enhancements are significant drivers of AI progress

(05:28) Post-training enhancements can often be developed without significant computational experiments

(06:24) AGI might significantly accelerate the pace of AI software progress

(06:31) AI is beginning to accelerate AI progress

(07:16) AGI will enable abundant cognitive labour for AI R&D

(09:28) We don't know how abundant cognitive labour would affect the pace of AI progress

(12:04) AGI might enable 10X faster software progress, which would be very dramatic

(15:20) Bottlenecks might prevent AGI from significantly accelerating software progress

(15:57) Diminishing returns to finding software improvements might slow down progress

(20:39) Retraining AI models from scratch will slow down the pace of progress

(22:44) Running computationally expensive ML experiments may be a significant bottleneck to rapid software progress

(27:17) If AGI makes software progress much faster, that would be very risky

(27:37) Extremely dangerous capabilities might emerge rapidly

(29:07) Alignment solutions might break down rapidly

(30:09) It may be difficult to coordinate to slow down, if that is needed

(34:35) Labs should monitor for early warning signs of AI accelerating the pace of AI progress

(37:49) Warning sign #1: AI doubles the pace of software progress

(38:27) Warning sign #2: AI completes wide-ranging and difficult AI R&D tasks

(39:43) Labs should put protective measures in place by the time they observe these warning signs

---

First published: June 4th, 2024

Source: https://www.lesswrong.com/posts/Nsmabb9fhpLuLdtLE/takeoff-speeds-presentation-at-anthropic

---

Narrated by TYPE III AUDIO.
