This is a lightly edited transcript of a presentation that I (Tom Davidson) gave at Anthropic in September 2023. See also the video recording, or the slides.
None of the content necessarily reflects the views of Anthropic or anyone who works there.
Summary:
---
Outline:
(02:19) Intro
(03:19) Software improvements have been a significant fraction of recent AI progress
(03:40) Efficiency improvements in pre-training algorithms are a significant driver of AI progress
(04:23) Post-training enhancements are significant drivers of AI progress
(05:28) Post-training enhancements can often be developed without significant computational experiments
(06:24) AGI might significantly accelerate the pace of AI software progress
(06:31) AI is beginning to accelerate AI progress
(07:16) AGI will enable abundant cognitive labour for AI R&D
(09:28) We don't know how abundant cognitive labour would affect the pace of AI progress
(12:04) AGI might enable 10X faster software progress, which would be very dramatic
(15:20) Bottlenecks might prevent AGI from significantly accelerating software progress
(15:57) Diminishing returns to finding software improvements might slow down progress
(20:39) Retraining AI models from scratch will slow down the pace of progress
(22:44) Running computationally expensive ML experiments may be a significant bottleneck to rapid software progress
(27:17) If AGI makes software progress much faster, that would be very risky
(27:37) Extremely dangerous capabilities might emerge rapidly
(29:07) Alignment solutions might break down rapidly
(30:09) It may be difficult to coordinate to slow down, if that is needed
(34:35) Labs should measure for early warning signs of AI accelerating the pace of AI progress
(37:49) Warning sign #1: AI doubles the pace of software progress
(38:27) Warning sign #2: AI completes wide-ranging and difficult AI R&D tasks
(39:43) Labs should put protective measures in place by the time they observe these warning signs
---
First published:
Source:
Narrated by TYPE III AUDIO.