The Nonlinear Library

LW - Some Intuitions Around Short AI Timelines Based on Recent Progress by Aaron Scher



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some Intuitions Around Short AI Timelines Based on Recent Progress, published by Aaron Scher on April 11, 2023 on LessWrong.
tl;dr: I give some informal evidence and intuitions that point toward AGI coming soon, including how crazy the last year has been, beliefs of people at major AI labs, and progress on MMLU.
Intro
This post is intended to be a low-effort reference I can point people to when I say I think there is some evidence for short AI timelines. I might describe the various bits of evidence and intuitions presented here as “intuitions around short AI timelines based on recent progress” (though perhaps there are better terms). They are not a thorough model like Ajeya’s; insofar as somebody is using multiple models when putting together a timelines estimate, I think it would be unreasonable to place less than 20% or greater than 95% weight on extrapolation from current systems and recent progress.
In the spirit of being informal, you can use whatever definition of AGI you like. I mostly use AGI to refer to something like “an AI system which can do pretty much all the human cognitive tasks as well or better than humans (~99% of tasks people in 2023 do).”
Some evidence
I (Aaron) started following AI and AI existential safety around the beginning of 2022, so it's been a little over a year. Some of that time was spent catching up with advances from the past couple of years, but there have also been major new advances.
Some major advances since I've been paying attention:
- The Chinchilla paper popularized scaling laws that were already known to some.
- DALL-E and related image-generation work was impressive.
- CICERO happened, which I didn't follow closely, but it indicates we're probably going to train AIs to do all the dangerous stuff (see also Auto-GPT and Chaos-GPT, or GPT-4 getting plugins within 2 weeks of release, as more recent updates in this saga of indignity).
- ChatGPT showed how much more usable models are with RLHF (popularizing methods that had been known for a while).
- Med-PaLM got a passing score on the US medical licensing exam (there are also tons of other PaLM and Flan-PaLM results I haven't followed but which seem impressive).
- LLaMA and Alpaca took powerful capabilities from compute-efficient (and over-)training and handed them to the public.
- GPT-4 blew the competition out of the water on many benchmarks.
I probably missed a couple of big things (including projects which honorably didn't publicly push SOTA, 1, 2); the list is probably a bit out of order; and I've included some things from 2023. But man, that sure is a year of progress.
I don't expect there are all that many more years with this much progress before we hit AGI; maybe 3-12 years. More importantly, I think this ~15-month period, especially November 2022 to now, has generated a large amount of hype and investment in AI research and products. We seem to be on a path such that, in every future year before we die, there is more talent, effort, and money working on improving AI capabilities than there was in 2022. I hold some hope that major warning shots and/or regulation could change this picture; in fact, I think it's pretty likely we'll see warning shots beyond those we have already seen, but I am not too optimistic about what the response will be. As crazy as 2022 was, we should be pretty prepared for a world that gets significantly crazier. I find it hard to imagine that we could have all that many more years that look like 2022 AI-progress-wise, especially with significantly increased interest in AI.
A large portion of the public thinks AGI is near. I believe these people are mostly just thinking about how good current systems (e.g., GPT-4) are and informally extrapolating.
[image description: A Twitter poll from Lex Fridman where he asks “When will superintelligent general AI (AGI) arrive?” There are 270,00...

The Nonlinear Library, by The Nonlinear Fund