10 quick takes about AGI, published by Max H on June 20, 2023 on LessWrong.
I have a bunch of loosely related and not fully fleshed out ideas for future posts.
In the spirit of 10 reasons why lists of 10 reasons might be a winning strategy, I've written some of them up as a list of facts / claims / predictions / takes. (Some of the explanations aren't exactly "quick", but you can just read the bold and move on if you find it uninteresting or unsurprising.)
If there's interest, I might turn some of them into their own posts or expand on them in the comments here.
Computational complexity theory does not say anything practical about the bounds on AI (or human) capabilities. Results from computational complexity theory are mainly facts about the limiting behavior of deterministic, fully general solutions to parameterized problems. For example, if a problem is NP-hard (and P≠NP), that implies there is no deterministic algorithm anyone (even a superintelligence) can run that accepts arbitrary instances of the problem and finds a solution in a number of time steps polynomial in the size of the instance.
But that doesn't mean that any particular, non-parameterized instance of the problem cannot be solved some other way, e.g. by exploiting a regularity in the particular instance, or by using a heuristic, approximate, or probabilistic solution; nor does it mean that a human or AI cannot sidestep the need to solve the problem entirely.

Claims like "ideal utility maximisation is computationally intractable" or "If just one step in this plan is incomputable, the whole plan is as well." are thus somewhat misleading, or at least missing a step in the reasoning about why they are relevant as a bound on human or AI capabilities. My own suspicion is that when one attempts to repair these claims by making them more precise, it becomes clear that results from computational complexity theory are mostly irrelevant.
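As a concrete illustration (a toy example of my own, not from the original post): subset sum is NP-hard in general, but any particular instance whose target value is modest falls to a pseudo-polynomial dynamic program, regardless of how many numbers are involved. A minimal sketch in Python:

```python
import random

# Subset sum is NP-hard in general: no known algorithm handles ALL
# instances in polynomial time. But a particular instance can still be
# easy. This dynamic program runs in O(n * target) steps, which is fast
# whenever the target value is small, no matter how many numbers there are.

def subset_sum(nums: list[int], target: int) -> bool:
    """Return True if some subset of nums sums to target."""
    reachable = {0}  # subset sums reachable so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
        if target in reachable:
            return True
    return target in reachable

# A 1000-number instance of an "intractable" problem, solved almost
# instantly because this instance has exploitable structure
# (a small target relative to the worst case).
random.seed(0)
nums = [random.randint(1, 10_000) for _ in range(1000)]
print(subset_sum(nums, 123_456))
```

None of this contradicts the NP-hardness result; it just illustrates that worst-case bounds over all instances say very little about the difficulty of the particular instances an agent actually needs to solve.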
From here on, capabilities research won't fizzle out (no more AI winters). I predict that the main bottleneck on AI capabilities progress going forward will be researcher time to think up, design, implement, and run experiments. In the recent past, the compute and raw scale of AI systems were simply too small for many potential algorithmic innovations to work at all. Now that we're past that point, some non-zero fraction of new ideas that smart researchers think up and spend the time to test will "just work" at least somewhat, and these ideas will compound with other improvements in algorithms and scale. It's not quite recursive self-improvement yet, but we've reached some kind of criticality threshold on progress, which is likely to make things get weird faster than expected. My own prediction for what one aspect of this might look like is here.
Scaling laws and their implications, e.g. Chinchilla, are facts about particular architectures and training algorithms. As a perhaps non-obvious implication, I predict that future AI capabilities research progress will not be limited much by the availability of compute and/or training data. A few frames from a webcam may or may not be enough for a superintelligence to deduce general relativity, but the entire corpus of the current internet is almost certainly more than enough to train a below-human-level AI up to superhuman levels, even if the AI has to start with algorithms designed entirely by human capabilities researchers. (The fact that much of the training data was generated by humans does not bound the capabilities of systems trained on that data.)
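To make the "particular architectures" point concrete: the Chinchilla paper (Hoffmann et al., 2022) fits a parametric loss L(N, D) = E + A/N^alpha + B/D^beta for one family of dense transformers, and derives the compute-optimal split between parameters N and training tokens D under the approximation C ≈ 6ND. The sketch below uses roughly the fitted constants reported in that paper; the closed-form derivation in the comments is mine, and the FLOP budget in the example is a fictional round number.

```python
# Compute-optimal allocation under the Chinchilla parametric loss
#   L(N, D) = E + A * N**-alpha + B * D**-beta,  with C ~ 6 * N * D.
# Constants are approximately the fitted values from Hoffmann et al.
# (2022); they hold only for that architecture and training setup,
# not for AI systems in general.

E, A, B = 1.69, 406.4, 410.7
alpha, beta = 0.34, 0.28

def optimal_allocation(C: float) -> tuple[float, float]:
    """Minimize L(N, C / (6 * N)) over N, in closed form."""
    # Substituting D = C / (6N) and setting dL/dN = 0 gives:
    #   N_opt = (alpha*A / (beta*B)) ** (1 / (alpha + beta))
    #           * (C / 6) ** (beta / (alpha + beta))
    n_opt = (((alpha * A) / (beta * B)) ** (1 / (alpha + beta))
             * (C / 6) ** (beta / (alpha + beta)))
    d_opt = C / (6 * n_opt)
    return n_opt, d_opt

# Example: a 1e24 FLOP budget (fictional round number).
N, D = optimal_allocation(1e24)
print(f"params ~ {N:.3g}, tokens ~ {D:.3g}")
```

The point isn't the specific numbers; it's that the exponents and constants are empirical fits to one training setup, so the "law" moves whenever the architecture, data distribution, or optimizer does.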
"Human-level" intelligence is actually a pretty wide spectrum. Somewhat contra the classic diagram, I think that intelligence in humans spans a pretty wide range, even in absolute terms. Here, I'm using a meaning of intelligence which is roughly, the ability to re-arrang...