The Nonlinear Library: Alignment Forum

AF - There is no IQ for AI by Gabriel Alfour


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: There is no IQ for AI, published by Gabriel Alfour on November 27, 2023 on The AI Alignment Forum.
Most disagreement about AI Safety strategy and regulation stems from our inability to forecast how dangerous future systems will be. This inability means that even the best minds are operating on a vibe when discussing AI, AGI, SuperIntelligence, Godlike-AI and similar endgame scenarios. The trouble is that vibes are hard to operationalize and pin down. We don't have good processes for systematically debating vibes.
Here, I'll do my best and try to dissect one such vibe: the implicit belief in the existence of predictable intelligence thresholds that AI will reach.
This implicit belief is at the core of many disagreements, so much so that it leads to massively conflicting views in the wild. For example:
Yoshua Bengio writes an FAQ about Catastrophic Risks from Superhuman AI, and Geoffrey Hinton left Google to warn about these risks. Meanwhile, the other Godfather of AI, Yann LeCun, states that those concerns are overblown because we are "nowhere near Cat-level and Dog-level AI". This is crazy! In a sane world, we should expect technical experts to agree on technical matters, not to hold completely opposite views predicated on vague notions of the IQ level of models.
People spend a lot of time arguing over AI takeoff speeds, which are difficult to operationalize. Many of these arguments are based on a notion of the general power level of models, rather than on discrete AI capabilities. Because the general power level of models is a vibe rather than a concrete fact of reality, disagreements revolving around it cannot be resolved.
AGI means 100 different things, from talking virtual assistants in HER to OpenAI talking about "capturing the light cone of all future value in the universe". The range of possibilities that are seriously considered implies "vibes-based" models, rather than something concrete enough to encourage convergent views.
Recent efforts to mimic Biosafety Levels in AI with a typology define the highest risks of AI as "speculative". The fact that "speculative" doesn't outright say "maximally dangerous" or "existentially dangerous" also points to "vibes-based" models. The whole point of Biosafety Levels is to define containment procedures for dangerous research. The most dangerous level should be the most serious and concrete one - the risks so obvious that we should work hard to prevent them from coming into existence. As it currently stands, "speculative" means that we are not actively optimizing to reduce these risks, but are instead waltzing towards them on the off-chance that things might go fine by themselves.
A major source of confusion in all of the above examples stems from the implicit idea that there is something like an "AI IQ", and that we can notice that various thresholds are met as it keeps increasing.
People believe that they don't believe in AI having an IQ, but then they keep acting as if it existed, and condition their theory of change on AI IQ existing. This is a clear example of an alief: an intuition that is in tension with one's more reasonable beliefs. Here, I will try to make this alief salient, and drill down on why it is wrong. My hope is that after this post, it will become easier to notice whenever the AI IQ vibe surfaces and corrupts thinking. That way, when it does, it can more easily be contested.
Surely, no one believes in AI IQ?
The Vibe, Illustrated
AI IQ is not an explicitly endorsed belief. If you asked anyone about it, they would tell you that, obviously, AI doesn't have an IQ.
It is indeed a vibe.
However, when I say "it's a vibe", it should not be understood as "it is merely a vibe". Indeed, a major part of our thinking is done through vibes, even in Science. Most of the reasoning scientist...