Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #6: Agents of Change, published by Zvi on April 6, 2023 on LessWrong.
If you didn’t have any future shock over the past two months, either you weren’t paying attention to AI developments or I am very curious how you managed that.
I would not exactly call this week’s pace of events slow. It was still distinctly slower than what we saw in the previous six weeks of updates. I don’t feel zero future shock, but I feel substantially less. We have now had a few weeks to wrap our heads around GPT-4. We are adjusting to the new reality. That which blew minds a few months ago is the new normal.
The big events of last week were the FLI letter calling for a six-month pause and Eliezer Yudkowsky’s letter in Time Magazine, along with the responses to both. Additional responses to the FLI letter continue to come in, and are covered in their own section.
I didn’t have time last week to properly respond to Eliezer’s letter, so I put that post out yesterday. I’m flagging that post as important.
In terms of capabilities, things quieted down. The biggest development is that people continue to furiously do their best to turn GPT-4 into a self-directed agent. At this point, I’m happy to see people working hard at this, so we don’t have an ‘agent overhang.’ If it is this easy to do, we want everything that can possibly go wrong to go wrong as quickly as possible, while the damage is still relatively contained.
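For those who have not looked under the hood of these agent wrappers: the core pattern is just a short loop around the chat API. Here is a minimal, hypothetical sketch of that pattern, using the 2023-era openai Python library; the prompt protocol, function names, and tool stub are all illustrative, not any particular project’s actual code.

```python
# Minimal sketch of the "GPT-4 as self-directed agent" pattern.
# Assumes the 2023-era openai Python library (pre-1.0) and an
# OPENAI_API_KEY in the environment. All names here are illustrative.
import openai

SYSTEM = (
    "You are an autonomous agent. Given a goal, reply each turn with either "
    "THOUGHT: <reasoning>, ACTION: <one tool request>, or DONE: <final answer>."
)

def execute(reply: str) -> str:
    # Stub: a real agent would parse ACTION lines and call tools
    # (web search, code execution, file I/O). This is where the danger
    # lives; here it is a no-op placeholder.
    return "no-op"

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"Goal: {goal}"},
    ]
    for _ in range(max_steps):
        response = openai.ChatCompletion.create(model="gpt-4", messages=history)
        reply = response["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE:"):
            return reply[len("DONE:"):].strip()
        # Feed the result of any requested action back to the model.
        # This feedback loop is what makes the model "self-directed."
        history.append({"role": "user", "content": f"Observation: {execute(reply)}"})
    return "Step limit reached without DONE."
```

The point of the sketch is how little machinery is involved: the entire “agent” is the model plus a feedback loop and whatever tools you hand the execute step, which is why these wrappers appeared so quickly after GPT-4’s release.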
Table of Contents
I am continuing the principle of having lots of often very short sections, when I think things are worth noticing on their own.
Table of Contents. Here you go.
Executive Summary. Relative calm.
Language Models Offer Mundane Utility. The usual incremental examples.
GPT-4 Token Compression. Needs more investigation. It’s not lossless.
Your AI Not an Agent? There, I Fixed It. What could possibly go wrong?
Google vs. Microsoft Continued. Will all these agents doom Google? No.
Gemini. Google Brain and DeepMind, working together at last.
Deepfaketown and Botpocalypse Soon. Very little to report here.
Copyright Law in the Age of AI. Human contribution is required for copyright.
Fun With Image, Sound and Video Generation. Real time voice transformation.
They Took Our Jobs. If that happened to you, perhaps it was your fault.
Italy Takes a Stand. ChatGPT banned in Italy. Will others follow?
Level One Bard. Noting that Google trained Bard on ChatGPT output.
Art of the Jailbreak. Secret messages. Warning: May not stay secret.
Securing AI Systems. Claims that current AI systems could be secured.
More Than Meets The Eye. Does one need direct experience with transformers?
In Other AI News. Various other things that happened.
Quiet Speculations. A grab bag of other suggestions and theories.
Additional Responses to the FLI Letter and Proposed Pause. Patterns are clear.
Cowen versus Alexander Continued. A failure to communicate.
Warning Shots. The way we are going, we will be fortunate enough to get some.
Regulating the Use Versus the Tech. Short term regulate use. Long term? Tech.
People Are Worried About AI Killing Everyone. You don’t say?
OpenAI Announces Its Approach To and Definition of AI Safety. Short term only.
17 Reasons Why Danger From AGI Is More Serious Than Nuclear Weapons.
Reasonable NotKillEveryoneism Takes. We increasingly get them.
Bad NotKillEveryoneism Takes. These too.
Enemies of the People. As in, all the people. Some take this position.
It’s Happening. Life finds a way.
The Lighter Side. Did I tell you the one about recursive self-improvement yet?
Executive Summary
The larger structure is as per usual.
Sections #3-#18 are primarily about AI capabilities developments.
Sections #19-#28 are about the existential dangers of capabilities developments.
Sections #29-#30 are for fun to take us out.
I’d say the most important capabilities section this week is prob...