This is: AI#28: Watching and Waiting, published by Zvi on September 8, 2023 on LessWrong.
We are, as Tyler Cowen has noted, in a bit of a lull. Those of us ahead of the curve have gotten used to GPT-4 and Claude-2 and MidJourney. Functionality and integration are expanding, but at a relatively slow pace. Most people remain blissfully unaware, allowing me to try out new explanations on them tabula rasa, and many others say it was all hype. Which they will keep saying, until something forces them not to, most likely Gemini, although it is worth noting the skepticism I am seeing regarding Gemini in 2023 (only 25% for Google to have the best model by end of year) or even in 2024 (only 41% to happen even by end of next year).
I see this as part of a pattern of continuing good news. While we have a long way to go and very much face impossible problems, the discourse and Overton windows and awareness and understanding of the real problems have continuously improved in the past half year. Alignment interest and funding are growing rapidly, in and out of the major labs. Mundane utility has also steadily improved, with benefits dwarfing costs, and the mundane harms so far proving much lighter than almost anyone expected from the tech available. Capabilities are advancing at a rapid and alarming pace, but less rapidly and less alarmingly than I expected.
This week's highlights include an update on the UK taskforce and an interview with Suleyman of Inflection AI.
We're on a roll. Let's keep it up.
Even if this week's mundane utility is of, shall we say, questionable utility.
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. It's got its eye on you.
Language Models Don't Offer Mundane Utility. Google search ruined forever.
Deepfaketown and Botpocalypse Soon. I'll pass, thanks.
They Took Our Jobs. Better to not work in a biased way than not work at all?
Get Involved. Center for AI Policy and Rethink Priorities.
Introducing. Oh great, another competing subscription service.
UK Taskforce Update. Impressive team moving fast.
In Other AI News. AIs engage in deception, you say? Fooled me.
Quiet Speculations. Copyright law may be about to turn ugly.
The Quest for Sane Regulation. The full Schumer meeting list.
The Week in Audio. Suleyman on 80k, Altman, Schmidt and several others.
Rhetorical Innovation. Several more ways not to communicate.
No One Would Be So Stupid As To. Maximally autonomous DeepMind agents.
Aligning a Smarter Than Human Intelligence is Difficult. Easier to prove safety?
Twitter Community Notes Notes. Vitalik asks how it is so consistently good.
People Are Worried About AI Killing Everyone. Their worry level is slowly rising.
Other People Are Not As Worried About AI Killing Everyone. Tyler Cowen again.
The Lighter Side. Roon's got the beat.
Language Models Offer Mundane Utility
Do automatic chat moderation for Call of Duty. Given that the practical alternatives are that many games have zero chat and the others have chat filled with the most vile assembly of scum and villainy, I am less on the side of 'new dystopian hellscape' and more on the side of 'what exactly is the better alternative here.'
Monitor your employees and customers.
Rowan Cheung: Meet the new AI Coffee Shop boss. It can track how productive baristas are and how much time customers spend in the shop. We're headed into wild times.
Fofr: This is horrible in so many ways.
It's not the tool, it is how you use it. Already some companies such as JPMorgan Chase use highly toxic dystopian monitoring tools, which this lets them take to the next level. It seems highly useful to keep track of how long customers have been in the store, or whether they are repeat customers and how long they wait for orders. Tracking productivity in broad terms like orders filled is a case where too much precision and a...