Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #15: The Principle of Charity, published by Zvi on June 8, 2023 on LessWrong.
The sky is not blue. Not today, not in New York City. At least it’s now mostly white, yesterday it was orange. Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight. Life comes at you fast.
Thus, I’m going with what I have, and then mostly taking time off until this clears up. Hopefully that won’t be more than a few more days.
The Principle of Charity comes into play this week because of two posts, by people I greatly respect as thinkers and trust to want good things for the world, making arguments that are remarkably terrible. I wrote detailed responses to the arguments within, then realized that was completely missing the point, and deleted them. Instead, next week I plan to explain my model of what is going on there – I wish they’d stop doing what they are doing, and think they would be wise to stop doing it, but to the extent I am right about what is causing these outputs, I truly sympathize.
For a day we were all talking about a Vice story that sounded too good (or rather, too perfect) to be true, and then it turned out that it was indeed totally made up. Time to take stock of our epistemic procedures and do better next time.
Table of Contents
Introduction
Table of Contents
Language Models Offer Mundane Utility. Quite a lot, actually.
Language Models Don’t Offer Mundane Utility. Not with that attitude.
Deepfaketown and Botpocalypse Soon. Talk to your parents?
Fun with Image Generation. Investors, man.
Vigilance. It must be eternal, until it won’t be enough.
Introducing. Falcon the open source model, Lightspeed Grants and more.
In Other AI News. Senator asks Meta the question we all ask: Llama?
They Took Our Jobs. First they came for the copywriters.
Out of the Box Thinking. A hard to detect attack on audio activation devices.
I Was Promised Driverless Cars. You’ll get them. Eventually.
If It Sounds Too Good to be True. Guess what?
Quiet Speculations. Algorithmic improvements take center stage for a bit.
A Very Good Sentence. Including a claim it was a very bad sentence.
The Quest for Sane Regulation. Devil is in the details. UK PM steps up.
The Week in Podcasts and Other Audio and Video. Don’t look.
What Exactly is Alignment? It’s a tricky word.
Rhetorical Innovation. Several good ideas.
Have You Tried Not Doing Things That Might Kill Everyone? It’s a thought.
People Who Are Worried People Aren’t Reasoning From World Models.
People Are Worried About AI Killing Everyone. But let’s keep our cool, shall we?
Other People Are Not Worried About AI Killing Everyone. Because of reasons.
The Wit and Wisdom of Sam Altman. The little things are yours forever.
The Lighter Side. Sorry about all that.
Language Models Offer Mundane Utility
Claim that you can get GPT-4 or GPT-3.5 into some sort of ‘logic mode’ where it can play chess ‘better than the old Stockfish 8.’
Shako reports crazy amounts of GPT-4 mundane utility, similar to Google before it.
I remember using Google circa 2000. Basically as soon as I used it, I began to use it for everything. It was this indexed tool that opened up entire worlds to me. I could learn new things, discover communities, answer questions.
But even as the years went by, other than some nerds or particularly smart people, people barely used it? They didn’t integrate it into their workflow. It was baffling to me.
I remember a single instance where I asked a doctor a question, and he googled it in front of me and searched some sources, and being impressed that he was thoughtful enough to use Google to improve his problem solving on-the-fly.
In some sense my entire career was built on Google. I’m self taught in most things I know, and while I have learned from a lot of sources, the Google index tied them all together and let me ...