Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #29: Take a Deep Breath, published by Zvi on September 14, 2023 on LessWrong.
It works for the AI. Take a deep breath and work on this problem step-by-step was the strongest AI-generated custom instruction. You, a human, even have lungs and the ability to take an actual deep breath. You can also think step by step.
This week was especially friendly to such a proposal, allowing the shortest AI weekly to date and hopefully setting a new standard. It would be great to take some time for more long-term oriented posts on AI but also on things like the Jones Act, for catching my breath and, of course, some football.
And, of course, Happy New Year!
Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Take that deep breath.
Language Models Don't Offer Mundane Utility. Garbage in, garbage out.
Gary Marcus Claims LLMs Cannot Do Things GPT-4 Already Does. Indeed.
Fun With Image Generation. Where are our underlying item quality evaluators?
Deepfaketown and Botpocalypse Soon. AI girlfriends versus AI boyfriends.
Get Involved. Axios science and the new intriguing UK Gov ARIA research.
Introducing. Time AI 100 profiles 100 people more important than I am.
In Other AI News. UK taskforce assembles great team, OpenAI goes to Dublin.
Quiet Speculations. How easy or cheap is it to train another GPT-4, exactly?
The Quest for Sane Regulation. EU seems to be figuring more things out.
The Week in Audio. The fastest three minutes. A well-deserved break.
Rhetorical Innovation. If AI means we lose our liberty, don't build it.
Were We So Stupid As To? What would have happened without warnings?
Aligning a Smarter Than Human Intelligence is Difficult. Not even a jailbreak.
Can You Speak Louder Directly Into the Microphone. Everyone needs to know.
Language Models Offer Mundane Utility
Live translate and sync your lips.
What are our best prompts? The ones the AI comes up with may surprise you (paper).
Break this down is new. Weirder is 'take a deep breath.' What a concept!
Why shouldn't we let the LLMs optimize the prompts we give to other LLMs? The medium-term outcome is doubtless using LLMs to generate the prompts, using LLMs to decide which prompt is best in the situation (itself in response to an LLM-designed and tested prompt), and then implementing the resulting LLM output without a human checking first. Seems useful. Why have a giant inscrutable matrix when you can have a giant inscrutable web of instructions passed among and evaluated and optimized by different giant inscrutable matrices?
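That generate-judge-pick loop can be sketched in a few lines. To be clear, everything below is a hypothetical stand-in: `propose_prompts` and `score_prompt` would in practice each be calls to an actual model API, and the scoring here is faked for illustration.

```python
def propose_prompts(task: str, n: int = 4) -> list[str]:
    """Stand-in for an LLM asked to generate candidate prompts for a task.
    A real system would sample these from a model rather than a fixed list."""
    prefixes = [
        "Take a deep breath and work on this problem step-by-step.",
        "Let's think step by step.",
        "Break this down.",
        "Answer directly.",
    ]
    return [f"{p} Task: {task}" for p in prefixes[:n]]

def score_prompt(prompt: str) -> float:
    """Stand-in for an LLM judge scoring each candidate's downstream output.
    Here the score is faked to favor step-by-step phrasing."""
    score = 0.0
    if "step" in prompt.lower():
        score += 1.0
    if "deep breath" in prompt.lower():
        score += 0.5
    return score

def best_prompt(task: str) -> str:
    """One round of LLM-driven prompt optimization: generate, judge, pick."""
    candidates = propose_prompts(task)
    return max(candidates, key=score_prompt)

print(best_prompt("Solve 23 * 17."))
```

In a real pipeline the winning prompt would then be fed to yet another model call, with no human in the loop, which is exactly the inscrutable-web scenario described above.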
Organizing information is hard. There is no great solution to tracking limitless information in real time, only better and worse solutions that make various tradeoffs. You certainly can't do this with a unified, conceptually simple system.
My system is to accept that I am going to forget a lot of things, and to engineer various flows that together handle the highest-value stuff without anything too robust or systematic. It definitely is not great.
My long term plan is 'AI solves this.' I can think of various ways to implement such a solution that seem promising. For now, I have not seen one that crosses the threshold of good enough to be worth using, but perhaps one of you has a suggestion?
Find you the correct ~1,000 calorie order at Taco Bell.
Language Models Don't Offer Mundane Utility
Nassim Taleb continues to be on Team Stochastic Parrot.
Nassim Taleb: If a chatbot writes a complete essay from a short prompt, the entropy of the essay must be exactly that of the initial prompt, no matter the length of the final product.
If the entropy of the output > prompt, you have no control over your essay.
In the Shannon sense, with a temperature of 0, you send the prompt and the receivers will recreate the exact same message. In a broader sense, with all BS being th...