Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #58: Stargate AGI, published by Zvi on April 5, 2024 on LessWrong.
Another round? Of economists projecting absurdly small impacts, of Google publishing highly valuable research, a cycle of rhetoric, more jailbreaks, and so on. Another great podcast from Dwarkesh Patel, this time going more technical. Another proposed project with a name that reveals quite a lot. A few genuinely new things, as well. On the new offerings front, DALLE-3 now allows image editing, so that's pretty cool.
Table of Contents
Don't miss out on Dwarkesh Patel's podcast with Sholto Douglas and Trenton Bricken, which got the full write-up treatment.
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Never stop learning.
Language Models Don't Offer Mundane Utility. The internet is still for porn.
Clauding Along. Good at summarization but not fact checking.
Fun With Image Generation. DALLE-3 now has image editing.
Deepfaketown and Botpocalypse Soon. OpenAI previews voice duplication.
They Took Our Jobs. Employment keeps rising, will continue until it goes down.
The Art of the Jailbreak. It's easy if you try and try again.
Cybersecurity. Things worked out this time.
Get Involved. Technical AI Safety Conference in Tokyo tomorrow.
Introducing. Grok 1.5, 25 YC company models, and 'Dark Gemini.'
In Other AI News. Seriously, Google, stop publishing all your trade secrets.
Stargate AGI. New giant data center project, great choice of cautionary title.
Larry Summers Watch. Economists continue to have faith in nothing happening.
Quiet Speculations. What about interest rates? Also AI personhood.
AI Doomer Dark Money Astroturf Update. OpenPhil annual report.
The Quest for Sane Regulations. The devil is in the details.
The Week in Audio. A few additional offerings this week.
Rhetorical Innovation. The search for better critics continues.
Aligning a Smarter Than Human Intelligence is Difficult. What are human values?
People Are Worried About AI Killing Everyone. Can one man fight the future?
The Lighter Side. The art must have an end other than itself.
Language Models Offer Mundane Utility
A good encapsulation of a common theme here:
Paul Graham: AI will magnify the already great difference in knowledge between the people who are eager to learn and those who aren't.
If you want to learn, AI will be great at helping you learn.
If you want to avoid learning? AI is happy to help with that too.
Which AI to use? Ethan Mollick examines our current state of play.
Ethan Mollick (I edited in the list structure): There is a lot of debate over which of these models are best, with dueling tests suggesting one or another dominates, but the answer is not clear cut. All three have different personalities and strengths, depending on whether you are coding or writing.
Gemini is an excellent explainer but doesn't let you upload files.
GPT-4 has features (namely Code Interpreter and GPTs) that greatly extend what it can do.
Claude is the best writer and seems capable of surprising insight.
But beyond the differences, there are four important similarities to know about:
All three are full of ghosts, which is to say that they give you the weird illusion of talking to a real, sentient being - even though they aren't.
All three are multimodal, in that they can "see" images.
None of them come with instructions.
They all prompt pretty similarly to each other.
I would add that there are actually four models, not three, because there are (at last!) two Geminis, Gemini Advanced and Gemini Pro 1.5, if you have access to the 1.5 beta. So here is a fourth line, for Gemini Pro 1.5:
Gemini Pro has a giant context window and uses it well.
My current heuristic is something like this:
If you need basic facts or explanation, use Gemini Advanced.
If you want creativity, need intelligence and nuance, or are writing code, use Claude.
If ...