That Was The Week

Accelerating to 2027?



Hat Tip to this week’s creators: @leopoldasch, @JoeSlater87, @GaryMarcus, @ulonnaya, @alex, @ttunguz, @mmasnick, @dannyrimer, @imdavidpierce, @asafitch, @ylecun, @nxthompson, @kaifulee, @DaphneKoller, @AndrewYNg, @aidangomez, @Kyle_L_Wiggers, @waynema, @QianerLiu, @nicnewman, @nmasc_, @steph_palazzolo, @nofilmschool

Contents

* Editorial

* Essays of the Week

* Situational Awareness: The Decade Ahead

* ChatGPT is b******t

* AGI by 2027?

* Ilya Sutskever, OpenAI’s former chief scientist, launches new AI company

* The Series A Crunch Is No Joke

* The Series A Crunch or the Seedpocalypse of 2024 

* The Surgeon General Is Wrong. Social Media Doesn’t Need Warning Labels

* Video of the Week

* Danny Rimer on 20VC - (Must See)

* AI of the Week

* Anthropic has a fast new AI model — and a clever new way to interact with chatbots

* Nvidia’s Ascent to Most Valuable Company Has Echoes of Dot-Com Boom

* The Expanding Universe of Generative Models

* DeepMind’s new AI generates soundtracks and dialogue for videos

* News Of the Week

* Apple Suspends Work on Next Vision Pro, Focused on Releasing Cheaper Model in Late 2025

* Is the news industry ready for another pivot to video?

* Cerebras, an Nvidia Challenger, Files for IPO Confidentially

* Startup of the Week

* Final Cut Camera and iPad Multicam are Truly Revolutionary

* X of the Week

* Leopold Aschenbrenner

Editorial

I had not heard of Leopold Aschenbrenner until yesterday. I was meeting with Faraj Aalaei (a SignalRank board member) and my colleague Rob Hodgkinson when they began to talk about “Situational Awareness,” his essay on the future of AGI, and its likely speed of emergence.

So I had to read it, and it is this week’s essay of the week. He starts his 165-page epic with:

Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them.

So, Leopold is not humble. He finds himself “amongst” the few people with situational awareness.

As a person prone to bigging up myself, I am not one to prematurely judge somebody’s view of self. So, I read all 165 pages.

He makes one point: the growth of AI capability is accelerating. More is being done at lower cost, and if the trend continues, we will have superintelligence by 2027. At that point, billions of skilled bots will solve problems at a rate we cannot imagine. And they will work together, with little human input, to do so.

His case is developed using linear progression from current developments. According to Leopold, all you have to believe in is straight lines.

He also has a secondary narrative related to safety, particularly the safety of the models and their weights (the parameters that determine how they achieve their results).

By safety, he does not mean the models will do bad things. He means that third parties, namely China, can steal the weights and reproduce the results. He focuses on the poor security surrounding models as the problem, and he deems governments unaware of the dangers.

Although German-born, he argues in favor of a US-led effort to treat AGI as a weapon to defeat China, and he warns of dire consequences if the US fails to lead. He sees the “free world” as in danger unless it stops others from reaching the sophistication he predicts, on the timeline he predicts.

At that point, I felt I was reading a manifesto for World War Three. In his own words:

But as I see it, the smartest people in the space have converged on a different perspective, a third way, one I will dub AGI Realism. The core tenets are simple:

* Superintelligence is a matter of national security. We are rapidly building machines smarter than the smartest humans. This is not another cool Silicon Valley boom; this isn’t some random community of coders writing an innocent open source software package; this isn’t fun and games. Superintelligence is going to be wild; it will be the most powerful weapon mankind has ever built. And for any of us involved, it’ll be the most important thing we ever do. 

* America must lead. The torch of liberty will not survive Xi getting AGI first. (And, realistically, American leadership is the only path to safe AGI, too.) That means we can’t simply “pause”; it means we need to rapidly scale up US power production to build the AGI clusters in the US. But it also means amateur startup security delivering the nuclear secrets to the CCP won’t cut it anymore, and it means the core AGI infrastructure must be controlled by America, not some dictator in the Middle East. American AI labs must put the national interest first. 

* We need to not screw it up. Recognizing the power of superintelligence also means recognizing its peril. There are very real safety risks; very real risks this all goes awry—whether it be because mankind uses the destructive power brought forth for our mutual annihilation, or because, yes, the alien species we’re summoning is one we cannot yet fully control. These are manageable—but improvising won’t cut it. Navigating these perils will require good people bringing a level of seriousness to the table that has not yet been offered. 

As the acceleration intensifies, I only expect the discourse to get more shrill. But my greatest hope is that there will be those who feel the weight of what is coming, and take it as a solemn call to duty.

I persisted in reading it, and I think you should, too—not for the war-mongering element but for the core acceleration thesis.

My two cents: Leopold underestimates AI's impact in the long run and overestimates it in the short term, but he is directionally correct.

Anthropic released Claude 3.5 Sonnet today. It is far faster than the impressive 3.0 version released a few months ago, costs a fraction to train and run, and is more capable. It accepts text and images, and a new feature called ‘Artifacts’ lets it run code, edit documents, and preview designs.

Claude 3.5 Opus is probably not far away.

Situational Awareness projects trends like this into the near future; Leopold’s conclusions are extrapolations from exactly that kind of progress.

Contrast that paper with “ChatGPT is B******t,” a paper from the University of Glasgow in the UK. The three authors contest the now-common characterization of ChatGPT as hallucinating or lying. They argue that because it is a probabilistic word finder, it spouts b******t: it can be right, and it can be wrong, but it does not know the difference. It’s a bullshitter.

Hilariously, they define three types of BS:

B******t (general)

Any utterance produced where a speaker has indifference towards the truth of the utterance.

Hard b******t

B******t produced with the intention to mislead the audience about the utterer’s agenda.

Soft b******t

B******t produced without the intention to mislead the hearer regarding the utterer’s agenda.

They then conclude:

With this distinction in hand, we’re now in a position to consider a worry of the following sort: Is ChatGPT hard b**********g, soft b**********g, or neither? We will argue, first, that ChatGPT, and other LLMs, are clearly soft b**********g. However, the question of whether these chatbots are hard b**********g is a trickier one, and depends on a number of complex questions concerning whether ChatGPT can be ascribed intentions.
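To make the “probabilistic word finder” point concrete, here is a minimal toy sketch (my own illustration, not the authors’ code, and nothing like a production model’s internals): the loop scores candidate next words only by probability, and nothing in it ever checks whether the completed sentence is true.

```python
import random

# Toy "language model": a hand-made probability distribution over
# possible next words for one context. Real LLMs learn such
# distributions from data, but the sampling loop has the same shape.
TOY_MODEL = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # true continuation
        "Sydney": 0.40,     # false, but statistically plausible
        "delicious": 0.05,  # implausible
    }
}

def sample_next_word(context: str) -> str:
    """Pick the next word in proportion to its probability.
    Note that no step anywhere asks: is this true?"""
    distribution = TOY_MODEL[context]
    words = list(distribution.keys())
    weights = list(distribution.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    context = "The capital of Australia is"
    for _ in range(5):
        print(context, sample_next_word(context))
```

Run it a few times and it will assert both “Canberra” and “Sydney” with the same flat confidence, which is precisely the indifference to truth the authors label b******t.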

This is closer to Gary Marcus's point of view in his ‘AGI by 2027?’ response to Leopold. It is also below.

I think the reality is somewhere between Leopold and Marcus. AI is capable of surprising things, given that it is only a probabilistic word-finder. And its ability to do so is becoming cheaper and faster. The number of times it is useful easily outweighs, for me, the times it is not. Most importantly, AI agents will work together to improve each other and learn faster.

However, Gary Marcus is right that reasoning and other essential decision-making capabilities do not follow logically from an LLM’s approach to knowledge. So, without additional, or perhaps different, elements, there will be limits to how far it can go. Gary probably underestimates what CAN be achieved with LLMs (indeed, who would have thought they could do what they already do). And Leopold probably overestimates how high the ceiling is and how fast they will reach it.

It will be fascinating to watch. I, for one, have no idea what to expect except the unexpected. OpenAI co-founder Ilya Sutskever weighed in, too, with a new AI startup called Safe Superintelligence Inc. (SSI). The most important word here is superintelligence, the same word Leopold used. The next phase is focused on higher-than-human intelligence, which can be reproduced billions of times to create scaled superintelligence.

The Expanding Universe of Generative Models piece below places smart people in the room to discuss these developments. Yann LeCun, Nicholas Thompson, Kai-Fu Lee, Daphne Koller, Andrew Ng, and Aidan Gomez are participants.


