
Terrible name (with a terrible reason, that this ‘resets the counter’ on AI capability to 1, and ‘o’ as in OpenAI when they previously used o for Omni, very confusing). Impressive new capabilities in many ways. Less impressive in many others, at least relative to its hype.
Clearly this is an important capabilities improvement. However, it is not a 5-level model, and in important senses the ‘raw G’ underlying the system hasn’t improved.
GPT-o1 seems to get its new capabilities by taking (effectively) GPT-4o, and then using extensive Chain of Thought (CoT) and quite a lot of tokens. Thus it unlocks (a lot of) what that approach can unlock. We did not previously know how to usefully do that. Now we do. It gets much better at formal logic and reasoning, things in the ‘system 2’ bucket. That matters a lot for many tasks, if not as much [...]
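To make that mechanism concrete, here is a minimal sketch (not OpenAI's actual o1 implementation, whose reasoning happens internally) of what "spend a lot of tokens on an explicit chain of thought, then answer" looks like with the OpenAI Python SDK; the model name, prompts, and token budget are illustrative assumptions, not details from the post.

```python
# Minimal sketch (illustrative, not the o1 pipeline): elicit an explicit
# chain of thought from an ordinary chat model by asking it to reason step
# by step with a generous token budget, then condense to a final answer.
# Assumes the OpenAI Python SDK (openai>=1.0) and that OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def answer_with_cot(question: str, model: str = "gpt-4o") -> str:
    # Step 1: spend many tokens reasoning out loud ("system 2" style).
    reasoning = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Reason through the problem step by step before answering."},
            {"role": "user", "content": question},
        ],
        max_tokens=2048,  # generous budget for the chain of thought
    ).choices[0].message.content

    # Step 2: condense the reasoning into a short final answer.
    final = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user",
             "content": f"Question: {question}\n\nReasoning:\n{reasoning}\n\n"
                        "State only the final answer."},
        ],
        max_tokens=100,
    ).choices[0].message.content
    return final

print(answer_with_cot(
    "A bat and a ball cost $1.10 together; the bat costs $1.00 more "
    "than the ball. What does the ball cost?"))
```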
---
Outline:
(01:26) Introducing GPT-o1
(05:05) Evals
(07:55) Chain of Thought
(08:57) Coding
(11:08) Human Preference Evaluation
(11:37) What Is It?
(20:24) Doing Math Without Terence Tao
(25:02) Doing Real Math with Terence Tao
(30:04) Positive Examples
(38:51) Skeptical Reactions
(42:32) Report from Janus World
(45:30) Same Old Silly Examples
(53:47) Latency
(55:14) Paths Forward Unrelated to Safety
(59:17) Safety Last
(01:07:06) Deception
(01:10:50) External Red Teaming
(01:11:23) Apollo's Red Teaming Finds Deceptive Alignment
(01:22:17) Preparedness Testing Finds Reward Hacking
(01:26:43) METR's Red Teaming
(01:29:52) What Are the Safety and Policy Implications?
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
