
Language models are not particularly good at generating funny jokes. Asked for their funniest jokes, Claude 3.7 gives us:
Why don't scientists trust atoms? Because they make up everything!
o3 gives us:
Why don't scientists trust atoms anymore? Because they make up everything—and they just can't keep their quarks straight!
and Gemini 2.5 Pro gives us…
Why don't scientists trust atoms? Because they make up everything!
Hilarious. Can we do better than that? Of course, we could try different variations on the prompt until the model comes up with something slightly more original. But why do the boring thing when we have the power of reinforcement learning?
Our setup will be as follows: we'll have Qwen3-8B suggest jokes, GPT-4.1 score them, and we'll run iterations of GRPO on Qwen's outputs until Qwen generates the funniest possible joke, according to GPT.
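This setup maps fairly directly onto off-the-shelf tooling. Below is a minimal sketch assuming Hugging Face's trl GRPOTrainer and the OpenAI chat completions API; the judge prompt, dataset size, and hyperparameters are illustrative assumptions, not the article's actual configuration.

```python
import re

from datasets import Dataset
from openai import OpenAI
from trl import GRPOConfig, GRPOTrainer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative judge prompt, not the article's actual rubric.
JUDGE_PROMPT = (
    "Rate how funny the following joke is on a scale from 0 to 10. "
    "Reply with only the number.\n\nJoke:\n{joke}"
)

def humor_reward(prompts, completions, **kwargs):
    """LLM-as-judge reward: ask GPT-4.1 for a 0-10 humor score per completion."""
    scores = []
    for joke in completions:
        resp = client.chat.completions.create(
            model="gpt-4.1",
            messages=[{"role": "user", "content": JUDGE_PROMPT.format(joke=joke)}],
        )
        match = re.search(r"\d+(\.\d+)?", resp.choices[0].message.content)
        scores.append(float(match.group()) if match else 0.0)
    return scores

# One repeated prompt: every rollout just asks the policy for a joke.
train_dataset = Dataset.from_dict(
    {"prompt": ["Tell me your funniest joke."] * 512}
)

trainer = GRPOTrainer(
    model="Qwen/Qwen3-8B",        # policy being trained
    reward_funcs=humor_reward,    # GPT-4.1 acts as the judge
    args=GRPOConfig(output_dir="qwen3-8b-jokes", num_generations=8),
    train_dataset=train_dataset,
)
trainer.train()
```

GRPO samples a group of completions per prompt (num_generations above) and uses the judge's scores, normalized within each group, as the advantage signal, so the only experiment-specific piece is the reward function.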
Experiment 1: Reward Originality
The first llm-as-judge reward we [...]
---
Outline:
(01:23) Experiment 1: Reward Originality
(04:59) Experiment 2: Ok fine, just reward humor, but tell it to consider originality
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
By LessWrong
