

In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers' proposed "confessions" framework, designed to monitor for and detect dishonest outputs. They break down the researchers' proof-of-concept results and the framework's resilience to reward hacking, along with its limits when it comes to hallucinations. Then they turn to Google DeepMind's "Distributional AGI Safety," exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors' proposed four-layer safety stack.
## Learn More About Paul, Weiss's Artificial Intelligence practice:
By Paul, Weiss · 4.8 (2,323 ratings)
