
In this episode, Katherine Forrest and Scott Caravello unpack OpenAI researchers’ proposed “confessions” framework, designed to monitor for and detect dishonest outputs. They break down the researchers’ proof-of-concept results and the framework’s resilience to reward hacking, along with its limits when it comes to hallucinations. Then they turn to Google DeepMind’s “Distributional AGI Safety,” exploring a hypothetical path to AGI via a patchwork of agents and routing infrastructure, as well as the authors’ proposed four-layer safety stack.
## Learn More About Paul, Weiss’s Artificial Intelligence practice:
By Paul, Weiss
