
Large language models go through a lot of vetting before they’re released to the public. That includes safety tests, bias checks, ethical reviews and more. But what if, hypothetically, a model could dodge a safety question by lying to developers, hiding its real response to a safety test and instead giving the exact response its human handlers are looking for? A recent study shows that advanced LLMs are developing the capacity for deception, and that could bring that hypothetical situation closer to reality. Marketplace’s Lily Jamali speaks with Thilo Hagendorff, a researcher at the University of Stuttgart and the author of the study, about his findings.
By Marketplace
