

Today we're joined by Jonas Geiping, a research group leader at the ELLIS Institute, to explore his paper: "Coercing LLMs to Do and Reveal (Almost) Anything". Jonas explains how neural networks can be exploited, highlighting the risk of deploying LLM agents that interact with the real world. We discuss the role of open models in enabling security research, the challenges of optimizing over certain constraints, and the ongoing difficulties in achieving robustness in neural networks. Finally, we delve into the future of AI security, and the need for a better approach to mitigate the risks posed by optimized adversarial attacks.
The complete show notes for this episode can be found at twimlai.com/go/678.
By Sam Charrington · 4.7 (422 ratings)
