
Prepare to have your mind expanded as we navigate the complex labyrinth of large language models and the cybersecurity threats they harbor. We dissect a groundbreaking paper that exposes how AI titans are susceptible to a slew of sophisticated cyber assaults, from prompt hacking and adversarial attacks to the less-discussed but equally alarming issue of gradient exposure.
As the conversation unfolds, we unravel the unnerving potential for these intelligent systems to inadvertently spill the beans on confidential training data, a privacy nightmare that transcends academic speculation and poses tangible security risks.
Resources: https://arxiv.org/pdf/2402.00888.pdf
By Cameron Ivey