In this episode of AI + a16z, a trio of security experts join a16z partner Joel de la Garza to discuss the security implications of the DeepSeek reasoning model that made waves recently. It's three separate discussions, focusing on different aspects of DeepSeek and the fast-moving world of generative AI.
The first segment, with Ian Webster of Promptfoo, focuses on vulnerabilities within DeepSeek itself, and how users can protect themselves against backdoors, jailbreaks, and censorship.
The second segment, with Dylan Ayrey of Truffle Security, focuses on the advent of AI-generated code and how developers and security teams can ensure it's safe. As Dylan explains, many problems lie in how the underlying models were trained and how their security alignment was carried out.
The final segment features Brian Long of Adaptive, who highlights a growing list of risk vectors for deepfakes and other threats that generative AI can exacerbate. In his view, it's up to individuals and organizations to stay sharp about what's possible — while the arms race between hackers and white-hat AI agents kicks into gear.
Learn more:
What Are the Security Risks of Deploying DeepSeek-R1?
Research finds 12,000 ‘Live’ API Keys and Passwords in DeepSeek's Training Data
Follow everybody on social media:
Ian Webster
Dylan Ayrey
Brian Long
Joel de la Garza
Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.
Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
By a16z