
Sign up to save your podcasts
Or


CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.
Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.
The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.
It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.
Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI.
By CrowdStrike4.9
7777 ratings
CrowdStrike research into AI coding assistants reveals a new, subtle vulnerability surface: When DeepSeek-R1 receives prompts the Chinese Communist Party (CCP) likely considers politically sensitive, the likelihood of it producing code with severe security flaws increases by up to 50%.
Stefan Stein, manager of the CrowdStrike Counter Adversary Operations Data Science team, joined Adam and Cristian for a live recording at Fal.Con 2025 to discuss how this project got started, the methodology behind the team’s research, and the significance of their findings.
The research began with a simple question: What are the security risks of using DeepSeek-R1 as a coding assistant? AI coding assistants are commonly used and often have access to sensitive information. Any systemic issue can have a major and far-reaching impact.
It concluded with the discovery that the presence of certain trigger words — such as mentions of Falun Gong, Uyghurs, or Tibet — in DeepSeek-R1 prompts can have severe effects on the quality and security of the code it produces. Unlike most large language model (LLM) security research focused on jailbreaks or prompt injections, this work exposes subtle biases that can lead to real-world vulnerabilities in production systems.
Tune in for a fascinating deep dive into how Stefan and his team explored the biases in DeepSeek-R1, the implications of this research, and what this means for organizations adopting AI.

189 Listeners

2,010 Listeners

371 Listeners

374 Listeners

651 Listeners

1,022 Listeners

318 Listeners

417 Listeners

8,049 Listeners

181 Listeners

316 Listeners

189 Listeners

74 Listeners

137 Listeners

44 Listeners