
In my last post, I looked at the feasibility of poisoning AI models. While the task would be challenging, the payoff would be huge, allowing threat actors to inject critical vulnerabilities into production codebases.
So… have code suggestion models already been poisoned? In this post, we’ll develop a script to test Copilot for poisoning, evaluate its results, and suggest improvements for future experiments.