


Large language models are helping developers move faster than ever. But behind the convenience of AI-generated code lies a security risk: package hallucinations. In this episode, Ashok sits down with U.S. Army cybersecurity officer and PhD researcher Joe Spracklen to unpack new research on how hallucinated package names (plausible-sounding libraries that don't actually exist) can be weaponized by attackers and quietly introduced into your software supply chain.
Joe's recent academic study reveals how large language models like ChatGPT and Code Llama are frequently recommending software packages that don't actually exist—yet. These fake suggestions create the perfect opportunity for attackers to register malicious packages with those names, compromising developer machines and potentially entire corporate networks. Whether your team is deep into AI pair programming or just starting to experiment, this conversation surfaces key questions every tech leader should be asking before pushing AI-generated code to production.
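To make the risk concrete, here is a minimal Python sketch (ours, not from the study) that checks whether a suggested package name actually resolves on PyPI before anything is installed. PyPI's public JSON API returns a 404 for unclaimed names, which is exactly the gap an attacker can fill; the name fastjson-utils below is a made-up example of a plausible-sounding hallucination.

```python
# Minimal sketch: verify an LLM-suggested dependency exists on PyPI
# before installing it. Uses PyPI's public JSON API.
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 means the name is unclaimed -- the window an attacker
        # can exploit by uploading a malicious package under it.
        return False

# "fastjson-utils" is a made-up, hallucination-style name for illustration.
for name in ["requests", "fastjson-utils"]:
    print(f"{name}: {'exists' if exists_on_pypi(name) else 'NOT FOUND'}")
```

Note that existence alone is not proof of safety: an attacker may have already registered a hallucinated name, which is where the dependency-monitoring tools discussed in the episode come in.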
Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge.
Inside the episode...
What "package hallucinations" are and why they matter
How AI code assistants can introduce real vulnerabilities into your network
Which models were most likely to hallucinate packages
Why hallucinated package names are often persistent—not random
How attackers could weaponize hallucinated names to spread malware
What mitigation strategies were tested—and which ones failed
Why simple retrieval-based techniques (like RAG) don't solve the problem
Steps security-conscious teams can take today to protect their environments (one illustrative sketch follows this list)
The importance of developer awareness as more non-traditional engineers enter the field
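As one illustration of the kind of step teams can take (our sketch under stated assumptions, not a recipe from the episode): route installs through a wrapper that rejects anything outside a team-vetted allowlist, so a hallucinated name fails closed before it ever reaches the public registry. The file approved-packages.txt and the wrapper itself are hypothetical.

```python
# Hedged sketch: gate pip installs behind a team-maintained allowlist
# so hallucinated package names fail closed. File name is hypothetical.
import subprocess
import sys

ALLOWLIST_FILE = "approved-packages.txt"  # one vetted package name per line

def load_allowlist(path: str) -> set[str]:
    """Read the vetted package names, ignoring blank lines."""
    with open(path) as fh:
        return {line.strip().lower() for line in fh if line.strip()}

def safe_install(packages: list[str]) -> None:
    """Install only packages on the allowlist; refuse everything else."""
    approved = load_allowlist(ALLOWLIST_FILE)
    rejected = [p for p in packages if p.lower() not in approved]
    if rejected:
        sys.exit(f"Refusing to install unvetted packages: {rejected}")
    subprocess.run([sys.executable, "-m", "pip", "install", *packages], check=True)

if __name__ == "__main__":
    safe_install(sys.argv[1:])
```

Invoked as python safe_install.py <package...>, it stops before pip ever contacts the index; private registries such as the ones mentioned below achieve the same fail-closed effect at the infrastructure level.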
Mentioned in this episode
Python Package Index (PyPI)
npm JavaScript package registry
Snyk, Socket.dev, Phylum (dependency monitoring tools)
Artifactory, Nexus, Verdaccio (private package registries)
ChatGPT, Code Llama, DeepSeek (AI models tested)
Subscribe to the Convergence podcast wherever you get podcasts, including video episodes on YouTube at youtube.com/@convergencefmpodcast
Learn something? Give us a 5-star review and like the podcast on YouTube. It's how we grow.
Follow the Pod
LinkedIn: https://www.linkedin.com/company/convergence-podcast/
X: https://twitter.com/podconvergence
Instagram: @podconvergence
By Ashok Sivanand
