
GitHub Copilot was found to leak sensitive data from repositories that had been public and were later made private. Researchers discovered that Copilot retained this information even after the repositories went private, affecting more than 16,000 organizations. Microsoft initially classified the issue as low severity, drawing criticism for its handling of user privacy. The model could regurgitate sensitive data such as API keys and proprietary code, potentially reintroducing leaked secrets into other projects. Experts recommend immediately rotating any keys or credentials that were ever committed to a public repo. The incident highlights the risks of AI models training on public data that later becomes private, a growing concern in AI security.
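Rotating exposed credentials starts with finding them. Below is a minimal, illustrative Python sketch of the kind of secret scan that could flag candidates for rotation. The regex patterns are simplified assumptions, the scan covers only the working tree (not git history, where leaked secrets also live), and dedicated tools such as gitleaks or trufflehog are far more thorough in practice.

import re
from pathlib import Path

# Illustrative patterns only (assumptions, not a complete rule set);
# real scanners such as gitleaks or trufflehog cover many more formats.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36}"),
    "generic api_key assignment": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Walk a checked-out repo and flag lines that look like credentials."""
    hits = []
    for path in Path(root).rglob("*"):
        # Skip directories and anything under .git
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for label, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    hits.append((str(path), lineno, label))
    return hits

if __name__ == "__main__":
    # Anything flagged here should be treated as compromised and rotated.
    for path, lineno, label in scan_repo("."):
        print(f"{path}:{lineno}: possible {label}; rotate this credential")

Note this sketch requires Python 3.9+ and only surfaces candidates; the actual fix, as the episode stresses, is to rotate any credential that was ever public, since AI models and caches may retain it indefinitely.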
Send us a text
Support the show
Podcast:
https://kabir.buzzsprout.com
YouTube:
https://www.youtube.com/@kabirtechdives
Please subscribe and share.