
In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. With nearly four decades in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized a need to create an entity that would identify, research, and develop mitigation strategies for AI vulnerabilities to protect national assets against traditional cybersecurity, adversarial machine learning, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI's CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).
By Members of Technical Staff at the Software Engineering Institute
