Software Engineering Institute (SEI) Podcast Series
By Members of Technical Staff at the Software Engineering Institute
Rated 4.5 (1,818 ratings)
The podcast currently has 436 episodes available.
One of the biggest challenges in collecting cybersecurity metrics is scoping down objectives and determining what kinds of data to gather. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Bill Nichols, who leads the SEI’s Software Engineering Measurement and Analysis Group, discusses the importance of cybersecurity measurement, what kinds of measurements are used in cybersecurity, and what those metrics can tell us about cyber systems.
To make secure software by design a reality, engineers must intentionally build security throughout the software development lifecycle. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Timothy A. Chick, technical manager of the Applied Systems Group in the SEI’s CERT Division, discusses building, designing, and operating secure systems.
Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.
In the wake of widespread adoption of artificial intelligence (AI) in critical infrastructure, education, government, and national security entities, adversaries are working to disrupt these systems and attack AI-enabled assets. With nearly four decades in vulnerability management, the Carnegie Mellon University Software Engineering Institute (SEI) recognized a need to create an entity that would identify, research, and develop mitigation strategies for AI vulnerabilities to protect national assets against traditional cybersecurity, adversarial machine learning, and joint cyber-AI attacks. In this SEI podcast, Lauren McIlvenny, director of threat analysis in the SEI’s CERT Division, discusses best practices and lessons learned in standing up an AI Security Incident Response Team (AISIRT).
The exposed and public nature of application programming interfaces (APIs) comes with risks, including an increased network attack surface. Zero trust principles are helpful for mitigating these risks and making APIs more secure. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), McKinley Sconiers-Hasan, a solutions engineer in the SEI CERT Division, discusses three API risks and how to address them through the lens of zero trust.
Capability-Based Planning (CBP) is a framework that takes an all-encompassing view of existing capabilities and future needs to strategically decide what is needed and how to achieve it effectively. Both business and government acquisition domains use CBP, whether to drive financial success or to design a well-balanced defense system, and the definitions understandably vary across these domains. In this SEI podcast, Anandi Hira, a data scientist, and William R. Nichols, an initiative lead for Software Engineering Measurement and Analysis, introduce CBP and its application in software acquisition.
What can the recently discovered vulnerabilities related to Rust tell us about the security of the language? In this podcast from the Carnegie Mellon University Software Engineering Institute, David Svoboda discusses two vulnerabilities, their sources, and how to mitigate them.
Cybersecurity risks aren’t just a national concern. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), the CERT division’s Tracy Bills, senior cybersecurity operations researcher and team lead, and James Lord, security operations technical manager, discuss the SEI’s work developing Computer Security Incident Response Teams (CSIRTs) across the globe.
Developers know that static analysis helps make code more secure. However, static analysis tools often produce a large number of false positives, hindering their usefulness. In this podcast from the Carnegie Mellon University Software Engineering Institute (SEI), David Svoboda, a software security engineer in the SEI’s CERT Division, discusses Redemption, a new open source tool from the SEI that automatically repairs common errors in C/C++ code flagged by static analysis alerts, making code safer and static analysis less overwhelming.
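To make the idea of automated alert repair more concrete, here is a minimal C sketch of the kind of fix such a tool might apply; the specific alert and guard shown are illustrative assumptions, not actual Redemption output.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical scenario: a static analyzer flags the unchecked result of
 * malloc() as a possible null-pointer dereference. An automated repair
 * inserts a guard before the pointer is first used. */
char *copy_name(const char *name) {
    char *buf = malloc(strlen(name) + 1);
    if (buf == NULL) {      /* guard added in response to the alert */
        return NULL;
    }
    strcpy(buf, name);
    return buf;
}

int main(void) {
    char *s = copy_name("example");
    if (s != NULL) {
        printf("%s\n", s);
        free(s);
    }
    return 0;
}
```

The appeal described in the episode is that small, behavior-preserving fixes like this can be applied across many alerts, leaving developers with fewer findings to triage by hand.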