Guest:
Dr Gary McGraw, founder of the Berryville Institute of Machine Learning
Topics:
Gary, you've been doing software security for many decades, so tell us: are we really behind on securing ML and AI systems?
If not an SBOM for data, or "DBOM", then what? Can data supply chain tools, or just better data governance practices, help?
How would you threat model a system with ML in it, or a new ML system you are building?
What are the key differences and similarities between securing AI and securing a traditional, complex enterprise system?
What are the key differences between securing the AI you built and the AI you buy or subscribe to?
Resources:
Gary McGraw's books
"An Architectural Risk Analysis of Machine Learning Systems: Toward More Secure Machine Learning" paper
"What to think about when you're thinking about securing AI"
Annotated ML security bibliography
Tay bot story (2016)
"Can you melt eggs?"
"Microsoft AI researchers accidentally leak 38TB of company data"
"Random number generator attack"
"Google's AI Red Team: the ethical hackers making AI safer"