

Guest:
David LaBianca, Senior Engineering Director, Google
Topics:
The universe of AI risks is broad and deep. We've made a lot of headway with our SAIF framework: can you a) give us a 90-second tour of SAIF, b) share how it's gotten so much traction, and c) talk about where we go next with it?
The Coalition for Secure AI (CoSAI) is a collaborative effort to address AI security challenges. What are Google's specific goals and expectations for CoSAI, and how will its success be measured in the long term?
Something we love about CoSAI is that we involved some unexpected folks, notably Microsoft and OpenAI. How did that come about?
How do we plan to work with existing organizations, such as Frontier Model Forum (FMF) and Open Source Security Foundation (OpenSSF)? Does this also complement emerging AI security standards?
AI is moving quickly. How do we intend to keep up with the pace of change when it comes to emerging threat techniques and actors in the landscape?
What do we expect to see out of CoSAI work and when? What should people be looking forward to and what are you most looking forward to releasing from the group?
We have proposed projects for CoSAI, including developing a defender's framework and addressing software supply chain security for AI systems. How can others use them? In other words, if I am a mid-sized bank CISO, do I care? How do I benefit from it?
An off-the-cuff question: how do you do AI governance well?
Resources:
CoSAI site, CoSAI's three projects
SAIF main site
Gen AI governance: 10 tips to level up your AI program
"Securing AI: Similar or Different?" paper
Our Security of AI Papers and Blogs Explained