Guest:
Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security
Topics:
Could you give our audience the quick version of what an LLM is and what it can and cannot do? Is this "baby AGI" or a glorified "autocomplete"?
Let's talk about the different ways to tune the models, and when we think about tuning, what are the ways that attackers might influence or steal our data?
Can you help our security leader listeners have the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one?
How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they're a) talking nonsense or b) actually keeping their output safe?
Are hallucinations inherent to LLMs and can they ever be fixed?
So there are risks to data, new attack opportunities, and hallucinations. How do we identify the good opportunities in this area given the risks?
Resources:
Retrieval Augmented Generation (or go ask Bard about it)
"New Paper: 'Securing AI: Similar or Different?'" blog
By Anton Chuvakin