
Guest:
Kathryn Shih, Group Product Manager, LLM Lead in Google Cloud Security
Topics:
Could you give our audience the quick version of what an LLM is, and what it can and cannot do? Is this "baby AGI" or a glorified "autocomplete"?
Let's talk about the different ways to tune the models; when we think about tuning, what are the ways attackers might influence or steal our data?
Can you help the security leaders in our audience get the right vocabulary and concepts to reason about the risk of their information a) going into an LLM and b) getting regurgitated by one?
How do I keep the output of a model safe, and what questions do I need to ask a vendor to understand if they're a) talking nonsense or b) actually keeping their output safe?
Are hallucinations inherent to LLMs and can they ever be fixed?
So there are risks to data, new opportunities for attacks, and hallucinations. Given those risks, how do we identify the good opportunities in this area?
Resources:
Retrieval Augmented Generation (or go ask Bard about it; a toy sketch of the pattern follows this list)
"New Paper: "Securing AI: Similar or Different?"" blog
By Anton Chuvakin
