EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security
Guest:
Yigael Berger, Head of AI, Sweet Security
Topics:
Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
I know you use LLMs for anomaly detection. Can you explain how that “trick” works? What is it good for? How effective do you think it will be?
Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale?
SOC teams often grapple with the tradeoff between “seeing everything,” so they never miss an attack, and drowning in noise. What are you seeing emerge in cloud D&R to address this challenge?
We hear from folks who have automated a review queue previously handled by people. Inevitably, even when precision and recall can be shown to be superior, a single false negative (or a flood of false positives) brings hard executive or customer backlash. Have you seen this phenomenon, and if so, what have you learned about handling it?
What other barriers need to be overcome so that LLMs can push the envelope further in improving security?
So, from your perspective, in whose favor will LLMs tip the scale: cybercriminals or defenders?
Resources:
EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
EP194 Deep Dive into ADR - Application Detection and Response
EP135 AI and Security: The Good, the Bad, and the Magical