
How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and the right way to threat model an LLM.
Questions asked:
(00:00) Introductions
(02:12) A bit about Rob Ragan
(03:33) AI in Security Assessment and Pentesting
(09:15) How is AI impacting pentesting?
(14:50) Where to start with AI implementation in offensive security?
(18:19) AI and Static Code Analysis
(21:57) Key components of LLM pentesting
(24:37) Testing what's inside a functional model?
(29:37) What's the right way to threat model an LLM?
(33:52) Current State of Security Frameworks for LLMs
(43:04) Is AI changing how Red Teamers operate?
(44:46) A bit about Claude 3
(52:23) Where can you connect with Rob?
Resources spoken about in this episode:
https://www.pentestmuse.ai/
https://github.com/AbstractEngine/pentest-muse-cli
https://docs.garak.ai/garak/
https://github.com/Azure/PyRIT
https://bishopfox.github.io/llm-testing-findings/
https://www.microsoft.com/en-us/research/project/autogen/
By Kaizenteq Team · 4.9 (88 ratings)