How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and the right way to threat model an LLM.
Questions asked:
(00:00) Introductions
(02:12) A bit about Rob Ragan
(03:33) AI in Security Assessment and Pentesting
(09:15) How is AI impacting pentesting?
(14:50) Where to start with AI implementation in offensive security?
(18:19) AI and Static Code Analysis
(21:57) Key components of LLM pentesting
(24:37) Testing what's inside a functional model?
(29:37) What's the right way to threat model an LLM?
(33:52) Current State of Security Frameworks for LLMs
(43:04) Is AI changing how Red Teamers operate?
(44:46) A bit about Claude 3
(52:23) Where can you connect with Rob?
Resources spoken about in this episode:
https://www.pentestmuse.ai/
https://github.com/AbstractEngine/pentest-muse-cli
https://docs.garak.ai/garak/
https://github.com/Azure/PyRIT
https://bishopfox.github.io/llm-testing-findings/
https://www.microsoft.com/en-us/research/project/autogen/