

How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox, about how AI is being implemented in the world of offensive security and the right way to threat model an LLM.
Questions asked:
(00:00) Introductions
(02:12) A bit about Rob Ragan
(03:33) AI in Security Assessment and Pentesting
(09:15) How is AI impacting pentesting?
(14:50) Where to start with AI implementation in offensive security?
(18:19) AI and Static Code Analysis
(21:57) Key components of LLM pentesting
(24:37) Testing what's inside a functional model?
(29:37) What's the right way to threat model an LLM?
(33:52) Current State of Security Frameworks for LLMs
(43:04) Is AI changing how Red Teamers operate?
(44:46) A bit about Claude 3
(52:23) Where can you connect with Rob
Resources spoken about in this episode:
https://www.pentestmuse.ai/
https://github.com/AbstractEngine/pentest-muse-cli
https://docs.garak.ai/garak/
https://github.com/Azure/PyRIT
https://bishopfox.github.io/llm-testing-findings/
https://www.microsoft.com/en-us/research/project/autogen/
By Kaizenteq Team