How can LLMs be valuable to developers as assistants in finding and fixing insecure code? There are many implications in trusting AI or LLMs not only to find vulns, but to produce code that fixes an underlying problem without changing an app's intended behavior. Stuart McClure explains how combining LLMs with agents and RAG (retrieval-augmented generation) helps make AI-influenced tools more effective and useful in the context developers need most -- writing secure code.
Cloudflare's 2024 appsec report, reasoning about the Cyber Reasoning Systems for the upcoming AIxCC semifinals at DEF CON, lessons in secure design from post-quantum cryptography, and more!
Visit https://www.securityweekly.com/asw for all the latest episodes!
Show Notes: https://securityweekly.com/asw-291
By Security Weekly Productions · 4.4 (208 ratings)