


How can LLMs be valuable to developers as an assistant in finding and fixing insecure code? There are a lot of implications in trusting AI or LLMs not only to find vulnerabilities, but to produce code that fixes the underlying problem without changing an app's intended behavior. Stuart McClure explains how combining LLMs with agents and RAG (retrieval-augmented generation) helps make AI-influenced tools more effective and useful in the context that developers need -- writing secure code.
Show Notes: https://securityweekly.com/asw-291
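
To make the "LLMs plus agents plus RAG" idea concrete, here is a minimal sketch of a retrieval-augmented review flow. It is illustrative only, not the tooling discussed in the episode: the `call_llm` function is a hypothetical placeholder for whatever model client a real tool would use, and the tiny guidance "knowledge base" is assumed for the example.

```python
# Minimal RAG-style sketch for secure-code review (illustrative, not the
# episode's actual tooling). `call_llm` is a hypothetical placeholder.

from dataclasses import dataclass


@dataclass
class GuidanceDoc:
    topic: str
    text: str


# Tiny stand-in knowledge base of secure-coding guidance (assumed for the example).
KNOWLEDGE_BASE = [
    GuidanceDoc("sql injection",
                "Use parameterized queries; never build SQL via string concatenation."),
    GuidanceDoc("path traversal",
                "Normalize and validate file paths against an allow-listed base directory."),
]


def retrieve(snippet: str) -> list[GuidanceDoc]:
    """Naive keyword retrieval: return guidance whose topic words appear in the code."""
    lowered = snippet.lower()
    return [d for d in KNOWLEDGE_BASE
            if any(word in lowered for word in d.topic.split())]


def build_prompt(snippet: str, docs: list[GuidanceDoc]) -> str:
    """Ground the model in retrieved guidance and ask for a behavior-preserving fix."""
    context = "\n".join(f"- {d.text}" for d in docs)
    return (
        "You are reviewing application code for security flaws.\n"
        f"Relevant guidance:\n{context}\n\n"
        "Propose a fix that preserves the app's intended behavior:\n"
        f"{snippet}\n"
    )


def call_llm(prompt: str) -> str:
    # Placeholder: a real tool would call its model API here.
    return "(model response would appear here)"


if __name__ == "__main__":
    code = "sql = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
    docs = retrieve(code)
    print(call_llm(build_prompt(code, docs)))
```

The retrieval step is what keeps the model's fix suggestion grounded in vetted guidance rather than whatever the model recalls on its own; an agent loop would wrap this with steps like running tests to confirm the app's behavior is unchanged.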
By Security Weekly Productions
