
How can LLMs be valuable to developers as assistants in finding and fixing insecure code? There are significant implications in trusting AI or LLMs not only to find vulns, but to produce code that fixes the underlying problem without changing an app's intended behavior. Stuart McClure explains how combining LLMs with agents and retrieval-augmented generation (RAG) helps make AI-influenced tools more effective and useful in the context that developers need -- writing secure code.
Show Notes: https://securityweekly.com/asw-291
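
To make the RAG-plus-LLM idea mentioned above concrete, here is a minimal sketch of retrieval-augmented prompting for secure-code review. Everything in it is an assumption for illustration, not the tooling discussed in the episode: the guidance snippets, the naive keyword retriever, and the stubbed call_llm() function all stand in for a real vector store and model API.

```python
from dataclasses import dataclass

@dataclass
class GuidanceDoc:
    topic: str
    text: str

# Toy "knowledge base" of secure-coding guidance (illustrative content only).
GUIDANCE = [
    GuidanceDoc("sql", "Use parameterized queries; never build SQL by string concatenation."),
    GuidanceDoc("path", "Validate and normalize file paths; reject traversal sequences like '..'."),
    GuidanceDoc("secrets", "Load credentials from the environment or a vault, not source code."),
]

def retrieve(code: str, k: int = 2) -> list[GuidanceDoc]:
    """Naive keyword retriever standing in for a real embedding/vector search."""
    keywords = {
        "sql": ("execute", "select", "query"),
        "path": ("open(", "path", "filename"),
        "secrets": ("password", "api_key", "token"),
    }
    lowered = code.lower()
    scored = []
    for doc in GUIDANCE:
        score = sum(word in lowered for word in keywords[doc.topic])
        scored.append((score, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an HTTP request to a hosted model)."""
    return "<model response with findings and a proposed patch>"

def review(code: str) -> str:
    """Combine retrieved guidance with the code under review into one prompt."""
    context = "\n".join(f"- {doc.text}" for doc in retrieve(code))
    prompt = (
        "You are reviewing code for security flaws.\n"
        f"Relevant guidance:\n{context}\n\n"
        "Identify vulnerabilities and propose a fix that preserves the "
        "function's intended behavior:\n\n"
        f"{code}\n"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    snippet = 'cursor.execute("SELECT * FROM users WHERE name = \'" + name + "\'")'
    print(review(snippet))
```

The point of the sketch is the shape of the pipeline: retrieved secure-coding context is placed alongside the code so the model's suggested fix is grounded in guidance rather than generated from the prompt alone, which is the gist of pairing LLMs with agents and RAG for this use case.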