
A major premise of appsec is figuring out effective ways to answer the question, "What security flaws are in this code?" The nature of the question doesn't really change depending on who or what wrote the code. In other words, LLMs writing code really just means there's more code to secure. So, what about using LLMs to find security flaws? Just how effective and efficient are they?
We talk with Adrian Sanabria and John Kinsella about the latest appsec articles that show a range of results from finding memory corruption bugs in open source software to spending an inordinate amount of manual effort validating persuasive, but ultimately incorrect, security findings from an LLM.
Visit https://www.securityweekly.com/asw for all the latest episodes!
Show Notes: https://securityweekly.com/asw-370
By Mike Shema