A major premise of appsec is figuring out effective ways to answer the question, "What security flaws are in this code?" The nature of the question doesn't really change depending on who or what wrote the code. In other words, LLMs writing code really just means there's more code to secure. So, what about using LLMs to find security flaws? Just how effective and efficient are they?
We talk with Adrian Sanabria and John Kinsella about the latest appsec articles that show a range of results from finding memory corruption bugs in open source software to spending an inordinate amount of manual effort validating persuasive, but ultimately incorrect, security findings from an LLM.
Show Notes: https://securityweekly.com/asw-370
By Security Weekly Productions
4.7 (3535 ratings)