Share AI CyberSecurity Podcast
Share to email
Share to Facebook
Share to X
What were the key AI Cybersecurity trends at BlackHat USA? In this episode of the AI Cybersecurity Podcast, hosts Ashish Rajan and Caleb Sima dive into the key insights from Black Hat 2024. From the AI Summit to the CISO Summit, they explore the most critical themes shaping the cybersecurity landscape, including deepfakes, AI in cybersecurity tools, and automation. The episode also features discussions on the rising concerns among CISOs regarding AI platforms and what these mean for security leaders.
Questions asked:
(00:00) Introduction
(02:49) Black Hat, DEF CON and RSA Conference
(07:18) Black Hat CISO Summit and CISO Concerns
(11:14) Use Cases for AI in Cybersecurity
(21:16) Are people tired of AI?
(21:40) AI is mostly a side feature
(25:06) LLM Firewalls and Access Management
(28:16) The data security challenge in AI
(29:28) The trend with Deepfakes
(35:28) The trend of pentest automation
(38:48) The role of an AI Security Engineer
In this episode of the AI Cybersecurity Podcast, we dive deep into the latest findings from Google's DeepMind report on the misuse of generative AI. Hosts Ashish and Caleb explore over 200 real-world cases of AI misuse across critical sectors like healthcare, education, and public services. They discuss how AI tools are being used to create deepfakes, fake content, and more, often with minimal technical expertise. They analyze these threats from a CISO's perspective but also include an intriguing comparison between human analysis and AI-generated insights using tools like ChatGPT and Anthropic's Claude. From the rise of AI-powered impersonation to the manipulation of public opinion, this episode uncovers the real dangers posed by generative AI in today’s world.
Questions asked:
(00:00) Introduction
(03:39) Generative Multimodal Artificial Intelligence
(09:16) Introduction to the report
(17:07) Enterprise Compromise of GenAI systems
(20:23) Gen AI Systems Compromise
(27:11) Human vs Machine
Resources spoken about during the episode:
Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
How much can we really trust AI-generated code more over Human generated Code today? How does AI-Generated code compare to Human generated code in 2024? Caleb and Ashish spoke to Guy Podjarny, Founder and CEO at Tessl about the evolving world of AI generated code, the current state and future trajectory of AI in software development. They discuss the reliability of AI-generated code compared to human-generated code, the potential security risks, and the necessary precautions organizations must take to safeguard their systems.
Guy has also recently launched his own podcast with Simon Maple called The AI Native Dev, which you can check out if you are interested in hearing more about the AI Native development space.
Questions asked:
(00:00) Introduction
(02:36) What is AI Generated Code?
(03:45) Should we trust AI Generated Code?
(14:34) The current usage of AI in Code Generated
(18:27) Securing AI Generated Code
(23:44) Reality of Security AI Generated Code Today
(30:22) The evolution of Security Testing
(37:36) Where to start with AI Security today?
(50:18) Evolution of the broader cybersecurity industry with AI
(54:03) The Positives of AI for Cybersecurity
(01:00:48) The startup Landscape around AI
(01:03:16) The future of AppSec
(01:05:53) The future of security with AI
Which AI Security Framework is right for you? As AI is gaining momentum, we are starting to see quite a few frameworks appearing but the question is, which one should you start with and can AI help you decide! Caleb and Ashish tackle this challenge head-on, comparing three major AI security frameworks: Databricks, NIST, and OWASP Top 10. They break down the key components of each framework, discuss practical implementation strategies, and provide actionable insights for CISOs and security leaders. They may have had some help along the way.
Questions asked:
(00:00) Introduction
(02:54) Databricks AI Security Framework (DASF)
(06: 38) Top 3 things from DASF by Claude 3
(07:32) Top 3 things from DASF by ChatGPT
(08:46) DASF Use Case Scenario
(11:01) Thoughts on DASF
(13:18) OWASP Top 10 for LLM Models
(20:12) Google's Secure AI Framework (SAIF)
(21:31) NIST AI Risk Management Framework
(25:18) Claude 3 summarises NIST RMF for 5 year old
(28:00) ChatGPT compares NIST RMF and NIST CSF
(28:48) How do the frameworks compare?
(36:46) Summary of all the frameworks
Resources from this episode:
Databricks AI Security Framework (DASF)
OWASP Top 10 for LLM
NIST AI Risk Management Framework
Google Secure AI Framework
What is the current state and future potential of AI Security? This special episode was recorded LIVE at BSidesSF (thats why its a little noisy), as we were amongst all the exciting action. Clint Gibler, Caleb Sima and Ashish Rajan sat down to talk about practical uses of AI today, how AI will transform security operations, if AI can be trusted to manage permissions and the importance of understanding AI's limitations and strengths.
Questions asked:
(00:00) Introduction
(02:24) A bit about Clint Gibler
(03:10) What top of mind with AI Security?
(04:13) tldr of Clint’s BSide SF Talk
(08:33) AI Summarisation of Technical Content
(09:47) Clint’s favourite part of the talk - Fuzzing
(15:30) Questions Clint got about his talk
(17:11) Human oversight and AI
(25:04) Perfection getting in the way of good
(30:15) AI on the engineering side
(36:31) Predictions for AI Security
Resources from this coversation:
Caleb's Keynote at BSides SF
Clint's Newsletter
Key AI Security takeaways from RSA Conference 2024, BSides SF 2024 and all the fringe activities that happen in SF during that week. Caleb and Ashish were speakers, panelists, participating in several events during that week and this episode captures all the highlights from all the conversations they had and they trends they saw during what they dubbed the "Cybersecurity Fringe Festival” in SF.
Questions asked:
(00:00) Introduction
(02:53) Caleb’s Keynote at BSides SF
(05:14) Clint Gibler’s Bsides SF Talk
(06:28) What are BSides Conferences?
(13:55) Cybersecurity Fringe Festival
(17:47) RSAC 2024 was busy
(19:05) AI Security at RSAC 2024
(23:03) RSAC Innovation Sandbox
(27:41) CSA AI Summit
(28:43) Interesting AI Talks at RSAC
(30:35) AI conversations at RSAC
(32:32) AI Native Security
(33:02) Data Leakage in AI Security
(30:35) Is AI Security all that different?
(39:26) How to filter vendors selling AI Solutions?
How can AI change a Security Analyst's workflow? Ashish and Caleb caught up with Ely Kahn, VP of Product at SentinelOne, to discuss the revolutionary impact of generative AI on cybersecurity. Ely spoke about the challenges and solutions in integrating AI into cybersecurity operations, highlighting how can simplify complex processes and empowering junior to mid-tier analysts.
Questions asked:
(00:00) Introduction
(03:27) A bit about Ely Kahn
(04:29) Current State of AI in Cybersecurity
(06:45) How AI could impact Cybersecurity User Workflow?
(08:37) What are some of the concerns with such a model?
(14:22) How does it compare to a analyst not using this model?
(21:41) Whats stopping models for going into autopilot?
(30:14) The reasoning for using multiple LLMs
(34:24) ChatGPT vs Anthropic vs Mistral
You can discover more about SentinelOne's Purple AI here!
How is AI transforming traditional approaches to offensive security, pentesting, security posture management, security assessment, and even code security? Caleb and Ashish spoke to Rob Ragan, Principal Technology Strategist at Bishop Fox about how AI is being implemented in the world of offensive security and what the right way is to threat model an LLM.
Questions asked:
(00:00) Introductions
(02:12) A bit about Rob Ragan
(03:33) AI in Security Assessment and Pentesting
(09:15) How is AI impacting pentesting?
(14:50 )Where to start with AI implementation in offensive Security?
(18:19) AI and Static Code Analysis
(21:57) Key components of LLM pentesting
(24:37) Testing whats inside a functional model?
(29:37) Whats the right way to threat model an LLM?
(33:52) Current State of Security Frameworks for LLMs
(43:04) Is AI changing how Red Teamers operate?
(44:46) A bit about Claude 3
(52:23) Where can you connect with Rob
Resources spoken about in this episode:
https://www.pentestmuse.ai/
https://github.com/AbstractEngine/pentest-muse-cli
https://docs.garak.ai/garak/
https://github.com/Azure/PyRIT
https://bishopfox.github.io/llm-testing-findings/
https://www.microsoft.com/en-us/research/project/autogen/
What is the current reality for AI automation in Cybersecurity? Caleb and Ashish spoke to Edward Wu, founder and CEO of Dropzone AI about the current capabilities and limitations of AI technologies, particularly large language models (LLMs), in the cybersecurity domain. From the challenges of achieving true automation to the nuanced process of training AI systems for cyber defense, Edward, Caleb and Ashish shared their insights into the complexities of implementing AI and the importance of precision in AI prompt engineering, the critical role of reference data in AI performance, and how cybersecurity professionals can leverage AI to amplify their defense capabilities without expanding their teams.
Questions asked:
(00:00) Introduction
(05:22) A bit about Edward Wu
(08:31) What is a LLM?
(11:36) Why have we not seen entreprise ready automation in cybersecurity?
(14:37) Distilling the AI noise in the vendor landscape
(18:02) Solving challenges with using AI in enterprise internally
(21:35) How to deal with GenAI Hallucinations?
(27:03) Protecting customer data from a RAG perspective
(29:12) Protecting your own data from being used to train models
(34:47) What skillset is required in team to build own cybersecurity LLMs?
(38:50) Learn how to prompt engineer effectively
There is a complex interplay between innovation and security in the age of GenAI. As the digital landscape evolves at an unprecedented pace, Daniel, Caleb and Ashish share their insights on the challenges and opportunities that come with integrating AI into cybersecurity strategies
Caleb challenges the current trajectory of safety mechanisms in technology and how overregulation may inhibit innovation and the advancement of AI's capabilities. Daniel Miessler, on the other hand, emphasizes the necessity of accepting technological inevitabilities and adapting to live in a world shaped by AI. Together, they explore the potential overreach in AI safety measures and discuss how companies can navigate the fine line between fostering innovation and ensuring security.
Questions asked:
(00:00) Introduction
(03:19) Maintaining Balance of Innovation and Security
(06:21) Uncensored LLM Models
(09:32) Key Considerations for Internal LLM Models
(12:23) Balance between Security and Innovation with GenAI
(16:03) Enterprise risk with GenAI
(25:53) How to address enterprise risk with GenAI?
(28:12) Threat Modelling LLM Models
The podcast currently has 16 episodes available.
353 Listeners
606 Listeners
357 Listeners
931 Listeners
136 Listeners
175 Listeners
182 Listeners
175 Listeners
65 Listeners
197 Listeners
7,133 Listeners
102 Listeners
5,178 Listeners
30 Listeners
327 Listeners