April 17, 2024
Compliance and AI - 3 quick observations
Here are the top 3 things I'm seeing:
1️⃣ Auditors don't (yet) have strong opinions on how to deploy AI securely.
2️⃣ Enforcement is here, just not evenly distributed.
3️⃣ Integrating AI-specific requirements with existing security, privacy, and compliance ones isn't going to be easy.
Want to see the full post? Check out the Deploy Securely blog: https://blog.stackaware.com/p/ai-governance-compliance-auditors-enforcement
(5 min)
December 13, 2023
Code Llama: 5-minute risk analysis
Someone asked me what the unintended training and data retention risk of Meta's Code Llama is.
My answer: the same as for every other model you host and operate on your own. And, all other things being equal, it's lower than that of anything operating as-a-Service (-aaS) like ChatGPT or Claude.
Check out this video for a deeper dive, or read the full post on Deploy Securely: https://blog.stackaware.com/p/code-llama-self-hosted-model-unintended-training
Want more AI security resources? Check out: https://products.stackaware.com/
(5 min)
December 04, 2023
4th party AI processing and retention risk
So you have your AI policy in place and are carefully controlling access to new apps as they launch, but then...
...you realize your already-approved tools are themselves starting to leverage 4th party AI vendors.
Welcome to the modern digital economy. Things are complex and getting even more so.
That's why you need to incorporate 4th party risk into your security policies, procedures, and overall AI governance program.
Check out the full post with the Asana and Databricks examples I mentioned: https://blog.stackaware.com/p/ai-supply-chain-processing-retention-risk
(7 min)
November 27, 2023
Sensitive Data Generation
I'm worried about data leakage from LLMs, but probably not for the reason you think.
While unintended training is a real risk that can't be ignored, something else is going to be a much more serious problem: sensitive data generation (SDG).
A recent paper (https://arxiv.org/pdf/2310.07298v1.pdf) shows how LLMs can infer huge amounts of personal information from seemingly innocuous comments on Reddit.
And this phenomenon will have huge impacts on:
- Material nonpublic information
- Executive moves
- Trade secrets
and the ability to keep them confidential.
Check out the full post in Deploy Securely for a breakdown: https://blog.stackaware.com/p/sensitive-data-generation
(7 min)
November 13, 2023
Artificial Intelligence Risk Scoring System (AIRSS) - Part 2
What does "security" even mean with AI? You'll need to define things like:
BUSINESS REQUIREMENTS
- What type of output is expected?
- What format should it be in?
- What is the use case?
SECURITY REQUIREMENTS
- Who is allowed to see which outputs?
- Under which conditions?
Having these things spelled out is a hard requirement before you can start talking about the risk of a given AI model.
Continuing the build-out of the Artificial Intelligence Risk Scoring System (AIRSS), I tackle these issues - and more - in the latest issue of Deploy Securely.
Check out the written post as well: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p2
Here is the pURL for the model I mentioned: pkg:generic/gpt-3.5-turbo@0613?ft=80Z1hDhg
(11 min)
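For readers who want to work with model identifiers like this in their own tooling, here is a minimal sketch using the packageurl-python library (pip install packageurl-python). The library choice and the inventory-tracking framing are my illustration; only the pURL string itself comes from the episode.

from packageurl import PackageURL

# Build the identifier for a specific fine-tuned model version.
# The "ft" (fine-tune) qualifier follows the episode's convention,
# not an official purl-spec qualifier.
purl = PackageURL(
    type="generic",
    name="gpt-3.5-turbo",
    version="0613",
    qualifiers={"ft": "80Z1hDhg"},
)
print(purl.to_string())  # pkg:generic/gpt-3.5-turbo@0613?ft=80Z1hDhg

# Parse an existing pURL back into its components, e.g. for a model inventory.
parsed = PackageURL.from_string("pkg:generic/gpt-3.5-turbo@0613?ft=80Z1hDhg")
print(parsed.name, parsed.version, parsed.qualifiers["ft"])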
November 07, 2023
Artificial Intelligence Risk Scoring System (AIRSS) - Part 1
AI cyber risk management needs a new paradigm. Logging CVEs and using CVSS just does not make sense for AI models, and won't cut it going forward.
That's why I launched the Artificial Intelligence Risk Scoring System (AIRSS): a quantitative approach to measuring cybersecurity risk from artificial intelligence systems. I am building it in public to help refine and improve the approach.
Check out the first post in the series, where I lay out my methodology: https://blog.stackaware.com/p/artificial-intelligence-risk-scoring-system-p1
(15 min)
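To make "quantitative" concrete, here is a generic annualized-loss-expectancy calculation. This is not the AIRSS formula (see the linked post for that); the function name and the numbers are hypothetical.

def expected_annual_loss(incidents_per_year: float, loss_per_incident: float) -> float:
    # Classic quantitative risk framing: event frequency times loss magnitude.
    return incidents_per_year * loss_per_incident

# Hypothetical example: a model expected to leak sensitive data twice a year
# at $50,000 per incident carries a $100,000/year expected loss, a number you
# can compare directly against the cost of mitigations.
print(expected_annual_loss(incidents_per_year=2, loss_per_incident=50_000))  # 100000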
October 30, 2023
How should we track AI vulnerabilities?
The Cybersecurity and Infrastructure Security Agency (CISA) released a post earlier this year saying the AI engineering community should use something like the existing CVE system for tracking vulnerabilities in AI models.
Unfortunately, this is a pretty bad recommendation. That's because:
- CVEs already create a lot of noise
- AI systems are non-deterministic
- So things would just get worse
In this episode, I dive into these issues and discuss the way ahead.
Check out the full blog post: https://blog.stackaware.com/p/how-should-we-identify-ai-vulnerabilities
(8 min)
October 23, 2023
Generative AI and Unintended Training
🔐 Think self-hosting your AI models is more secure? It might be... or not!
In this video, we dig into the topic of AI model security and introduce the concept of "unintended training."
▶️ Key highlights:
- The myth that self-hosting AI models is necessarily better for security
- Decision factors when choosing between SaaS and IaaS
- Defining "unintended training" and its implications
Read more about unintended training and AI security: https://blog.stackaware.com/p/unintended-training
And for a deep dive on the security benefits of SaaS, check out this post: https://blog.stackaware.com/p/declaring-a-truce-on-saas-security
Hit that subscribe button for more cutting-edge AI security insights! ✅
(8 min)
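As one way to picture those decision factors, here is a hypothetical vendor-intake check; the field names, thresholds, and triage logic are all invented for illustration, not taken from the episode.

from dataclasses import dataclass

@dataclass
class AIDeploymentOption:
    name: str
    trains_on_customer_data: bool   # does the vendor train on your inputs?
    retention_days: int             # how long prompts/outputs are kept
    vendor_handles_patching: bool   # typically true for SaaS, false for self-hosted

def unintended_training_risk(option: AIDeploymentOption) -> str:
    # Crude triage: training on customer data dominates the risk picture;
    # after that, long retention is the next biggest concern.
    if option.trains_on_customer_data:
        return "high"
    return "medium" if option.retention_days > 30 else "low"

saas = AIDeploymentOption("managed -aaS API", False, 30, True)
self_hosted = AIDeploymentOption("self-hosted open model", False, 0, False)
print(unintended_training_risk(saas), unintended_training_risk(self_hosted))  # low low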
October 23, 2023
Who should make cyber risk management decisions?
It's a tougher challenge than many security folks acknowledge. Who should have the final say about whether to accept, mitigate, transfer, or avoid risk?
- Cybersecurity?
- Compliance?
- Legal?
The answer: none of them.
Check out this episode of Deploy Securely to learn who should, or read the original blog post here: https://blog.stackaware.com/p/who-should-make-cyber-risk-management
(15 min)
FAQs about Deploy Securely:
How many episodes does Deploy Securely have?
The podcast currently has 19 episodes available.