
Sign up to save your podcasts
Or


Send us Fan Mail
If you’re building AI assistants, one question is critical: Have you tried to break them?
In this episode, I explore red-teaming, the practice of testing AI systems with adversarial scenarios to uncover vulnerabilities, biases, or unsafe behaviours before real users do.
Learn why red-teaming matters for security, trust, compliance, and continuous improvement, and discover six practical steps to test your ChatGPT-based assistant.
Want to go deeper on AI?
📖 Buy AI Playbook
📩 Get my weekly LinkedIn newsletter, Human in the Loop.
🎓 Level up with the CPD Accredited AI Playbook Diploma
📞 Let's talk about AI training for your team: digitaltraining.ie or publicsectormarketingpros.com if you are in government or publics sector.
By Joanne SweeneySend us Fan Mail
If you’re building AI assistants, one question is critical: Have you tried to break them?
In this episode, I explore red-teaming, the practice of testing AI systems with adversarial scenarios to uncover vulnerabilities, biases, or unsafe behaviours before real users do.
Learn why red-teaming matters for security, trust, compliance, and continuous improvement, and discover six practical steps to test your ChatGPT-based assistant.
Want to go deeper on AI?
📖 Buy AI Playbook
📩 Get my weekly LinkedIn newsletter, Human in the Loop.
🎓 Level up with the CPD Accredited AI Playbook Diploma
📞 Let's talk about AI training for your team: digitaltraining.ie or publicsectormarketingpros.com if you are in government or publics sector.