

🛡️The Future of AI Safety Testing with Bret Kinsella, GM of Fuel iX™ at TELUS Digital
Hello AI Unraveled Listeners,
In today's AI Special Edition, we explore the evolution of AI safety testing, particularly for large language models (LLMs). The episode highlights the limitations of traditional "pass/fail" red teaming and introduces a novel approach called Optimization by PROmpting (OPRO), which enables an LLM to effectively "red team itself." This methodology evaluates the Attack Success Rate (ASR) as a distribution, offering more nuanced insight into an AI model's security. The discussion also covers the real-world implications for enterprises, especially in regulated industries such as finance, energy, and healthcare, and how OPRO can help demonstrate regulatory compliance and foster accountability. Finally, the guest looks toward the future of AI safety, identifying upcoming challenges and areas for focused research and development.
Watch original full interview video at: https://youtu.be/O-llDoN-iNc?si=Uxll5mEIxREiRjNC
Listen at https://podcasts.apple.com/us/podcast/summarizing-the-future-of-ai-safety-testing/id1684415169?i=1000723478062
By Etienne Noumen