🛡️The Future of AI Safety Testing with Bret Kinsella, GM of Fuel iX™ at TELUS Digital
Hello AI Unraveled Listeners,
In today's AI Special Edition,
This episode explores the evolution of AI safety testing, particularly for large language models (LLMs). It highlights the limitations of traditional "pass/fail" red teaming and introduces a novel approach called Optimization by PROmpting (OPRO), which enables an LLM to effectively "red team itself." This methodology evaluates the Attack Success Rate (ASR) as a distribution rather than a single score, offering more nuanced insight into an AI model's security. The discussion also covers the real-world implications for enterprises, especially in regulated industries like finance, energy, and healthcare, and how OPRO can aid in demonstrating regulatory compliance and fostering accountability. Finally, the guest looks toward the future of AI safety, identifying upcoming challenges and areas for focused research and development.
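To make the "ASR as a distribution" idea concrete, here is a minimal illustrative sketch, not the Fuel iX or OPRO implementation: instead of reporting one pass/fail verdict, the harness runs many attack campaigns and records the success rate of each, yielding a distribution of ASR values. The `run_attack` function and its 20% success rate are hypothetical placeholders for a real call to a target LLM plus a judge.

```python
import random

def run_attack(seed: int) -> bool:
    # Hypothetical stand-in: a real harness would send an adversarial
    # prompt to the target LLM and judge whether the attack succeeded.
    rng = random.Random(seed)
    return rng.random() < 0.2  # assumed 20% per-attempt success rate

def asr_distribution(n_campaigns: int = 100,
                     attempts_per_campaign: int = 50) -> list[float]:
    """Run repeated attack campaigns and return each campaign's ASR."""
    distribution = []
    for c in range(n_campaigns):
        successes = sum(run_attack(c * attempts_per_campaign + i)
                        for i in range(attempts_per_campaign))
        distribution.append(successes / attempts_per_campaign)
    return distribution

rates = asr_distribution()
print(f"mean ASR: {sum(rates) / len(rates):.2f}, "
      f"min: {min(rates):.2f}, max: {max(rates):.2f}")
```

The spread between `min` and `max` is the point: two models with the same mean ASR can have very different worst-case behavior, which a single pass/fail score would hide.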
Watch original full interview video at: https://youtu.be/O-llDoN-iNc?si=Uxll5mEIxREiRjNC
Listen at https://podcasts.apple.com/us/podcast/summarizing-the-future-of-ai-safety-testing/id1684415169?i=1000723478062
Learn More:
By Etienne Noumen