
Get featured on the show by leaving us a voicemail: https://bit.ly/MIPVM
FULL SHOW NOTES
https://www.microsoftinnovationpodcast.com/681
The team explores the ethical implications of teaching AI jailbreaking techniques and conducting red team testing on large language models, balancing educational value against potential misuse. They dive into personal experiments with bypassing AI safeguards, revealing both creative workarounds and robust protections in modern systems.
TAKEAWAYS
• Debate on whether demonstrating AI vulnerabilities is responsible education or potentially dangerous knowledge sharing
• Psychological impact on security professionals who regularly simulate malicious behaviors to test AI safety
• Real examples of attempts to "jailbreak" AI systems through fantasy storytelling and other creative prompts
• Legal gray areas in AI security testing, and why organizations need dedicated legal support to navigate them
• Personal experiences with testing AI guardrails on different models and their varying levels of protection
• Prediction that Microsoft's per-user licensing model may shift to consumption-based pricing as AI agents take over tasks from human users
• Observations on Microsoft's Business Applications division growing to approximately $8 billion
• Discussion of how M365 Copilot is transforming productivity, particularly for analyzing sales calls and customer interactions
Check out this episode for more deep dives into AI safety, security, and the future of technology in business.
Microsoft 365 Copilot Adoption is a Microsoft Press book for leaders and consultants. It shows how to identify high-value use cases, set guardrails, enable champions, and measure impact, so Copilot sticks. Practical frameworks, checklists, and metrics you can use this month. Get the book: https://bit.ly/CopilotAdoption
Support the show
If you want to get in touch with me, you can message me here on LinkedIn.
Thanks for listening 🚀 - Mark Smith