
In the second episode of ChAI Chat, host Jomar Gacoscos, an information security professional, explores AI risks and safety concerns in an era of rapid technological advancement. He discusses how AI can be manipulated, citing examples like the "Do Anything Now" (DAN) prompt, which bypassed ChatGPT’s safeguards, and a Chevrolet dealership chatbot that was tricked into agreeing to heavily discounted, supposedly legally binding deals. The episode also highlights AI hallucinations in OpenAI’s Whisper transcription tool, which has been found to fabricate content in medical transcriptions with potentially dangerous consequences. Gacoscos emphasizes the importance of learning from real-world case studies and plans to feature guest experts to discuss AI security challenges and mitigation strategies.