


Are large language models susceptible to word magic? Or is there something so inherently disturbing to them about Zalgo text that just talking about it makes them twitchy? In this episode we'll look at a strange incident with Copilot Chat where the mere mention of Zalgo text (not actually inputting it!) led to cascading glitches and culminated in a jailbreaking near-miss. Join the Witch of Glitch in conversation with data scientist Shiva Banasaz Nouri for a deep dive into tokenising, LLM conversational boundaries and what it is that makes Zalgo such a digital trickster.
By Witch of Glitch