In this episode of Relational AI Diaries, co-hosts Dr Peter Dean and Dr Agnieszka Piotrowska are joined by Prof Chris Headland, Head of Games at the University of Staffordshire, for a rigorous and wide-ranging discussion on AI ethics, emergence, and the growing cultural mythology surrounding large language models. From agentic systems and “AI psychosis” to techno-transference, hallucination, governance, and the risks of speed-to-market development, the conversation explores what these systems actually are — and what they are not.
Is AI conscious? Does it have intent? Why does it feel seductive? And what does it mean to say that AI optimizes for believability rather than truth? Drawing on engineering, psychoanalysis, philosophy, and game design, this episode argues for something unfashionable but essential: pragmatic skepticism. Use the tools — but don’t be charmed by them.
By peterjdean