'We should improve our critical thinking' — Chiara Gallese
As AI systems move from experimentation to infrastructure, governance becomes the real test.
In Episode 22 of The AI Lyceum, Samraj speaks with Dr Chiara Gallese — Philosophy PhD, Adjunct Professor of Digital Ethics at Collegio Internazionale Ca’ Foscari, Researcher at the Tilburg Institute for Law, Technology, and Society (TILT), and Academic Expert in the European Commission’s AI Transparency Code of Practice Working Groups. A lawyer and privacy consultant for multinationals and banks since 2015, she now focuses her research on the legal aspects of artificial intelligence, the ethics of data use, and data protection.
She is also a TEDx speaker.
They explore what fairness means once AI systems are deployed at scale, where bias truly enters AI (data, model, or deployment), and how transparency obligations under the EU AI Act shape real institutional practice.
Chiara explains the difference between stochastic and deterministic systems, why ignoring bias is not just unethical but poor engineering, and why governance must extend beyond frameworks into everyday use.
The conversation also examines emotional attachment to generative systems, disclosure dilemmas, and why strengthening human judgment may be just as important as improving the models themselves.
This is a conversation about responsibility, constitutional values, transparency, and governing intelligence in the real world.
Episode Highlights
0:00 ➤ Intro / Guest Welcome
2:40 ➤ Does AI ethics improve business outcomes?
8:55 ➤ Inside the EU AI Transparency Code of Practice
15:30 ➤ Stochastic vs deterministic systems
22:10 ➤ Where bias enters AI systems
30:45 ➤ Emotional intelligence and attachment
38:20 ➤ Disclosure, labelling, and stigma
45:10 ➤ Critical thinking in the AI era
50:30 ➤ Final reflections
Key Questions Explored
➤ What does fairness mean in AI governance?
➤ Where does bias originate in AI systems?
➤ Can AI emotional intelligence be trusted?
➤ Should AI-generated content always be disclosed?
➤ Is governance about frameworks or lived practice?
➤ What must humans preserve as AI advances?
Listen on:
YouTube – https://www.youtube.com/@The.AI.Lyceum
Spotify – https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
Apple – https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
Amazon – https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
Website – https://theailyceum.com
Hosted by Samraj Matharu — Certified AI Ethicist (Oxford) | Visiting Lecturer (Durham)
#AI #AIAct #AIGovernance #DigitalEthics #Bias #Fairness #Transparency #ResponsibleAI #CriticalThinking