The AI Lyceum

#21 – Understanding AI Ethics: Trust and Safety [Savneet Singh, AI Ethicist]



'The machine is there to support you — not to replace your judgment. You are the one in control' — Savneet Singh

As AI becomes more human-like, trust becomes harder to define — and more critical to get right.

In this episode, Samraj speaks with Savneet Singh, who joins not in her capacity as trust and safety lead at a top tech company but as Visiting Lecturer at Emory University, to discuss what it really means to trust AI systems that increasingly sound, remember, and respond like humans.

Savneet breaks down why trust in AI isn't about how human it feels — it's about predictability, transparency, and alignment with human values. They explore why AI must remain a co-pilot, not an autopilot, why labelling AI-generated content matters, and how misinformation spreads not through conspiracy, but through everyday digital behaviour.

The conversation tackles 'AI psychosis', emotional attachment to non-conscious systems, the ethics of AI companions, and why accountability must sit with developers, deployers, and users — not the machine. This is a conversation about responsibility, boundaries, and keeping humans firmly in control as AI becomes more powerful.

WHAT YOU'LL LEARN

→ What trust actually means in the context of AI

→ Why human-in-the-loop design is non-negotiable

→ The difference between misinformation and disinformation

→ Why AI companions risk emotional substitution

→ How "AI psychosis" emerges through prolonged interaction

→ Why labelling AI content builds trust

→ Where accountability must sit when AI goes wrong

EPISODE HIGHLIGHTS

0:00 ➤ Intro

1:50 ➤ What does "trust" really mean in AI?

4:41 ➤ Transparency, guardrails, and human-in-the-loop

7:22 ➤ Trust vs confidence

9:33 ➤ AI-generated journalism and fabricated facts

11:15 ➤ Misinformation, deception, and human responsibility

14:49 ➤ AI psychosis and emotional attachment

21:07 ➤ Losing clarity about the human–AI relationship

24:27 ➤ Supportive tools vs emotional substitution

28:49 ➤ Guardrails, free will, and ethics by design

30:20 ➤ "Digital littering" and everyday ethics

33:54 ➤ Why AI literacy matters for all ages

36:26 ➤ One practical guardrail every business should use

39:51 ➤ Trusting AI when you doubt your own judgment

42:26 ➤ Teaching children how to use AI responsibly

46:55 ➤ Why AI agents still feel risky

52:22 ➤ The accountability gap

54:00 ➤ Final message: humans must stay in control

🔗 LISTEN, WATCH & CONNECT

🌐 Join the 1K+ Community: https://linktr.ee/theailyceum

💻 Website: https://theailyceum.com

▶️ YouTube: https://www.youtube.com/@The.AI.Lyceum

🎧 Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza

🎧 Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167

🎧 Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum

ABOUT THE AI LYCEUM

The AI Lyceum is a global community exploring AI, ethics, philosophy, and human potential — hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.

#AI #philosophy #trust #google

