

🙏 What does it take to design AI that earns - and deserves - our trust?
“Prompt engineering is a bug” - Claudio Pinhanez
In this episode, Samraj Matharu speaks with Claudio Pinhanez, Principal Scientist at IBM Research Brazil, about building AI systems that are transparent, consistent, and human-centred. With 40 years of experience across computer science and design, Claudio shares lessons from the early days of AI, explores how language models shape user trust, and reflects on why humility and honesty should guide the next era of intelligent systems.
A grounded and forward-looking discussion on how to make AI worthy of human confidence.
EPISODE HIGHLIGHTS
0:00 ➤ Introducing Claudio Pinhanez (IBM Research Brazil, HCI + NLP)
3:00 ➤ Lessons from 40 years in AI — from expert systems to LLMs
10:30 ➤ Designing for transparency and trust
15:40 ➤ “Prompt engineering is a bug” — improving consistency
20:10 ➤ Human-in-the-loop systems and responsible design
30:00 ➤ Platforms, adoption, and the real-world impact of AI
40:00 ➤ Biological futures: new ways to think about intelligence
47:00 ➤ Closing thoughts & audience question
KEY QUESTIONS ANSWERED
➤ What makes AI transparent and trustworthy?
➤ Why does interface design matter as much as model design?
➤ How can humans and machines collaborate effectively?
➤ What can we learn from decades of AI evolution?
➤ Do you think this technology should be used for any important decision in your life?
SUBSCRIBE to The AI Lyceum™ for more deep dives into AI, ethics, and the future of work.
CONNECT
🔗 Linktree → https://linktr.ee/TheAILyceum
🌐 Website → https://theailyceum.com
📸 Instagram → https://instagram.com/TheAILyceum
💼 LinkedIn Company Page → https://www.linkedin.com/company/108295902
👥 Join the LinkedIn Community → https://www.linkedin.com/groups/13322112
🎓 20% OFF University of Oxford Executive Programmes → https://theailyceum.com
#AI #AITransparency #TrustInAI #ResponsibleAI #EthicalAI #HumanCentricAI #ClaudioPinhanez #IBMResearch #MachineLearning #AIResearch #FutureOfAI #Podcast
By Samraj Matharu