20% Off | Oxford AI Executive Programmes
https://oxsbs.link/ailyceum
"Humanities scholars need to be at the table where AI is being built β ethics must be embedded from the start, not added as a band-aid afterward" β Dr. Nina Begus
AI doesn't just process data β it processes human culture.
In this episode, Samraj speaks with Dr. Nina Begus, Philosophy PhD from Harvard, UC Berkeley researcher and author of Artificial Humanities, who argues that understanding AI requires more than engineering β it requires the humanities.
Nina reveals how ancient myths like Pygmalion still shape how we design AI today, why language models inherit our cultural assumptions, and what happens when language gets stripped from human experience. They explore Ex Machina's warning about artificial companions, the rise of "mind crime" with Neuralink, and whether transformers are really the future.
WHAT YOU'LL LEARN:
• How the Pygmalion myth influences AI design
• Why Ex Machina matters for understanding AI relationships
• What "mind crime" means in the age of Neuralink
• The difference between trust and reliability in AI
• Why interpretability unlocks creativity and control
• Are transformers really it, or is there more ahead?
EPISODE HIGHLIGHTS
0:00 – Intro
3:00 – What humanities reveal about AI
10:00 – Academia meets Silicon Valley
13:00 – "Will I be replaced?" The 2023 question
17:00 – Writers respond: First Encounters book
21:00 – The Pygmalion myth in modern tech
24:00 – Ex Machina & artificial companions
28:00 – Neuralink, neuroethics & mind crime
33:00 – Ethics from the start vs. the band-aid approach
36:00 – Getting the transformer paper on day one
42:00 – Are transformers the future?
45:00 – Determinism vs. creativity in AI
48:00 – The black box problem
53:00 – Tokenization: language without meaning
58:00 – Trust vs. reliability in machines
1:02:00 – Would you trust a machine?
LISTEN, WATCH & CONNECT
Oxford Programme (20% Off): https://oxsbs.link/ailyceum
Join 1K+ Community: https://linktr.ee/theailyceum
Website: https://theailyceum.com
YouTube: https://www.youtube.com/@The.AI.Lyceum
Spotify: https://open.spotify.com/show/034vux8EWzb9M5Gn6QDMza
Apple: https://podcasts.apple.com/us/podcast/the-ai-lyceum/id1837737167
Amazon: https://music.amazon.com/podcasts/5a67f821-89f8-4b95-b873-2933ab977cd3/the-ai-lyceum
ABOUT THE AI LYCEUM
The AI Lyceum explores AI, ethics, philosophy, and human potential. Hosted by Samraj Matharu, Certified AI Ethicist (Oxford) and Visiting Lecturer at Durham University.
#ai #humanities #ethics #culture #neuralink #exmachina #berkeley
By Samraj Matharu