


In this episode of The English 101 Experiment, Mason and Monica are joined by Kaia Layson, one of Mason’s students, to discuss her researched paper on ethics and artificial intelligence. In the course of discussing Kaia's project and her experience with The English 101 Experiment approach, the conversation meanders through big and unsettled questions: What does it mean to talk about consciousness in relation to AI? How do we grapple with the speed of technological change and the sense that we are always “catching up”? And what responsibilities do humans carry when we supply AI systems with ethical frameworks that we ourselves must design and interpret?
Rather than offering tidy answers about AI, this episode approaches ethics as an ongoing, human process shaped by technological change. Together, we explore how student-driven research into AI can open space for nuance, uncertainty, and genuine intellectual engagement—and why ethics in an age of artificial intelligence may function less like a fixed rulebook and more like a swinging compass, constantly recalibrated as conditions, capabilities, and consequences evolve.
By Monica Mankin