This week, we have two guests on the podcast. We’re joined by Gene Flenady, Lecturer in Philosophy at Monash University, whose research concerns the structure and social conditions of human rational agency, including the implications of new technologies for meaningful work and tertiary pedagogy. Our second guest is Robert Sparrow, Professor of Philosophy at Monash University. His research interests include political philosophy and the ethics of science and technology with an eye towards real-world applications.
Flenady and Sparrow argue that GenAI systems are "constitutively irresponsible" because their algorithms are designed to predict what "sounds good" rather than what is true or contextually appropriate. Our guests suggest that it is unfair to expect learners themselves to determine when AI is wrong or misleading. Doing so puts students in an impossible position and hinders both the development of meaningful relationships with their human teachers and the pursuit of lifelong learning.
Learn more about Drs. Flenady and Sparrow’s work in their article: “Cut the bullshit: why GenAI systems are neither collaborators nor tutors”
Other materials referenced in this episode include:
Frankfurt, H. G. (2005). On bullshit. Princeton University Press.
By Columbia University Center for Teaching and Learning