In this episode, Priten speaks with Tina Austin, an AI educator and professor of biomedical ethics who helps institutions rethink assessment and teaching in the age of generative AI. As ChatGPT disrupted the assumption that polished output reflects student thinking, Tina moved beyond academic integrity concerns to ask a deeper question: what if we redesigned learning around when and how thinking happens, rather than what gets produced at the end?
Key Takeaways:
- Bloom's Taxonomy breaks down because AI collapses the distinction between output and thinking. The old model assumed a polished answer proved learning; AI now makes that assumption untenable, forcing educators to make thinking visible through process rather than relying on products as evidence.
- UnBlooms treats learning as recursive, not hierarchical—and starts with intentional friction. Rather than inverting Bloom's or banning AI, Tina's model requires students to show their initial thinking, engage critically with AI output, and revise with judgment; the shape shifts from a ladder to a spiral where learners don't return to the same place twice.
- Different disciplines protect different kinds of thinking, and AI policy should honor that variation. STEM faculty worry about problem-solving integrity; humanities faculty about voice and nuance; effective AI policy emerges from asking each discipline what thinking they need to safeguard, not from imposing one rule across all fields.
- The most productive AI use in classrooms builds critical skepticism, not efficiency. Having students critique AI-generated lecture summaries or debate where AI diverges from expert knowledge creates genuine engagement; offloading listening itself (via AI note-takers) removes a central learning function and trades visibility into thinking for marginal convenience.
- Higher education's crisis is not new, but AI has made it visible and urgent. Tenure and research incentives protect teaching practices that no longer serve students; the opportunity now is to ask honestly whether courses help students develop judgment and prepare for genuine uncertainty—not to add AI on top of unchanged structures.
Tina Austin is an AI educator, researcher, and policy advisor working at the intersection of education, healthcare, science, and emerging technology. Recognized as one of ASU+GSV's Leading Women in AI (2025), featured by OpenAI Academy, and interviewed by CNN, she is one of the most prominent voices guiding institutions toward responsible, human-centered AI adoption. She has led courses at UCLA, USC, CSU, and Caltech spanning critical thinking with AI, biomedical research, regenerative medicine, and ethics.