AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI
Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.
The Accessibility Crunch Is Here (and AI Can Help)
The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit-or-miss.
Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville; two images, 30 seconds, done.
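Before handing a batch of content to an AI tool, it helps to know which images actually lack alt text. As an illustration (not a tool either host mentions), here is a minimal stdlib sketch that audits an HTML export of course content and lists images with missing or empty `alt` attributes; the `AltTextAuditor` name and the sample markup are my own assumptions:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collects the src of every <img> tag lacking a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)  # attr value is None when the attribute is absent
            alt = (attrs.get("alt") or "").strip()
            if not alt:
                self.missing.append(attrs.get("src", "(no src)"))

def find_missing_alt(html: str) -> list[str]:
    auditor = AltTextAuditor()
    auditor.feed(html)
    return auditor.missing

page = """
<img src="chart.png" alt="Bar chart of 2024 enrollment by college">
<img src="logo.png">
<img src="photo.jpg" alt="">
"""
print(find_missing_alt(page))  # -> ['logo.png', 'photo.jpg']
```

A pass like this gives you the short list of images to send to an AI assistant, and a second pass after the fact confirms nothing was skipped.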
Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him.
The larger takeaway? When you hit one of these friction points in your work, try AI. It won't always solve the problem, but it often will, and the time savings can be substantial.
What 81,000 Interviews Tell Us About How People Actually Use AI
Link: https://www.anthropic.com/features/81k-interviews
Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic
The conversation shifts to Anthropic's large-scale qualitative study, where Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost unbelievable. Craig wrote a separate article about this study for the AI Goes to College newsletter.
The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect.
Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches. He draws an analogy to cloud computing, which began as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated.
One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI." Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong.
The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.
Academics Need to Wake Up: 10 Theses on a Shifting Landscape
Link: https://substack.com/home/post/p-189705626
The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims.
Take thesis number one: AI can already do social science research better than most professors. Craig's reaction is nuanced. The claim is probably technically true if "most" is read literally, since many professors don't publish much (Rob notes the median number of publications for business school professors may be as low as one). But the implication that AI can replace skilled researchers? Not yet. Craig estimates that a knowledgeable researcher can use AI to cut research production time by about three-quarters, but that knowledge is the key ingredient; without research skill, you'll just produce publishable garbage faster.
Rob raises something interesting: colleagues who are brilliant thinkers but never thrived in research because they didn't enjoy writing may now have a path to contribute. AI could genuinely democratize parts of the research process. Craig extends this point to data analysis; tools like Cowork can run Python and R analyses without expensive specialized software, which matters enormously for under-resourced institutions and researchers in developing countries.
The conversation turns to the strain AI is putting on the peer review system. More submissions (many of them better written thanks to AI) are flooding journals, but finding reviewers was already difficult. Craig, speaking from his role as a journal editor, argues that well-trained AI could do a better job reviewing than roughly half of current human reviewers. Rob agrees but emphasizes that journal leaders need to come together and define norms for what's acceptable. Right now, the rules are either nonexistent or unrealistically restrictive ("just don't use AI for anything"), mirroring the confusion faculty have created for students with inconsistent classroom policies.
One of the most provocative moments comes when Craig reads a quote from the Kustoff article: "I don't envision a research assistant role in my workflow anymore. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." Both hosts take this seriously. Craig argues that senior scholars will need to accept some suboptimal results in the short term to continue mentoring the next generation. Rob suggests the apprenticeship model isn't dying; it's transforming. The mentorship shifts from teaching students how to do tasks to teaching them how to direct AI tools and critically evaluate what those tools produce.
Craig closes with a characteristically honest observation: senior scholars get stuck in their ways of thinking, and one of the real values of working with early-career doctoral students is the occasional moment when their unformed, messy thinking reveals a perspective that nobody in the room had considered. That's worth protecting.
AI-Generated Lesson Plans and the Bloom's Taxonomy Problem
Link: https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/
The final segment covers a paper by four researchers from UMass Amherst, "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" The study found that roughly 90 percent of AI-generated lesson plans hit only the lower levels of Bloom's taxonomy (remembering, understanding) rather than the higher-order thinking skills like analyzing, evaluating, and creating.
Craig's first reaction was that the prompts used in the study were terrible. But he acknowledges the researchers had a reason: they were mimicking how most teachers would actually prompt. And that's the real finding. The problem isn't that AI can't produce sophisticated lesson plans; the problem is that untrained users produce unsophisticated prompts, and the output reflects the input. Rob agrees and broadens the point: if even a fraction of teachers are prompting this way, that's affecting a lot of students.
Craig shares a personal anecdote from his one year as a high school teacher. He diligently wrote lesson plans; a veteran teacher (whom he describes as one of the best he'd ever seen) simply copied his plans to satisfy an administrative checkbox. The experienced teacher didn't need detailed plans because she could read the room and adapt in real time. Some lesson planning, Craig suggests, falls into a compliance category where the quality of the plan matters less than the quality of the teaching.
But the bigger message is one both hosts keep returning to: we have to teach people how to use these tools well. Craig suspects that even a slightly more complex prompt ("address this level of Bloom's taxonomy and make sure you include demographic diversity") would produce dramatically better lesson plans.
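To make that concrete, a prompt that names the target Bloom's level and the diversity requirement can be assembled programmatically. This is an illustrative sketch only; the function name, wording, and parameters are my own assumptions, not anything from the study or the episode:

```python
# The six levels of the revised Bloom's taxonomy, lowest to highest.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def build_lesson_prompt(topic: str, grade: str,
                        level: str = "analyze", diversity: bool = True) -> str:
    """Build a lesson-plan prompt that targets a specific Bloom's level."""
    if level not in BLOOM_LEVELS:
        raise ValueError(f"level must be one of {BLOOM_LEVELS}")
    parts = [
        f"Write a {grade} lesson plan on {topic}.",
        f"Target the '{level}' level of Bloom's taxonomy: every activity "
        f"and assessment question should require students to {level}, "
        "not just recall facts.",
    ]
    if diversity:
        parts.append("Include examples and materials that reflect "
                     "demographic diversity.")
    return " ".join(parts)

print(build_lesson_prompt("the Bill of Rights", "9th-grade civics"))
```

The point isn't the code; it's that two extra sentences in the prompt encode exactly the constraints the UMass study found missing from typical teacher prompts.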
Rob makes a final observation that resonates beyond lesson planning. People who spend a lot of time thinking about AI (like Rob and Craig) can easily forget that most people don't. Understanding what AI use looks like for someone without deep expertise, and then helping to lift them up, is the real work ahead.
Craig's response? Maybe the strategy should be seeding the field with AI evangelists, a small number of engaged opinion leaders who help others one conversation at a time, rather than trying to train everyone through top-down institutional programs. That's how innovations actually spread.
A Meta-Moment: Who Wrote This, Really?
In a brief but revealing aside, Craig mentions that his Substack article about the Anthropic study was entirely generated and posted by an agentic AI workflow using Claude Code and Opus 4.6, built on his custom "write like Craig" skill. He asks Rob to guess the accuracy. Rob says 75 percent. Craig confirms. The question lingers: if AI can write in your voice with 75 percent accuracy and post it autonomously, who's really the author? Craig leaves that for the listener to decide.
Key Takeaways
AI is a practical solution for the accessibility crunch. With the April 24 WCAG deadline looming, tools like Claude Cowork and Microsoft Copilot can generate alt text for images at roughly 75 to 80 percent accuracy, dramatically reducing the manual burden on faculty.
"The light and the shade are tangled together." Anthropic's 81,000-interview study reinforces that AI's benefits and risks aren't separable. Higher education's job is to help students navigate both, not pretend one side doesn't exist.
AI adoption follows a predictable pattern. First we use new technology to do old things faster. The real transformation comes when we start imagining fundamentally new approaches. Higher ed is still mostly in phase one.
The prompt is the bottleneck, not the tool. AI-generated lesson plans that hit only lower-order Bloom's taxonomy levels aren't evidence that AI can't do better. They're evidence that untrained users produce unsophisticated prompts.
Academic publishing is under real strain. More submissions, better surface-level writing, reviewer shortages, and undefined norms for AI use are all converging. Journal leaders need to establish clear, workable standards.
The apprenticeship model is transforming, not dying. Mentoring doctoral students shifts from teaching them to do tasks toward teaching them to direct AI tools and critically evaluate the output. Senior scholars need to stay open to messy, unexpected thinking from early-career researchers.
Seed the field with opinion leaders. Rather than top-down institutional training programs, Craig argues for cultivating AI evangelists who spread knowledge one conversation at a time; that's how innovations actually diffuse.
Links
Anthropic's 81,000 interviews: https://www.anthropic.com/features/81k-interviews
Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic
Academics need to wake up on AI: https://substack.com/home/post/p-189705626
AI generated lesson plans: https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/
Companies/Products mentioned in this episode:
- Claude Cowork
- Microsoft Copilot
- Anthropic
- University of Central Oklahoma
- UMass Amherst
Mentioned in this episode:
AI Goes to College Newsletter