
Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States.
They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words.
Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much.
The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials.
That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it.
Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction.
Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades.
The episode wraps with an update on NotebookLM. Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.
Links referenced in this episode:
Mentioned in this episode:
AI Goes to College Newsletter
By Craig Van Slyke