A deep dive into AI context windows for educators and administrators. We compare Google Gemini 3 Pro (1-2M tokens), OpenAI GPT-5.2 (16K-196K tokens), and Anthropic Claude Opus/Sonnet 4.6 (200K-1M tokens). Learn what context windows mean for your workflow, how to match the model to your task, and why bigger isn't always better.

Show Notes:
- What a context window is and why it matters
- Token math: 200K tokens ≈ ~150,000 words
- Model comparison: Gemini 3 Pro, GPT-5.2, Claude Opus 4.6, Claude Sonnet 4.6
- Practical advice for educators working with long documents
- Context compaction and effective vs. advertised windows
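For listeners who want to run the token math themselves, here is a minimal sketch. It assumes the common rule-of-thumb of roughly 0.75 English words per token, which matches the episode's 200K-tokens-to-~150,000-words figure; the model names and window sizes below are the ones quoted in the episode, and actual tokenization varies by model and by text.

```python
# Rough token-to-word estimate: ~0.75 words per token is a heuristic,
# consistent with the episode's figure of 200K tokens ~= 150,000 words.
WORDS_PER_TOKEN = 0.75  # assumed ratio; real tokenizers differ per model

def tokens_to_words(tokens: int) -> int:
    """Estimate how many English words fit in a given token budget."""
    return int(tokens * WORDS_PER_TOKEN)

# Context windows as quoted in the episode (lower bound of each range).
windows = {
    "Gemini 3 Pro": 1_000_000,
    "GPT-5.2": 196_000,
    "Claude Opus 4.6": 200_000,
}

for model, tokens in windows.items():
    print(f"{model}: ~{tokens_to_words(tokens):,} words")
```

A quick sanity check: `tokens_to_words(200_000)` returns 150,000, the word count cited in the show notes.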