While much of the AI world chases ever-larger models, Ravin Kumar (Google DeepMind) and his team build across the size spectrum, from billions of parameters down to this week’s release: Gemma 270M, the smallest member yet of the Gemma 3 open-weight family. At just 270 million parameters, a quarter the size of Gemma 1B, it’s designed for speed, efficiency, and fine-tuning.
We explore what makes 270M special, where it fits alongside its billion-parameter siblings, and why you might reach for it in production even if you think “small” means “just for experiments.”
Topics include:

- Where 270M fits into the Gemma 3 lineup, and why it exists
- On-device use cases where latency, privacy, and efficiency matter
- How smaller models open up rapid, targeted fine-tuning
- Running multiple models in parallel without heavyweight hardware
- Why “small” models might drive the next big wave of AI adoption
If you’ve ever wondered what you’d do with a model this size (or how to squeeze the most out of it), this episode shows how a small model can punch far above its weight.
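If you want to try the model before (or while) listening, here’s a minimal sketch of loading it with Hugging Face Transformers. It assumes the Hugging Face model id google/gemma-3-270m (see the model page linked below) and that you’ve accepted the Gemma license on Hugging Face; the prompt is just an illustration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model id; check the Hugging Face page linked below.
model_id = "google/gemma-3-270m"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# At 270M parameters, the model is small enough to run comfortably on CPU.
prompt = "In one sentence, why do small language models matter?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

These same few lines are also the starting point for the fine-tuning workflow discussed in the episode: swap the generate call for a training loop over your task-specific data (see the fine-tune guide below).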
Links:

- Introducing Gemma 3 270M: The compact model for hyper-efficient AI (Google Developer Blog)
- Full Model Fine-Tune Guide using Hugging Face Transformers
- The Gemma 270M model on Hugging Face
- The Gemma 270M model on Ollama
- Building AI Agents with Gemma 3, a workshop with Ravin and Hugo (Code here)
- From Images to Agents: Building and Evaluating Multimodal AI Workflows, a workshop with Ravin and Hugo (Code here)
- Evaluating AI Agents: From Demos to Dependability, an upcoming workshop with Ravin and Hugo
- Upcoming Events on Luma
- Watch the podcast video on YouTube
- Hugo’s course: Building LLM Applications for Data Scientists and Software Engineers, https://maven.com/s/course/d56067f338 ($600 off early-bird discount for the November cohort, available until August 16)