Pop Goes the Stack

The Impact of Inference: Performance



Traditional performance engineering assumed deterministic response times: identical inputs produced near-identical execution times, optimizations reduced latency, and variance was minimal. AI inference flips that model upside down. Latency now depends on model size, tokenization, batching strategies, and generation settings, so identical inputs may produce very different response times. The new dimension of performance is variance—not just how fast the system responds, but how response times distribute across requests, how many tokens per second are processed, and how efficient each response is relative to its cost.
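To make the variance point concrete, here is a minimal sketch of the kind of measurement the episode describes: computing latency percentiles and aggregate token throughput across repeated inference requests. The request data and helper function are illustrative assumptions, not numbers from a real model.

```python
# Hypothetical per-request measurements (seconds, tokens generated) for the
# SAME prompt -- identical input, varying response time. Values are invented.
requests = [
    (0.82, 64), (1.10, 64), (0.95, 71), (2.40, 118),
    (0.88, 64), (1.75, 96), (0.91, 66), (3.10, 130),
]

latencies = sorted(t for t, _ in requests)

def percentile(values, p):
    """Nearest-rank percentile over a sorted list."""
    k = max(0, min(len(values) - 1, round(p / 100 * (len(values) - 1))))
    return values[k]

p50 = percentile(latencies, 50)   # typical latency
p95 = percentile(latencies, 95)   # tail latency
spread = p95 / p50                # how far the tail diverges from typical

# Throughput efficiency: tokens generated per second of wall-clock time
tokens_per_sec = sum(n for _, n in requests) / sum(t for t, _ in requests)

print(f"p50={p50:.2f}s  p95={p95:.2f}s  tail/typical={spread:.1f}x")
print(f"aggregate throughput={tokens_per_sec:.1f} tok/s")
```

With deterministic workloads the p50/p95 spread stays near 1x; with generative inference it can widen dramatically, which is why distributions and token throughput, not single latency numbers, become the meaningful metrics.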


In this episode of Pop Goes the Stack, Lori MacVittie, Joel Moses, and special guest Nina Forsyth dive into the impact of AI inference on measuring performance. It's time to rethink performance observability, with a focus on infrastructure optimization, agent-to-agent interactions, and robust measurement techniques. Listen in to learn how traditional approaches must evolve to manage this multi-dimensional puzzle.


Pop Goes the Stack, by F5