When OpenAI discovered they could reclaim 30,000 CPU cores simply by tuning the log-forwarding agent Fluent Bit—disabling a single function that ate ~35% of one server's cycles—something large and systemic became undeniable. In this episode, F5's Lori MacVittie, Joel Moses, and observability expert Chris Hain break down the hidden cost of telemetry in AI-heavy architectures, why "logging is free" is a myth, and how modern systems demand a new breed of high-speed telemetry planes.
Listen in to learn how Fluent Bit's file-watching overhead compounded at scale, why profiling matters, and what enterprises can do now to control AI observability costs.
By F5