Node.js performance in production isn’t about a single number — it’s about understanding the signals that drive scaling, stability, and cost. Event Loop Utilization (ELU) sounds simple, but once you add Kafka consumers, Kubernetes autoscaling, streams, and worker threads, things get complicated fast.
In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina dig into Node.js metrics through the lens of real-world, event-driven systems. We focus on how ELU behaves in Kafka-heavy workloads, how it correlates with CPU, memory, and I/O, and why choosing the right metrics matters when you’re running Node.js on Kubernetes — especially with architectures like Watt.
We’ll explore:
✅ What Event Loop Utilization really measures — and why it’s a better signal than CPU alone
✅ How ELU behaves for Kafka consumers and stream-based workloads
✅ The relationship between ELU, memory pressure, and I/O saturation
✅ Why Kubernetes autoscalers struggle with Node.js — and where ELU fits in
✅ When worker threads help, and how to reason about ELU across workers
✅ How Kafka client design impacts event loop health and throughput
✅ Why Watt’s architecture aligns naturally with metric-driven scaling in K8s
The big picture?
Metrics shape architecture. If you run Node.js with Kafka on Kubernetes, this episode helps you understand which signals actually reflect load, how to avoid misleading autoscaling decisions, and why Watt was designed around these realities from day one.
By Platformatic