In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex.
Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime.
Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap.
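The idea of usage-based right-sizing described above can be sketched in a few lines. This is a generic illustration, not ScaleOps' proprietary algorithm: it recommends a container's CPU request from a high percentile of recently observed usage plus a headroom multiplier, rather than a static, hand-set value that quickly goes stale.

```python
# Illustrative sketch only; ScaleOps' actual algorithm is not public.
# Generic percentile-based right-sizing: pick a CPU request that covers
# a high percentile of observed usage, plus a safety margin.
import math


def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]


def recommend_cpu_request(usage_millicores, pct=95, headroom=1.15):
    """Recommend a CPU request (in millicores) from recent usage samples.

    pct:      percentile of usage to cover (handles spikes without
              provisioning for the absolute worst case).
    headroom: safety multiplier applied above the chosen percentile.
    """
    return math.ceil(percentile(usage_millicores, pct) * headroom)


# Example: CPU usage samples collected over a recent window (millicores).
samples = [120, 150, 180, 200, 450, 210, 190, 170, 160, 140]
print(recommend_cpu_request(samples))
```

In a real-time system this calculation would be rerun continuously as new usage samples arrive, so recommendations track shifting load instead of decaying into outdated static values.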
Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs:
ScaleOps Adds Predictive Horizontal Scaling, Smart Placement
ScaleOps Dynamically Right-Sizes Containers at Runtime
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.