
Swapping the engine while it runs sounds risky - and in production video systems, it is.
Hardware acceleration often gets positioned as “faster encoding,” but that’s not what makes engineering teams hesitate. The real challenge is introducing a new compute model without breaking workflows that already carry years of integrations, monitoring, and operational assumptions. As NAB approaches and the industry talks density and efficiency, the more important question is this: how do you evolve a platform without increasing fragility?
Leo Nieto from NETINT sits down with Dominique Vosters from Scalstrm to unpack what actually triggers the shift away from CPU-only scaling. The answer is less about technology hype and more about cost per stream across the full workflow, from transcoding through packaging and delivery. We explore why teams hesitate, what they are most concerned about breaking, and why live streaming raises the stakes compared to VOD.
We also get practical about architecture. Dominique explains how hardware acceleration fits into a broader system, where VPUs handle encoding while software layers manage resilience, synchronization, and recovery. From the outside, workflows often remain consistent. The real changes happen inside the system, where compute is relocated rather than the architecture being rebuilt.
Finally, we walk through a low-risk rollout approach: parallel environments, staging validation, and gradual channel-by-channel migration instead of big-bang transitions.
If you are planning hardware-accelerated transcoding, VPU-based encoding, or broader video workflow optimization, this conversation will help you improve efficiency while protecting production stability.
Key topics covered
• why CPU-based video workflows become unsustainable primarily due to cost pressure across transcoding, packaging, and delivery
• why hesitation around hardware acceleration comes from fear of disrupting production systems, not skepticism about the technology
• differences between VOD and live streaming workflows, and why live environments require more cautious rollout strategies
• how hardware acceleration fits into existing architectures, with software layers handling resilience, synchronization, and recovery
• what actually changes when acceleration is introduced versus what remains stable for operators and workflows
• the concept of “swapping the engine while it runs” by keeping input and output behavior consistent while moving compute inside the system
• when orchestration layers add value and how that depends on platform maturity and scale
• practical deployment strategies including parallel environments, staging validation, and gradual channel-by-channel migration
Stay tuned for more in-depth insights on video technology, trends, and practical applications. Subscribe to Voices of Video: Inside the Tech for exclusive, hands-on knowledge from the experts. For more resources, visit Voices of Video.
By NETINT Technologies