There is no arguing that the cloud and streaming microservices have radically changed the way operators engineer their streaming services. Traditional broadcasting involves lots of hardware racked and stacked in data centres because those companies serve viewers only within their geographic reach. Streaming, though, is a loosely federated collection of different technologies which can be installed and operated from anywhere in the world. This makes streaming technology ideal for serving viewers wherever they wish to watch content.
Although this approach to delivering content represents the future of how people watch video, it also introduces a host of new challenges, such as scale. Massive numbers of simultaneous users can, unlike in traditional broadcast, overwhelm resources if there isn’t enough capacity. That’s what makes the cloud so important to streaming.
Streaming product developers and operations engineers understand the need for the cloud, which is why most of the stack is already there: encoders, transcoders, DRM servers, caches, monitoring probes, etc. Everything that can be virtualised has been, so that delivery capacity is dynamic. The stack can scale up and down depending on how many users are requesting content and at what bitrate; that kind of elasticity, on that timeline, is simply impossible to achieve with physical hardware.
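To see why delivery capacity has to be dynamic, it helps to run the numbers. The sketch below is a back-of-the-envelope estimate, not taken from any real operator: the viewer counts, bitrate, and function name are all illustrative.

```python
def required_egress_gbps(concurrent_viewers: int, avg_bitrate_mbps: float) -> float:
    """Total delivery bandwidth needed, in Gbps.

    Each viewer pulls roughly their playback bitrate in megabits per
    second; dividing by 1000 converts the total to gigabits per second.
    """
    return concurrent_viewers * avg_bitrate_mbps / 1000

# 500,000 concurrent viewers at an average of 5 Mbps:
peak = required_egress_gbps(500_000, 5.0)   # 2500 Gbps of egress
# The same service overnight, with 20,000 viewers at 3 Mbps:
trough = required_egress_gbps(20_000, 3.0)  # 60 Gbps of egress
```

The gap between peak and trough, more than 40x in this hypothetical example, is exactly the capacity swing that fixed hardware cannot absorb economically.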
The cloud is the only feasible way streaming operators can meet regional and global demand for content without spending unpredictably large amounts of money on physical infrastructure. In that sense, the next step towards true scalability and redundancy is streaming microservices.
The cloud has evolved
Although the streaming video tech stack is an evolution of the broadcast tech stack, it too is evolving because of how the cloud is changing. When streaming operators first adopted the cloud as their primary infrastructure, it was all about virtualised instances. What they realised was that virtualised infrastructure was much easier (and cheaper) to manage, maintain, and monitor. For example, the number of server instances could be increased programmatically in relation to demand. That's a stark contrast to physical servers, which need to be racked and stacked by hand.
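"Increased programmatically in relation to demand" boils down to a simple control loop: measure demand, compute a target instance count, and ask the cloud provider's API for it. Here is a minimal sketch of just the sizing decision; the function name, per-instance capacity, and headroom figure are assumptions for illustration, and the actual provisioning call would go to whichever cloud API the operator uses.

```python
import math

def instances_needed(active_sessions: int,
                     sessions_per_instance: int,
                     headroom: float = 0.2) -> int:
    """Instance count for current demand plus a safety margin.

    headroom=0.2 keeps 20% spare capacity so a sudden spike does not
    immediately saturate the fleet while new instances boot.
    """
    target = active_sessions * (1 + headroom)
    return max(1, math.ceil(target / sessions_per_instance))

# 9,000 active sessions, 1,000 sessions per instance, 20% headroom:
instances_needed(9_000, 1_000)  # -> 11 instances
```

A scheduler would run this on each metrics tick and reconcile the fleet toward the returned count, which is precisely the kind of hands-off elasticity physical servers cannot offer.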
The problem is that virtualisation doesn't provide the kind of scale streaming really needs. Spinning up a new server instance still takes quite a bit of time, and in some cases a reservation with the cloud provider (there are only so many instances available for specific configurations).
The cloud is less about virtualised servers and more about containers.
Containers are lightweight, isolated runtime environments that package an application with its dependencies while sharing the host operating system's kernel. Using Docker and Kubernetes, streaming engineers can quickly and easily scale technology components within the workflow. With DevOps and CI/CD pipelines, streaming technology teams have far more control over the elasticity of their infrastructure and the deployment of the technologies in the stack. Still, scale is a challenge. Yes, containers scale better than virtualised instances, but there are resource constraints.
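The scaling logic Kubernetes applies to those containers is worth seeing concretely. Its Horizontal Pod Autoscaler grows or shrinks a workload in proportion to how far a metric sits from its target; the sketch below reproduces that core proportional formula (the pod counts and CPU figures are hypothetical, and a real HPA adds tolerances and stabilisation windows on top).

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Proportional autoscaling: scale replicas by the metric ratio.

    If pods are running twice as hot as the target, double the pods.
    """
    return max(1, math.ceil(current_replicas * (current_metric / target_metric)))

# 4 transcoder pods averaging 90% CPU against a 60% target:
desired_replicas(4, 90, 60)  # -> 6 pods
# Demand drops and the same pods idle at 20% CPU:
desired_replicas(6, 20, 60)  # -> 2 pods
```

Because containers start in seconds rather than the minutes a virtual machine needs, this loop can track a flash crowd, such as a live sports kickoff, far more closely than instance-level autoscaling.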
Think about it like this: a bare-metal server used by a cloud service provider may be able to host 100 virtual machines, but it can host 1,000 containers.
Why you should use containers and streaming microservices
Many streaming operators, if not most, have embraced DevOps and containers to develop and deploy their technology stack. However, this has only worked up to a point, because the tech stack keeps growing in complexity. Device proliferation, coupled with non-standard protocols and codecs, means a fragmented workflow in which software is continually expanded to handle that complexity. The result is bigger, fatter containers, which is exactly the opposite of the v...