
In this episode we are chatting to Sven Freudenfeld, Business Development Officer at Kontron, about the recent launch of the Symkloud Web Cloud Platform server, designed to support the growing shift towards mobile cloud and to address the key cloud server challenges facing organisations, data centres, infrastructure providers and IPTV content providers.
For related articles and podcasts visit http://www.itproportal.com
Tell us more about the Symkloud platform. How have you designed it to support these needs?
The trend in the market is that a lot of applications are moving away from the handset and finding their place in the core network infrastructure. The challenge in launching new mobile devices is primarily battery life, and also processing capability on the handset itself. A lot of applications are moving towards a cloud infrastructure where the compute resources are managed in the back end of the network. That could be mobile gaming, or it could be streaming video applications and transcoding that offload work from the mobile device. That mobile device could be pretty much anything, from a tablet to an iPhone, you name it. There are other mobile devices connected to the network now, and one direction where we utilize the mobile device is machine-to-machine interaction with a cloud infrastructure. The difference between the tablet and the smartphone is that with a tablet, machine-to-machine traffic really is only data, or in some cases streaming video. With smartphones it also includes location or GPS information and lots of other information as well. So the end device is just a gateway to transmit data towards the cloud; the processing takes place in the cloud itself.
What we realized is that in the current environment, using standard hardware, for example Intel CPU processors, if you have one massive CPU as the compute engine in the infrastructure, the scalability and the cost make new deployments very challenging. In other words, if I have a multicore platform, the only way I can actually separate multiple tenants in the mobile cloud (we could use the term "multi-tenant environment") is by adding a software layer on the platform itself and doing virtualization. When additional software is added, that typically brings the challenge of maintaining the software revisions in order to keep the virtualized environment secure. The other challenge is, of course, the cost. That layer adds an extra burden on top, it is not easy to manage, so the cost is pretty high on average, and in manageability and flexibility it is really limited.
So we approached it by coming up with a platform approach where we get more scalability by separating out multiple smaller CPUs, each with dedicated memory, a dedicated hard drive and also dedicated memory for transcoding, in a 2U chassis where we have up to nine CPUs for web applications and up to 18 for transcoding applications. So they are more flexible and we can actually scale much better. The other advantage of this computing approach is that it eliminates some of the power consumption challenges. Again, coming back to the example of having a massive multicore CPU with a virtualization layer, the web platform is usually sharing resources, memory and hard drive, so I cannot just shut down if I underuse my CPU; the CPU still has to be fully active. With our approach I can actually optimize the load on the platform itself, and in low-peak times I can consolidate the workload onto a subset of the smaller CPUs and even shut down an underutilized CPU. So it is much more flexible and more scalable, and it also helps keep capital and operating expenses low.
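To make the consolidation idea above concrete, here is a minimal Python sketch of how workloads might be repacked onto fewer small nodes during off-peak hours so that emptied nodes can be powered down. The Node class, the capacity figures and the first-fit strategy are illustrative assumptions for this example, not the Symkloud implementation.

```python
# Illustrative sketch (not Kontron code): consolidating workloads onto fewer
# small CPU nodes during off-peak hours so idle nodes can be powered down.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: float                       # normalized CPU capacity (1.0 = full)
    workloads: list = field(default_factory=list)

    @property
    def load(self) -> float:
        return sum(weight for _, weight in self.workloads)

def consolidate(nodes: list[Node]) -> list[Node]:
    """Greedily repack all workloads onto as few nodes as possible and
    return the nodes that end up empty and could be shut down."""
    all_work = [w for n in nodes for w in n.workloads]
    for n in nodes:
        n.workloads.clear()
    # Largest workloads first, each placed on the first node with room.
    for item in sorted(all_work, key=lambda w: w[1], reverse=True):
        target = next(n for n in nodes if n.load + item[1] <= n.capacity)
        target.workloads.append(item)
    return [n for n in nodes if not n.workloads]

if __name__ == "__main__":
    cluster = [Node(f"cpu-{i}", capacity=1.0) for i in range(9)]
    # Light off-peak traffic spread thinly across all nine nodes.
    for i, n in enumerate(cluster):
        n.workloads.append((f"tenant-{i}", 0.2))
    idle = consolidate(cluster)
    print("nodes that can be powered down:", [n.name for n in idle])
```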
With Web 3.0 and mobile cloud gathering attention in the market, was there a list of challenges that you were trying to almost tick off while you were developing the Symkloud platform?
Yes, we looked at the market conditions first. We have another product line, which is coming soon, in the form of an end-to-end platform. If you look at the market demand for end-to-end connectivity coming up, the traffic model within the network will have a different shape, because with machine-to-machine interaction the interactions are more numerous but the packet sizes are smaller, and that makes it much more efficient to use smaller-scale CPUs, but a lot of them.
The challenge with this model was that more CPUs means you need load-balancer functionality in the platform itself, in order to optimize the traffic load and the workload for each individual CPU. So we built checkpointing into the platform to see what the utilization of each CPU is and redistribute the traffic accordingly. The other advantage of the balancing functionality is that if you have, say, streaming video, you can actually do controlled balancing. This means that if you have ad insertion, you have much more flexibility: you can dedicate a node in the 2U platform to managing only the ad insertion and have the streaming content on a different node.
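As a rough illustration of the checkpointing and dedicated-node routing described above, the following Python sketch shows a dispatcher that refreshes a per-node utilization snapshot and then routes jobs either to a node reserved for a given job type or to the least-loaded node. The class, the probe callback and the node names are assumptions for this example, not the platform's actual load balancer.

```python
# Illustrative sketch (not the Symkloud implementation): utilization-aware
# dispatch with an option to pin certain job types to a dedicated node.
import random

class Dispatcher:
    def __init__(self, nodes, dedicated=None):
        self.nodes = list(nodes)
        self.dedicated = dedicated or {}          # job type -> node name
        self.utilization = {n: 0.0 for n in self.nodes}

    def checkpoint(self, probe):
        """Refresh the utilization snapshot for every node.
        `probe` stands in for whatever mechanism reads real CPU load."""
        for n in self.nodes:
            self.utilization[n] = probe(n)

    def route(self, job_type):
        # Some job types (e.g. ad insertion) are pinned to a dedicated node.
        if job_type in self.dedicated:
            return self.dedicated[job_type]
        # Otherwise pick the node with the lowest checkpointed utilization.
        return min(self.nodes, key=self.utilization.get)

if __name__ == "__main__":
    d = Dispatcher(nodes=[f"cpu-{i}" for i in range(9)],
                   dedicated={"ad-insertion": "cpu-8"})
    d.checkpoint(probe=lambda n: random.random())   # fake utilization figures
    print("streaming ->", d.route("streaming"))
    print("ad-insertion ->", d.route("ad-insertion"))
```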
So that is the advantage of having balancing and switching in the platform. The other thing was that when we talked to hosting service providers and customers, we discovered that the upfront costs of building out the data centre in the traditional way are outrageous; the upfront costs for switching, cooling and rack space are the most critical ones. You can imagine, if you have to invest up front in top-of-rack switching gear, let's say 25,000, and you are not really utilizing the other ports, then you are going to increase costs pretty significantly. In our approach we have a better scheme: we can add up multiple clusters without the need for a rack switch, so again we are reducing costs and providing more scalability there.
It is important to many businesses to streamline their services. Does this help them achieve that in a new way, do you think?
Correct, because if you look at other cloud services or hosting service providers, the margin over the infrastructure costs is very fine. If I am selling services to end users, the infrastructure costs are a big burden on my ability to actually generate revenue. So if I can eliminate or minimize some portion of that, through reducing power consumption, the cost of cooling the data centre and so on, then I am able to generate revenue much quicker.
Well, the spec mentions the application-ready modular system that you have built this around. Can you explain more about that?
The application-ready part is a bit of a different approach. We come from the embedded computer technology space, where we have provided modules based on Intel and other processors and platforms, but the key here is to bring cloud providers and other equipment providers to revenue quicker. So what we are providing to our customers is pre-integration of the platform with the operating system and some manageability, so they do not have to develop an interface to actually be able to use it, and we provide the cloud developer an interface to actually provision it. We call this application ready because we already have the capability built in for provisioning the cloud application itself. It is basically the framework to get off the ground much quicker.
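The sketch below only illustrates what such a pre-integrated provisioning interface could look like from a cloud developer's point of view. The endpoint, payload fields and function name are hypothetical, invented for this example; they are not Kontron's actual management API.

```python
# Hypothetical sketch only: the endpoint and fields below are invented to
# illustrate an "application ready" provisioning call, not a real Kontron API.
import json
from urllib import request

def provision_node(manager_url: str, node: str, image: str, role: str) -> dict:
    """Ask the platform's built-in management layer to bring up a node with a
    given OS image and application role, so the customer does not have to
    build their own provisioning interface from scratch."""
    payload = json.dumps({"node": node, "image": image, "role": role}).encode()
    req = request.Request(f"{manager_url}/provision", data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example call (would require a real management endpoint to respond):
# provision_node("http://10.0.0.1:8080", node="cpu-3",
#                image="ubuntu-server", role="transcoding")
```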
Looking at the industry as a whole, are organisations really prepared for the changing needs that all businesses are going to face regarding data storage moving forward?
The challenges coming up in the market right now really relate to the need to manage new technology acquisitions while profit margins are not increasing. It is very competitive, and our customers need to come up with more innovative ways of increasing the margin on their average revenue. This issue has been out there for a while, but it is moving quicker when you see the forecasts from all the analysts in terms of the traffic demand from mobile devices on top of machine-to-machine connectivity to the cloud.