The On-Premise IT Roundtable Podcast
By Tech Field Day
The podcast currently has 248 episodes available.
Most people envision AI as a cool and orderly datacenter activity, but this technology will soon be everywhere. This episode of the On-Premise IT podcast contrasts the AI-based greenhouses of Nature Fresh Farms, as presented by guest Keith Bradley at AI Field Day, with the massive GPU-bound infrastructure many people imagine. Allyson Klein, Frederic Van Haren, and Stephen Foskett attended AI Field Day and were intrigued by the ways AI can process data from cameras and other sensors in a greenhouse environment.
Gestalt IT and Tech Field Day are now part of The Futurum Group.
Follow us on Twitter! AND SUBSCRIBE to our newsletter for more great coverage right in your inbox.
© Gestalt IT, LLC for Gestalt IT: Real World AI Looks a Lot Different From the Movies
The development of AI networking is moving forward and Ethernet is taking a prime role in how workloads will communicate. In this episode, Tom Hollingsworth is joined by Drew Conry-Murray and Jordan Martin as well as J Metz, the chair of the Ultra Ethernet Consortium, to discuss the progress being made by the UEC to develop Ethernet to meet the needs of AI. They discuss the roadmap for adoption of technologies as well as the drivers for the additions to the protocol and how people can get involved.
© Gestalt IT, LLC for Gestalt IT: The Future of AI Needs Ethernet
Generative AI is becoming a key tool for software developers, and businesses are embracing it as well. This episode of the On-Premise IT podcast brings Paul Nashawaty of The Futurum Group, data expert Karen Lopez, and Stephen Foskett together to discuss how AI is impacting application development. Generative AI is incredibly compelling, rapidly producing credible output, so it’s hard to put a stop to it. Rather than trying to stand in the way, companies are looking for better-quality tools with data privacy and compliance capabilities to fend off the negatives that can arise from AI-generated content. AI can also help with less popular and more problematic tasks like documentation and testing, and these can improve overall code quality as well.
© Gestalt IT, LLC for Gestalt IT: Generative AI is Developing Applications
Modern workloads are overloading hardware systems, and the CPUs in the market today aren’t up to the task. In this episode of the On-Premise IT Podcast, recorded on the premises of the Cloud Field Day event in California, host Stephen Foskett is joined by Thomas LaRock, Shala Warner, and Jim Czuprynski from the IT world to talk about innovation in hardware. The discussion addresses the burning question of whether investing in more specialized hardware will solve the problem. Hear the panel explain how hardware innovation is intertwined with software innovation, and how the two components come together to power cutting-edge workloads.
© Gestalt IT, LLC for Gestalt IT: Hardware Can’t Keep Up With Software
The IT world is obsessed with AI, but the desire to put AI into every product creates confusion and uncertainty. In this episode of the On-Premise IT Podcast, Tom Hollingsworth is joined by Zoë Rose and Dominik Pickhardt to discuss why everyone is so excited about AI. They also focus on issues with opaque algorithms and how AI can actually be useful in helping professionals with their daily work.
© Gestalt IT, LLC for Gestalt IT: We Need AI to Enable Everything
Platform engineering has been happening for a long time, but today’s incarnation is quite different. This episode of the On-Premise IT podcast brings together platform engineering expert Michael Levan, industry analyst Steven Dickens, and host Stephen Foskett to consider what platform engineering is today. Building a platform for self-service in the cloud has more in common with product development than the platforms delivered historically by IT infrastructure teams. One of the drivers for the DevOps trend was the divergence of IT development and operations over the last few decades, but this was different in the mainframe world. In many ways, today’s platform engineering teams are more mature process-wise thanks to the demands of multi-tenant cloud applications.
The term “platform engineering” has exploded in IT. Explainers and articles abound about its boundless implications. Some define it as a niche discipline, others call it the DevOps killer, and some project it as a million-dollar career. Whatever it is, it sits at the peak of the hype cycle and is settling in as a new standard.
In this episode of the On-Premise IT podcast, host Stephen Foskett and guests Steven Dickens, VP and Practice Leader at The Futurum Group, and Michael Levan, Kubernetes and Platform Engineering Specialist, lift the blinds obscuring this new sensation.
The answer lies somewhere in the middle. The proclivity to slap new labels on old things is not new in marketing. The hype about platform engineering is somewhat the same. “We’ve been doing platform engineering for a really long time. It just has a name and a focus point now, but it’s not something that just popped out of nowhere,” says Levan.
Dickens likens it to the role of Mainframe developers. “The Mainframe guys speak in different tongues and worship different gods than the distributed and cloud guys, but if you took away the nomenclatures and actually looked at the job, it would be the same functional work.”
So why is it being loved to death now? Because platform engineering delivers what software delivery processes benefit from most: standardization and automation.
In a way, platform engineering is like the Hibachi experience. At a traditional Hibachi-style Japanese place, diners select their choice of noodles, meat, broth, sauce and toppings from the counter. At the bar, the chefs wield their knives, chopping, grilling, and cooking the ingredients into a hearty bowl of goodness.
Platform engineers do the same thing for the development environment. Platform engineering is the methodology to bring disparate components together into a platform in a way that makes sense, ultimately elevating the developer’s experience. In doing so, it alleviates the challenge of having to constantly worry about the platform.
The modern stack that engineers interface with can be broadly divided into three categories – the platform, the capabilities and the UI. The approach abstracts away complexity at all three levels, making sure that platform users can access the self-service features more easily. Sounds a bit like DevOps, right?
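The three-layer split described above can be caricatured in a toy sketch. All class, capability, and function names here are hypothetical, invented for illustration and not drawn from any real platform: the UI layer takes a simple request, the capability layer maps it to concrete services, and the platform layer provisions them, hiding the complexity from the developer.

```python
# Illustrative only: a toy three-layer self-service platform.
# All names are invented; no real platform API is implied.

CAPABILITIES = {
    # Capability layer: maps a developer-facing request
    # to the concrete backing services it requires.
    "web-service": ["container-runtime", "ingress", "logging"],
    "batch-job": ["container-runtime", "scheduler"],
}

def provision(service: str) -> str:
    # Platform layer: stand-in for actually provisioning one service.
    return f"provisioned {service}"

def self_service(request: str) -> list[str]:
    # UI layer: a developer asks for a capability by name;
    # the underlying services are abstracted away entirely.
    if request not in CAPABILITIES:
        raise ValueError(f"unknown capability: {request}")
    return [provision(s) for s in CAPABILITIES[request]]

print(self_service("web-service"))
```

The point of the sketch is that the developer only ever touches the top function, which is the self-service experience the panel describes.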
Platform engineering in the cloud era is a community position, not a technical one, says Levan. It encourages the infrastructure team to step into the developers’ shoes for the first time, and see things their way. “Platform engineering has two primary goals – go into systems thinking about customer service, and have a product mindset. When you combine those two things, your job is literally to help people,” explains Levan.
This is where its likeness with DevOps can be seen. In the 2000s, companies did platform engineering the traditional way – the platform engineers tuned the platform, the developers built the applications. There was no real interaction or exchange between the two workgroups.
But as the years passed and new technology approaches came about, thought leaders saw merit in bringing the two departments closer together. In this new culture, platform engineers and developers work transparently together to improve application delivery. They deduced that overlapping software development with not only infrastructure, but also operations and product management, would mature the processes, greatly contributing to organizational growth and success.
“Platform engineering is all about quality engineering. One of the big reasons why I became self-employed a couple of years ago was because I didn’t want to throw a duct tape in my environments anymore. I’m just really happy that the entire tech community is seeing the same thing now,” says Levan.
What is shaping the rising popularity of platform engineering is its maturity. At its core, today it is about creating order in chaos. Amid infinite workflows, tools, and technologies, platform engineering fosters a consistent, standard environment that affords developers a predictable experience. It boosts productivity and efficiency not only by freeing developers to do their work, but also by eliminating the errors and guesswork that frequently cause bottlenecks and delayed release cycles.
“Focusing on nonfunctional requirements, putting quality code into production and infrastructure mattering again is really key,” says Dickens.
As companies rethink their approach to software development, platform engineering shines a spotlight on ways CTOs can close gaps and build bridges between separate teams, solve bigger problems, and ultimately achieve shorter time to market.
For more, be sure to give the podcast – Platform Engineering Isn’t Just DevOps Renamed – a listen.
© Gestalt IT, LLC for Gestalt IT: Platform Engineering Isn’t Just DevOps Renamed
Now that businesses have deployed modern applications in the cloud they are starting to ask whether it might be more attractive to run these on-premises. This episode of the On-Premise IT podcast features Jason Benedicic, Camberley Bates, and Ian Sanderson discussing the pros and cons of cloud repatriation with Stephen Foskett. A recent blog post by 37 Signals got the Tech Field Day delegates talking about the reality of running modern applications in enterprise-owned clouds, whether in the datacenter or co-located. Certainly the hardware and software are available to move applications on-prem, and some workloads may be better served this way. Most of the necessary components to run modern web applications are available on-prem, from Kubernetes to Postgres to Kafka, but these can prove difficult to manage, which is one of the things as-a-service customers are paying for. Looking back to the debut of OpenStack, enterprises have wanted to run applications in-house but they found it too difficult to manage. OpenShift is much more attractive thanks to the support and integration of the platform, but many customers have financial and administrative reasons for as-a-service deployment. It might not be a mass exodus, but there are plenty of examples of repatriation of modern applications.
A new trend coming out of the enterprise IT industry is cloud repatriation. The chatter picked up when 37signals, a SaaS project management company, publicly announced that it saved $1 million by pulling apps away from the public cloud. According to CTO David Heinemeier Hansson, repatriation has shrunk the company’s cloud spend by 60% and is projected to save an estimated $10 million over the next five years.
And theirs is not an isolated case. Skyrocketing data and storage costs in the cloud have caused a lot of companies to pull away and migrate back to on-premise datacenters in the last few years. Seagate has built its own platform for deploying web applications, which runs in its private datacenter on owned hardware. More recently, LinkedIn called off plans to move workloads from on-site to Azure Cloud.
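The 37signals figures above are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch follows; the pre-repatriation annual spend is an assumption backed out from the projection, not a number the company has stated.

```python
# Back-of-the-envelope check of the repatriation figures.
# The annual cloud spend is inferred, not reported by 37signals.

def projected_savings(annual_spend: float, reduction: float, years: int) -> float:
    """Total savings from cutting cloud spend by `reduction` for `years` years."""
    return annual_spend * reduction * years

# If a 60% cut over five years yields $10M, the implied annual
# spend is $10M / (0.60 * 5) ≈ $3.33M, i.e. ~$2M saved per year.
implied_annual_spend = 10_000_000 / (0.60 * 5)

print(round(projected_savings(implied_annual_spend, 0.60, 5)))  # prints 10000000
```

The separate $1 million figure cited for the first year is consistent with a partial-year migration, though the article does not break that down.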
So are companies really abandoning their cloud computing dreams and hauling their wares back to where they started? In a recent episode of the On-Premise IT Podcast, host Stephen Foskett addressed this question that has lately been the talk of Silicon Valley.
When considering relocating technology, the reasons fall into two main buckets – cost and control. “As we went into 2024, a lot of very large enterprises are concerned about costs. So there is this ongoing effort for cost management, and what is happening is a recalculation or reevaluation of where the workloads are to be placed and why. That workload rationalization has been going on for some time,” notes Camberley Bates, VP Practice Lead at The Futurum Group.
Enterprises’ rationale for migrating to the cloud was to reduce OpEx. The cloud offered an attractive answer to the surging cost problem in on-premise datacenters. The promise soured, however, as companies started to struggle with cost blowouts. Despite adapting their operating principles and practices to rein in spending, optimized cloud value has remained unrealized.
After expending a notably large amount of time and resources to get to the cloud, a company’s decision to withdraw can look like poor planning. As in all financial decisions, the sunk-cost fallacy creeps in. And to keep the cloud obsession going, hyperscalers hook users with free credits that give them a free pass to start down the road.
Spurred by dependency fear and cost and ownership concerns, many big enterprises have started bringing selected applications on-site as part of their workload placement strategy.
“A few years ago, it was a cloud-first mentality which we’re moving away from today with the hybrid approach, but it’s a very interesting marketplace in terms of options of where you can repatriate to in terms of the software stack,” says Ian Sanderson, Product Manager.
One of the things that makes the argument of going back on-premises seem valid today is the evolution of datacenter computing. “Since the cloud came about, we’ve seen a lot of step change in on-premise compute. We have gone from average systems of 4 cores to up to systems with 64 cores. So you could pack a lot of compute into a small space at a small cost,” points out Jason Benedicic, independent consultant.
A growing technology ecosystem is making shifting applications possible. “There’s a lot more off-the-shelf products for running clouds. Kubernetes and containers have come a long way. So the skill ramp-up needed to build and run your own modern application stack is lessened – I don’t think it’s completely removed, it’s not as easy as virtualization is – but there’s a lower barrier to entry. There’s a cheaper, more dense hardware aspect and those come together to make repatriation a possibility,” he adds.
Although technological advances give users the freedom to place their workloads anywhere that offers maximum cost, performance, and security payoffs, lifting and shifting has its trade-offs too. The cloud has a monopoly on a few things that enterprises can’t pass up, especially with the widespread adoption of AI. For one, on-premise infrastructures can barely match the agility, speed, and iteration of the public cloud.
“If you run a startup business with a couple of DevOps engineers and a fairly small team, it is going to be a daunting proposition to run all of it yourself. It’s possible, but the question is, what are the hidden costs and where do they lie,” cautions Benedicic.
But, increasingly, data costs in the cloud are driving companies to rethink the strategy. “Talking about the issue of cost analysis, we’ve seen a decline in the cost of server instances. We have not seen that same kind of cost basis on the data side,” notes Bates.
Thankfully, modern containerized applications have some amount of portability built in. “With serverless stuff, there’s some level of interoperability but there are not a huge number of serverless platforms out there that are mainstream,” Benedicic says.
Companies like Red Hat and IBM have solutions that make quick work of installing on-prem environments. The rise of OpenShift has been game changing in the way people think about running private cloud. Red Hat OpenShift is an open-source container application platform. The on-premise PaaS flavor is self-managed and comes with on-prem support for maximum ease.
Red Hat is one of the companies that is building a full suite of tools that work together to make the transition easier. Things like deployment blueprints that serve as guides are extremely helpful to get users started.
Workload repatriation need not be a binary decision. Cloud repatriation may have taken off in many big enterprises, but it is not the quit-the-cloud movement it has been made out to be. Amid an economic downturn, companies are trying to tighten their budgets, and deciding where a workload best resides is the cornerstone of that. A hybrid placement approach will ensure a more natural distribution of workloads across cloud and datacenters than we have seen before.
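The cost-and-control framing from the discussion can be reduced to a toy placement heuristic. The thresholds, attribute names, and rules below are invented purely for illustration; real workload-rationalization exercises weigh many more factors than this.

```python
# Toy workload-placement heuristic illustrating the cost/control
# buckets from the discussion. All thresholds and attributes are
# invented for illustration, not taken from any panelist.

def place_workload(monthly_cloud_cost: float,
                   data_gravity: bool,
                   needs_elastic_scale: bool) -> str:
    # Bursty, elastic workloads still favor the public cloud's agility.
    if needs_elastic_scale:
        return "public cloud"
    # Steady, data-heavy spend is the classic repatriation candidate.
    if data_gravity or monthly_cloud_cost > 50_000:
        return "on-premises"
    return "public cloud"

print(place_workload(80_000, data_gravity=True, needs_elastic_scale=False))
```

Run per workload rather than per company, a rule of this shape naturally produces the hybrid distribution the panel expects.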
For more, be sure to check out the On-Premise IT Podcast episode – Cloud Repatriation Is Really Happening – to follow the discussion.
Read more about Cloud Repatriation and what some possibilities are: Our cloud exit has already yielded $1m/year in savings
© Gestalt IT, LLC for Gestalt IT: Cloud Repatriation is Really Happening
InfiniBand is the king of AI networking today. Ethernet is making a big leap to take some of that market share but it’s not going to dethrone the incumbent any time soon. In this episode, join Jody Lemoine, David Peñaloza, and Chris Grundemann along with Tom Hollingsworth as they debate the merits of using Ethernet in place of InfiniBand. They discuss the paradigm shift as well as the suitability of the protocols to the workloads as well as how Ultra Ethernet is similar to another shift in converged protocols – Fibre Channel over Ethernet.
© Gestalt IT, LLC for Gestalt IT: Ethernet Won’t Replace InfiniBand for AI Networking in 2024
AI is going to accelerate development of malware everywhere from code to prompts for social engineering. But tools can be used for defense as well as offense. In this episode of the On-Premise IT Podcast, Tom Hollingsworth is joined by Girard Kavalines, Ziv Levy, and Matt Tyrer as they debate the impact that AI will have on malware development in 2024 and beyond. Hear how AI can drive automation on both sides of the security spectrum as well as how we can better prepare to face an onslaught of assisted attackers.
© Gestalt IT, LLC for Gestalt IT: AI Is Going To Make Malware Worse
Users are always going to blame the connectivity medium for issues and we just have to accept it. In this episode, Sam Clements, Troy Martin, and Darrell DeRosia join Tom Hollingsworth to discuss why users are adamant that the wireless is the problem when it’s always something else. They discuss why IT professionals should focus less on blame shifting and more on creating an environment that provides resolution even if it’s not their problem. The episode wraps up with suggestions for professionals to create an environment better suited to meeting user expectations.
© Gestalt IT, LLC for Gestalt IT: It’s Always the Wi-Fi