
Colocation operators across the country are running out of time to deal with aging CRAC cooling equipment, and the pressure is coming from multiple directions at once.
First, there's the refrigerant problem. R-22 production ended in 2020. The only supply left is reclaimed, and prices have climbed more than 1,000% above historical levels. R-410A is next: the AIM Act cuts its supply by 70% starting in 2029. And the replacement refrigerants aren't compatible with existing systems. There's no retrofit option; full replacement is the only path forward.
Then there's the density problem. NVIDIA's H100 GPU generation requires roughly 40 kW per rack. The newer GB200 platform requires over 120 kW and mandates liquid cooling by design. Most colocation facilities were built to handle 3-8 kW per rack. That gap isn't something better airflow management can fix.
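The size of that gap is worth making concrete. A quick sketch, using the per-rack figures from the article (the legacy design point chosen here, the top of the stated range, is an illustrative assumption):

```python
# Quick arithmetic on the density gap. Per-rack figures come from the
# article; taking 8 kW as the legacy design point is an assumption
# (it is the top of the stated 3-8 kW range).

legacy_design_kw = 8     # best case for a legacy air-cooled facility
h100_rack_kw = 40        # typical H100-generation rack
gb200_rack_kw = 120      # GB200 platform, liquid cooling mandated

print(h100_rack_kw / legacy_design_kw)    # → 5.0
print(gb200_rack_kw / legacy_design_kw)   # → 15.0
```

Even against the most generous legacy design point, an H100 rack needs five times the cooling capacity the floor was built for, and a GB200 rack fifteen times — which is why airflow tuning alone cannot close it.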
Triton Thermal, a Houston-based data center liquid cooling firm, has published a guide walking colocation operators through exactly when to act and what their options look like. The guide outlines three paths forward.
Like-for-like CRAC replacement solves the refrigerant compliance issue and improves efficiency by 20-30%, but the density ceiling stays low. Converting to a centralized chilled-water system with CRAH air handlers drops PUE from the 1.7-2.0 range down to 1.4-1.5, with significant energy cost savings. The third path is a transition to hybrid or full liquid cooling (direct-to-chip cold plates, rear-door heat exchangers, or immersion cooling), which supports 30 to over 200 kW per rack and opens the facility to AI and high-performance computing tenants.
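The PUE improvement translates directly into dollars, since total facility power is IT load multiplied by PUE. A minimal sketch of that math, using PUE values from the middle of the ranges above; the 1 MW IT load and $0.10/kWh rate are illustrative assumptions, not figures from the guide:

```python
# Sketch: annual energy cost as a function of PUE.
# Facility size and electricity rate below are assumed for illustration.

def annual_energy_cost(it_load_kw: float, pue: float, rate_per_kwh: float) -> float:
    """Total facility energy cost per year: IT load x PUE, around the clock."""
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * rate_per_kwh

it_load_kw = 1000   # assumed 1 MW of IT load
rate = 0.10         # assumed $0.10 per kWh

cost_crac = annual_energy_cost(it_load_kw, 1.85, rate)  # legacy CRAC plant
cost_crah = annual_energy_cost(it_load_kw, 1.45, rate)  # chilled water + CRAH

print(f"Annual savings: ${cost_crac - cost_crah:,.0f}")
```

Under those assumptions the PUE drop alone is worth roughly $350,000 a year on a 1 MW floor, before counting any maintenance or refrigerant savings.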
The financial case is well-documented. Real-world replacement projects show payback periods between 1.9 and 3.8 years, with annual savings ranging from $144,000 to over $5 million depending on facility size.
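Payback is just project cost divided by annual savings. A back-of-the-envelope check, where the savings figures come from the article but the capex numbers are illustrative assumptions:

```python
# Simple payback check. Savings figures are from the article;
# the project costs are assumed for illustration only.

def payback_years(capex: float, annual_savings: float) -> float:
    """Years to recover project cost from annual operating savings."""
    return capex / annual_savings

# Small facility: assumed $400k project against the article's
# low-end savings figure of $144k/year.
small = payback_years(400_000, 144_000)

# Large facility: assumed $12M project against $5M/year savings.
large = payback_years(12_000_000, 5_000_000)

print(f"Small: {small:.1f} yr, large: {large:.1f} yr")
```

Both hypothetical cases land inside the 1.9-3.8 year range the article reports, which is the sanity check the sketch is meant to show.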
The decision framework is practical. Operators still running R-22 equipment should already be in replacement mode. Operators with R-410A units past 12 years of age, or PUE above 1.7, should be planning replacement in the next 12 months. Even operators with newer, well-performing equipment should be designing their transition strategy now, because 2029 is closer than it looks and equipment lead times are already stretching across the industry.
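That triage logic is simple enough to sketch as a function. The thresholds follow the article; the function itself is an illustration, not Triton Thermal's actual assessment tool:

```python
# Sketch of the article's decision framework as a triage function.
# Thresholds come from the article; the function is illustrative only.

def cooling_triage(refrigerant: str, unit_age_years: float, pue: float) -> str:
    """Return a replacement-urgency recommendation for one CRAC fleet."""
    if refrigerant == "R-22":
        return "replace now"                         # production ended in 2020
    if refrigerant == "R-410A" and (unit_age_years > 12 or pue > 1.7):
        return "plan replacement within 12 months"   # AIM Act supply cut hits 2029
    return "design transition strategy now"          # 2029 is closer than it looks

print(cooling_triage("R-22", 18, 1.9))    # → replace now
print(cooling_triage("R-410A", 14, 1.5))  # → plan replacement within 12 months
print(cooling_triage("R-410A", 6, 1.4))   # → design transition strategy now
```

Note that even the fall-through case returns an action rather than "do nothing," which mirrors the article's point that no operator is exempt from planning.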
Triton Thermal works as a vendor-neutral integrator, helping colocation operators find the right cooling solution for their specific facility and tenant mix. More information is available at tritonthermal.com.
By UBCNews