The Saga of Liquid Cooling
Liquid cooling has been a buzzword in the information technology industry for a long time.

While some may not be old enough to remember it, liquid cooling of microprocessors has, in fact, been around since the industry’s earliest days, owing primarily to the superior heat transfer characteristics of liquids compared to air.

Perhaps the most remarkable of the early liquid-cooled computers were those developed by Seymour Cray, a pioneer in the industry, first with Control Data and later with Cray Research. Cray’s approach used special electrically non-conductive coolants, such as Fluorinert™, and the circuit boards were submerged directly in those fluids to remove heat. What was remarkable about Cray’s approach was that he designed the computer enclosures to showcase this aspect of the machine, with backlit, colored tubes of bubbling coolant in the front, looking like something out of a science fiction movie.

The Cray-2 CPU and Cooling Tower, introduced in 1985, immersed dense stacks of circuit boards in a non-conductive liquid called Fluorinert™, which was cooled in a tank. Photo: © Computer History Museum/Mark Richards

It was the advent of nimble, air-cooled, rack-mounted servers in the late nineties and early two-thousands that effectively ended the first wave of liquid cooling. The information technology business’s demand for rapid deployment, flexible scaling (up and down), and low first cost put notions like energy efficiency on the back burner.



Consequently, through the first two decades of the new millennium, the focus has been on handling high heat loads with air. Air cooling can be done with chillers serving chilled water to computer room air conditioning (CRAC) units that blow air into floor plenums; rooftop units with direct-expansion cooling are a more straightforward air-cooling solution. Many data centers have also cleverly taken advantage of free-cooling features to reduce the energy usage of their air-cooled systems.

Energy Efficiency Makes a Comeback

So great was the growth of the I.T. industry’s energy footprint through the early 2000s that it caught the interest of the U.S. EPA. In 2007, the EPA issued a formal report to Congress full of “doom and gloom” about energy use from data centers. That report was, in part, the catalyst for many of the energy improvement initiatives that occurred through the late 2000s and early 2010s, most notably the development of the Power Usage Effectiveness (PUE) metric by The Green Grid.
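PUE is simply the ratio of total facility energy to the energy delivered to the I.T. equipment itself, so a value of 1.0 would mean zero cooling and power-distribution overhead. A minimal sketch of the calculation (the load figures below are purely illustrative):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.

    1.0 is the theoretical floor (no cooling or distribution overhead);
    real facilities land somewhere above it.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 1,000 kW IT load in a facility drawing
# 1,600 kW overall gives a PUE of 1.6.
print(pue(1600.0, 1000.0))  # 1.6
```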



Armed with a useful metric, the I.T. industry was now able to establish benchmarks for energy performance, and it seemed, at the time, positioned to wrangle the out-of-control growth in data center energy use. The focus on energy efficiency prompted data center engineers to revisit liquid cooling strategies abandoned in favor of rapid data center expansion.

Obstacles for Liquid Cooling

Liquid cooling will remain nothing more than a fun anecdotal topic for technical articles until several cross-disciplinary challenges are effectively addressed.

I.T. vs Facility Staff

Most people who work in the energy efficiency “world” have little sense of the business challenges that I.T. staff face on a day-to-day basis: expectations from the business of almost “instantaneous” deployment of new servers with 100% success and 100% uptime, any number of changing operating system and application parameters that can twist hardware into a pretzel on a moment’s notice, technical support that is outsourced globally to people who may or may not have any personal investment in your success, and constant threats from malware and the people who make it.

The truth is that most I.T. staff live in a world that is starting to feel like something from an Upton Sinclair novel. To survive in that type of ecosystem, I.T. people do what technical people under siege often do: they follow the “KISS” principle (“Keep It Simple, Stupid”). In the I.T. world, “KISS” means that the specification for the physical “host” that undergirds the virtual server world is something that doesn’t change very much. Most I.T. hardware is provisioned with every technical option the budget allows, and then kept exactly the same. The truth is, I.T. staff probably stand little to gain from changing to a liquid-cooled host, but have a lot to lose if that liquid-cooled host fails to perform. Industry trends indicate that major hardware providers are now on the liquid cooling bandwagon, but adoption is still going to take time.

After all, are you planning to buy one of the first all-electric pick-up trucks?



Outsourced I.T. / Colocation Data Centers

The drive for improved data center energy efficiency has been hamstrung by the widespread outsourcing of I.T. resources, whether to colocation facilities, full cloud implementations, or a combination of both. Outsourcing I.T. assets is tantamount to making data center energy efficiency (among other things) “someone else’s problem,” much like hotel guests who think nothing of leaving all the lights on when they head out. If a colocation facility is not rewarded by its customers for energy efficiency, there is no incentive to make it a priority, especially if it comes at the cost of added uptime risk.

As it happens, most enterprises rely heavily on outsourced I.T. but are beginning to demand sustainability from their providers. Hence the renewed interest in liquid cooling as the next frontier of energy efficiency. Still, I.T. staff need a confidence boost when working with an outsourced facilities staff: whose fault will it be if the new liquid-cooled server goes down?

ASHRAE Gets on Board

ASHRAE’s Technical Committee 9.9 published the first edition of its “Liquid Cooling Guidelines for Datacom Environments” in 2006 and a revised edition in 2014. The guidelines set forth the challenges associated with effectively deploying liquid cooling systems, notably enhanced water quality requirements that are substantially different from what is typically maintained in building hydronic systems.

The document also focuses on capabilities for ride-through (i.e., providing continuous cooling during transitions to and from a standby generator), including circulating pumps powered from a UPS, thermal storage, and serious application of the principles of concurrent maintainability (as defined by the Uptime Institute). In most cases, it will make sense to create new, dedicated Technology Cooling Systems (TCSs) to support liquid-cooled assets rather than connecting those assets to existing building hydronic systems.
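As a rough illustration of how thermal storage supports ride-through: the usable energy in a buffer tank is Q = m · cp · ΔT, and dividing by the heat load gives the time the tank can carry the cooling system while the chillers restart. A minimal sizing sketch, with purely hypothetical numbers:

```python
# Ride-through sizing sketch for a chilled/warm-water buffer tank.
# All figures are hypothetical; a real design must account for tank
# mixing, pump heat, and the actual generator start sequence.

WATER_DENSITY_KG_PER_L = 1.0   # approximate density of water
WATER_CP_KJ_PER_KG_K = 4.186   # specific heat of water

def ride_through_seconds(tank_liters: float, usable_delta_t_k: float,
                         heat_load_kw: float) -> float:
    """Seconds of cooling a buffer tank can supply at a given load.

    Stored energy Q = m * cp * dT (kJ); a kW is one kJ/s, so
    time = Q / load.
    """
    mass_kg = tank_liters * WATER_DENSITY_KG_PER_L
    stored_kj = mass_kg * WATER_CP_KJ_PER_KG_K * usable_delta_t_k
    return stored_kj / heat_load_kw

# A 10,000 L tank with a 5 K usable temperature rise backing a 500 kW
# liquid-cooled load rides through for about 7 minutes:
print(f"{ride_through_seconds(10_000, 5.0, 500.0):.0f} seconds")  # ~419
```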

Free Cooling & Waste Heat Usage

Beyond increasing computer performance and improving cooling efficiency, the additional energy efficiency opportunity for liquid cooling is waterside economizing (“free cooling”) and even reclaiming the heat produced by those assets. The revised ASHRAE guidelines set forth five “classes” of liquid-cooled datacom water supply (Class W1 through W5), based on the temperature of the water being provided to the device. The warmest classes allow water supplied to the chip at 113 degrees F (45 degrees C) and above. This is where it gets interesting from an energy perspective: not only can that temperature be maintained by economizing alone (i.e., no compressor operation, ever), but the return water from the I.T. TCS will be warm enough for direct recovery to heat a building.
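A back-of-the-envelope way to see why warm-water classes unlock year-round economizing: a cooling tower can produce water at roughly the outdoor wet-bulb temperature plus the tower’s approach, plus another approach across the heat exchanger isolating the TCS loop. A minimal feasibility check, with assumed (illustrative) approach temperatures:

```python
# Waterside economizer feasibility sketch; approach values are assumptions.
TOWER_APPROACH_F = 7.0  # assumed cooling tower approach, deg F
HX_APPROACH_F = 3.0     # assumed plate heat exchanger approach, deg F

def can_free_cool(outdoor_wet_bulb_f: float, required_supply_f: float) -> bool:
    """True if the tower alone can meet the required TCS supply temperature."""
    achievable_supply_f = outdoor_wet_bulb_f + TOWER_APPROACH_F + HX_APPROACH_F
    return achievable_supply_f <= required_supply_f

# A 113 F warm-water supply clears even a severe 95 F design wet bulb,
# so compressors are never needed:
print(can_free_cool(outdoor_wet_bulb_f=95.0, required_supply_f=113.0))  # True

# A traditional 45 F chilled-water setpoint fails on all but cold days:
print(can_free_cool(outdoor_wet_bulb_f=50.0, required_supply_f=45.0))   # False
```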

We Can Do it!

The good news is that we can do it. We can design Technology Cooling Systems that deliver energy-efficient cooling with the reliability demanded by data center I.T. staff. It has all been done before. Even though Seymour Cray is no longer here to see it, the design of high-reliability, high-performance liquid cooling infrastructure is not new.

Like the earlier business decisions to adopt air-cooled, rack-mounted servers and to outsource I.T. to colocation and cloud-based data centers, the decision to liquid-cool servers must be driven by the business. The good news is that once appropriate, manufacturer-supported I.T. resources appear on the market, the question of whether to deploy them remotely in a colocation facility or bring them back on premises for energy efficiency purposes is a challenge that can be readily overcome.

About the Author

Nate Clyde, P.E., DCEP brings over 20 years of data center design/build expertise to Enabled Energy as Vice President of Technical Services. He is often heard to say that the “devil is in the details” when it comes to mission-critical infrastructure. Nate works every day to lead our team as EE helps its clients with the “details” of their energy-saving projects.
