On Data Centers and Cooling

I spend a lot of time in data centers. These are not your average “computer room” in an office someplace, but the large industrial-scale data centers in “Silicon Valley” where large networks have a presence and exchange traffic with other data centers. These places are loud, and they are air cooled. The amount of energy consumed simply moving air around the place is tremendous. First there are the fans in the individual servers themselves, often many of them: fans on the CPU and on other components such as the power supply. Then there are fans that extract that hot air from the chassis into the rack or cabinet. Then there might be a fan on the cabinet that exhausts the heat into the room. In the room, large air handling units transfer the heat in the air to chilled water, which is pumped to the roof where the heat is exchanged to the outside air.

Heat dissipation is the ultimate constraint on data center server density these days. Every watt of power brought into the data center in the form of electricity must be exhausted in the form of heat, so the limit on the amount of power you can bring in is the limit on the heat you can exhaust. The environmental management system inside the data center is therefore the ultimate constraint on the amount of power you can provide to customers, and so the primary constraint on the number of servers you can place in a data center.
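The scale of the airflow problem can be sketched with the standard sensible-heat relation used in HVAC work (Q ≈ 1.08 × CFM × ΔT for air at sea level, with Q in BTU/hr and ΔT in °F). The rack load and temperature rise below are illustrative assumptions, not figures from any particular facility:

```python
# Rough sketch: how much air must move to carry away one rack's heat.
# Uses the common HVAC sensible-heat relation for air at sea level:
#   Q (BTU/hr) = 1.08 * CFM * delta_T (degrees F)
# All numbers below are illustrative assumptions.

WATTS_TO_BTU_PER_HR = 3.412  # 1 watt of dissipation = 3.412 BTU/hr of heat

def airflow_cfm(load_watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of air needed to absorb `load_watts` of heat
    with a `delta_t_f` degree (F) rise from equipment inlet to exhaust."""
    btu_per_hr = load_watts * WATTS_TO_BTU_PER_HR
    return btu_per_hr / (1.08 * delta_t_f)

if __name__ == "__main__":
    # Assume a modest 5 kW rack and a 20 F inlet-to-exhaust temperature rise.
    print(f"{airflow_cfm(5000, 20):.0f} CFM")
```

Even under these modest assumptions, each rack needs hundreds of cubic feet of air per minute; multiply by the racks in a large facility and the fan energy bill follows.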

Most data centers, therefore, limit customers to a certain number of watts of power per square foot of rented space. If you want more power, you must rent more space, which generally goes unused. The goal the data center operator is trying to meet is to have 100% of the space rented at 100% of their air handling capacity, minus a small cushion. So I could place enough servers to use 100% of their air handling capacity, but they would require me to rent the entire data center to do it. Often they have the electrical capacity to handle many more servers, but the constraint is still the amount of heat they can exhaust. There are literally thousands of fans (possibly millions in the larger data centers) moving heat around.
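To see how a watts-per-square-foot cap turns into a space requirement, here is a small sketch; the 150 W/ft² cap and the 10 kW rack are hypothetical numbers chosen purely for illustration:

```python
# Sketch: floor space a tenant must rent under a power-density cap.
# The density cap and rack load are hypothetical illustration values.

def required_sq_ft(rack_watts: float, watts_per_sq_ft: float) -> float:
    """Square feet of rented space needed to stay under the operator's cap."""
    return rack_watts / watts_per_sq_ft

if __name__ == "__main__":
    # A 10 kW rack under a 150 W/sq-ft cap:
    sq_ft = required_sq_ft(10_000, 150)
    print(f"{sq_ft:.1f} sq ft")  # far more than the rack's physical footprint
```

The rented area comes out to dozens of square feet for a rack that physically occupies only a few, which is exactly the "space that generally goes unused" described above.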

These data centers are, in many cases, uncomfortable places to work. They are extremely loud, and often cold: the ambient air temperature is kept low so that the air moving through the racks and chassis of these computers provides sufficient cooling. There is a better way.

It is time for liquid cooling in the data center.

Many years ago, in a former career with a defense electronics manufacturer, I worked with liquid cooled electronics for military applications. There are many places where one does not want noise from fans, and one way to eliminate fan noise is to use liquid cooling. This technology has evolved in defense electronics to a fairly high standard: there are standard fittings for liquid cooled electronics, purge valves, practices to prevent damage from leaks, and so on. This isn’t anything new.

What is needed is a standard interface for liquid cooling, developed by the industry, so that computer chassis manufacturers can produce gear with the proper interface to the cooling system. Once this is in place, the data center goes quiet and the air temperature can be optimized for humans rather than for machines. All of the energy spent moving air from inside a chassis to the heat exchangers in the data center can be eliminated. Coolant from the warm side of each cabinet can be sent directly to the outside heat exchangers (chillers), with a great savings in energy, an increase in data center server density, and a more comfortable environment for humans.
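Part of why liquid cooling pays off is simple physics: water carries vastly more heat per unit volume than air. A back-of-the-envelope comparison, using textbook values for density and specific heat at roughly room temperature:

```python
# Back-of-the-envelope: volumetric heat capacity of water vs. air.
# Textbook property values at roughly room temperature, sea-level pressure.

AIR_DENSITY = 1.204         # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)
WATER_DENSITY = 998         # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def volumetric_heat_capacity(density: float, specific_heat: float) -> float:
    """Heat absorbed per cubic meter per degree of temperature rise, J/(m^3*K)."""
    return density * specific_heat

if __name__ == "__main__":
    air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
    water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)
    print(f"water carries ~{water / air:.0f}x more heat per unit volume than air")
```

A given heat load therefore needs a tiny coolant flow compared with the torrent of air it would otherwise require, which is where the fan energy savings come from.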