The End of Air Cooling's Dominance
For three decades, data centers have been cooled the same way: push cold air through rows of server racks, collect the hot air, cool it back down, and repeat. This approach — variations of which have served as the mechanical backbone of every data center built since the 1990s — is reaching its physical limits. The reason is artificial intelligence.
AI computing hardware, particularly the GPU clusters used for training large language models, generates heat at densities that air cooling simply cannot handle efficiently. A traditional cloud computing rack might dissipate 6 to 10 kilowatts of heat. An AI training rack routinely generates 40 to 100 kilowatts, with cutting-edge deployments exceeding 120 kilowatts per rack. At these densities, moving enough air through the equipment to prevent thermal damage requires so much fan power and so much physical space that the approach becomes economically and architecturally impractical.
The solution is liquid cooling: using water, specialized fluids, or other liquids to remove heat directly from computing components. Water can absorb roughly 3,500 times more heat than air per unit volume, which means liquid cooling systems can handle the thermal loads of high-density AI computing in a fraction of the space and with a fraction of the energy that air cooling would require.
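The arithmetic behind that figure is simple enough to check. The sketch below uses standard textbook property values for air and water; the 100-kilowatt rack load and the assumed temperature rises are illustrative, not measurements from any particular facility.

```python
# Back-of-envelope comparison: air vs. water as a heat-transfer medium.
# Property values are standard textbook figures; the rack load and
# temperature rises are illustrative assumptions.

AIR_DENSITY = 1.2        # kg/m^3, at roughly room temperature
AIR_CP = 1005            # J/(kg*K), specific heat of air
WATER_DENSITY = 1000     # kg/m^3
WATER_CP = 4186          # J/(kg*K), specific heat of water

# Volumetric heat capacity: heat absorbed by one cubic meter per degree.
air_vol_cap = AIR_DENSITY * AIR_CP        # ~1,200 J/(m^3*K)
water_vol_cap = WATER_DENSITY * WATER_CP  # ~4,190,000 J/(m^3*K)
print(f"Water absorbs ~{water_vol_cap / air_vol_cap:,.0f}x more heat per unit volume")

# Flow needed to remove a 100 kW rack load (illustrative assumption).
rack_load_w = 100_000
air_delta_t = 12     # K temperature rise across the servers (assumed)
water_delta_t = 10   # K temperature rise across the cold plates (assumed)

air_flow_m3_s = rack_load_w / (air_vol_cap * air_delta_t)
water_flow_m3_s = rack_load_w / (water_vol_cap * water_delta_t)

print(f"Air:   ~{air_flow_m3_s * 2118.9:,.0f} CFM")            # m^3/s -> cubic feet per minute
print(f"Water: ~{water_flow_m3_s * 15850.3:,.1f} gallons/min") # m^3/s -> US GPM
```

Under those assumptions, the air path needs on the order of 15,000 cubic feet per minute for a single rack, while the water loop needs roughly 40 gallons per minute.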
For the construction industry, the shift to liquid cooling is not an incremental change — it is a fundamental transformation in how data centers are built. The piping systems, mechanical rooms, floor configurations, and construction sequences that contractors have spent two decades perfecting for air-cooled data centers are being replaced by entirely new approaches. Contractors who understand these changes will be well-positioned for the next generation of data center construction. Those who do not may find themselves unable to compete.
The Performance Case
The performance advantages of liquid cooling over air cooling are substantial and well-documented.
Energy Reduction
Liquid cooling systems typically reduce the total energy consumed by the cooling infrastructure by 25 to 35 percent compared to equivalent air-cooled systems. This reduction comes from two sources: the elimination of large air-handling equipment and the associated fan energy, and the ability to reject heat at higher temperatures, which improves the efficiency of chillers and allows for more hours of economizer (free cooling) operation.
For a 100-megawatt data center, where the cooling infrastructure typically draws an additional 30 to 50 megawatts on top of the IT load, a 30 percent reduction in cooling energy translates into roughly 10 to 15 megawatts of saved electrical capacity that can either reduce operating costs or be allocated to additional computing equipment. At typical utility rates, the savings are worth $8 million to $15 million per year, which easily justifies the incremental construction cost of liquid cooling systems.
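A minimal sketch of that savings arithmetic, assuming a cooling-load share and a blended utility rate in typical ranges (both are assumptions, and actual figures vary by site and tariff):

```python
# Rough annual-savings estimate behind the figures above. The cooling-load
# draw and the utility rate are illustrative assumptions, not quotes.

it_load_mw = 100.0
cooling_load_mw = 40.0       # assumed cooling draw for a 100 MW facility (30-50 MW typical)
reduction = 0.30             # 30% cooling-energy reduction from liquid cooling
utility_rate_per_kwh = 0.10  # assumed blended $/kWh

saved_mw = cooling_load_mw * reduction
saved_kwh_per_year = saved_mw * 1000 * 8760   # MW -> kW, times hours per year
annual_savings = saved_kwh_per_year * utility_rate_per_kwh

print(f"Capacity freed:  ~{saved_mw:.0f} MW")
print(f"Annual savings:  ~${annual_savings / 1e6:.1f} million")
```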
Space Reduction
Liquid cooling eliminates the need for the large air plenums (raised floors or overhead duct systems) and the computer room air handlers or in-row cooling units that dominate air-cooled data center floor plans. This space savings typically amounts to 30 to 40 percent of the data hall floor area, which can be repurposed for additional computing equipment or can reduce the overall building footprint.
The space reduction has a direct impact on construction cost. A smaller building requires less structural steel, less concrete, less roofing, and less site work. While these savings are partially offset by the cost of the liquid cooling infrastructure itself, the net effect is typically a 5 to 15 percent reduction in total construction cost per megawatt of IT capacity.
Density Support
Perhaps most importantly, liquid cooling enables the extreme power densities that AI computing requires. While air cooling struggles above 30 to 40 kilowatts per rack, liquid cooling can comfortably handle 80 to 120 kilowatts per rack and, in some configurations, significantly more. This density support is not a future requirement — it is a current one, driven by the deployment of NVIDIA's H100 and H200 GPU clusters and the even denser hardware expected in the next generation.
Types of Liquid Cooling
There are two primary approaches to liquid cooling in data centers, each with distinct construction requirements.
Direct-to-Chip (Cold Plate) Cooling
Direct-to-chip cooling, also known as cold plate cooling, uses metal plates attached directly to the hottest components on a server (typically CPUs and GPUs). Chilled water or a specialized coolant flows through channels in the cold plate, absorbing heat from the component and carrying it to a heat rejection system outside the data hall.
Construction implications:
Cold plate systems require a network of liquid distribution piping throughout the data hall, with supply and return manifolds serving each row of server racks. This piping system must be designed and installed to exacting standards, as any leak could damage computing equipment worth millions of dollars.
The piping is typically small-diameter (1/2 inch to 2 inches) and routed above the server racks or through dedicated trays. Quick-connect fittings at each rack allow servers to be connected and disconnected without draining the entire system. The piping materials are usually copper or stainless steel, chosen for their corrosion resistance and compatibility with the coolant fluid.
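A rough sizing calculation shows why branch piping stays in that range. The rack load, temperature rise, and design velocity in the sketch below are illustrative assumptions, not values from any vendor specification.

```python
# Why cold-plate piping stays in the 1/2" to 2" range: a sizing sketch.
# Rack load, temperature rise, and design velocity are illustrative assumptions.

import math

rack_load_w = 80_000      # assumed 80 kW rack served by cold plates
delta_t = 10              # K rise from supply to return (assumed)
water_cp = 4186           # J/(kg*K)
water_density = 1000      # kg/m^3
design_velocity = 2.0     # m/s, a common rule-of-thumb limit for copper lines (assumed)

mass_flow = rack_load_w / (water_cp * delta_t)   # kg/s
vol_flow = mass_flow / water_density             # m^3/s
gpm = vol_flow * 15850.3                         # US gallons per minute

# Pipe inner diameter needed to hold the flow at the design velocity.
area = vol_flow / design_velocity                # m^2
diameter_in = math.sqrt(4 * area / math.pi) * 1000 / 25.4

print(f"Per-rack flow:     ~{gpm:.0f} GPM")
print(f"Branch pipe size:  ~{diameter_in:.2f} in inner diameter")
```

Under those assumptions, an 80-kilowatt rack needs roughly 30 gallons per minute through a branch line of about 1.4 inches, consistent with the 1/2-inch to 2-inch range noted above.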
The coolant distribution units (CDUs) that manage fluid temperature and pressure are typically located at the row level or at the perimeter of the data hall. These units are industrial-quality heat exchangers that transfer heat from the building-level coolant loop to the facility-level chilled water system.
For pipefitters, cold plate system installation requires precision skills — clean cutting, proper brazing or welding, thorough flushing and pressure testing — that are more akin to pharmaceutical or semiconductor piping than to conventional HVAC work. The leak tolerance is zero. Period.
Immersion Cooling
Immersion cooling takes the liquid cooling concept further by submerging entire servers in a tank of non-conductive, non-corrosive fluid (typically an engineered fluorocarbon). Heat is transferred from the computing components to the fluid, which is then circulated to heat exchangers for cooling.
There are two variants:
Single-phase immersion keeps the coolant in liquid form throughout the process. Servers are submerged in tanks filled with a dielectric fluid that absorbs heat without changing state. The warmed fluid is pumped to external heat exchangers, cooled, and recirculated.
Two-phase immersion uses a coolant that boils at a low temperature (typically 49 to 60 degrees Celsius). As the servers generate heat, the coolant boils, and the resulting vapor rises to condensers at the top of the tank, where it is cooled back to liquid and drips back down. This phase-change process is extremely efficient at heat transfer.
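A simple comparison of sensible versus latent heat shows why. The fluid properties below are representative values assumed for an engineered fluorocarbon coolant, not figures from any specific product datasheet.

```python
# Compare the fluid mass needed to carry away 1 kW of server heat in
# single-phase vs. two-phase operation. Fluid properties are representative
# assumed values for an engineered fluorocarbon coolant.

heat_w = 1_000              # 1 kW of server heat
fluid_cp = 1_100            # J/(kg*K), sensible heat capacity (assumed)
single_phase_delta_t = 10   # K allowed temperature rise (assumed)
latent_heat = 90_000        # J/kg absorbed when the fluid boils (assumed)

single_phase_kg_s = heat_w / (fluid_cp * single_phase_delta_t)
two_phase_kg_s = heat_w / latent_heat

print(f"Single-phase: ~{single_phase_kg_s * 1000:.0f} g/s must be pumped past the servers")
print(f"Two-phase:    ~{two_phase_kg_s * 1000:.0f} g/s boils off and recondenses passively")
```

Under those assumptions, two-phase operation moves the same heat with roughly an eighth of the fluid mass, and the vapor does the moving on its own rather than relying on pumps inside the tank.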
Construction implications:
Immersion cooling fundamentally changes the data hall floor plan. Instead of rows of upright server racks, the space contains rows of horizontal tanks, each holding one or more servers submerged in fluid. The floor must support the significant weight of the fluid-filled tanks (a single immersion tank can weigh 3,000 to 5,000 pounds) and must be designed to contain potential fluid spills.
The piping systems for immersion cooling are different from cold plate systems — larger diameter, lower pressure, but still requiring leak-free installation. The coolant fluids used in immersion cooling are expensive ($200 to $400 per gallon for some fluorocarbons), which makes any fluid loss costly.
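Two quick calculations show what those numbers mean for the structural engineer and the estimator. The tank footprint and fill volume below are illustrative assumptions; the weight and fluid price ranges come from the figures above.

```python
# Two numbers the structural and cost estimates hinge on: floor loading under
# a filled tank and the value of the fluid inventory. Tank footprint and fill
# volume are illustrative assumptions; weight and price ranges are from the text.

tank_weight_lb = 4_000        # filled tank weight (3,000-5,000 lb per the text)
tank_footprint_sqft = 4 * 10  # assumed ~4 ft x 10 ft footprint
floor_load_psf = tank_weight_lb / tank_footprint_sqft
print(f"Floor loading under tank: ~{floor_load_psf:.0f} psf (plus aisle and piping loads)")

fluid_volume_gal = 200                        # assumed fill volume per tank
fluid_price_low, fluid_price_high = 200, 400  # $/gal, from the text
print(f"Fluid inventory per tank: ${fluid_volume_gal * fluid_price_low:,} "
      f"to ${fluid_volume_gal * fluid_price_high:,}")
```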
Immersion tanks are typically pre-manufactured and shipped to the site, where they are installed by specialty contractors. The integration of tanks with the building's mechanical and electrical systems requires careful coordination between the immersion cooling vendor, the mechanical contractor, and the electrical contractor.
What Changes in Construction
The shift from air cooling to liquid cooling affects virtually every aspect of data center construction. Here are the most significant changes.
No More Raised Floors (Usually)
Raised floors have been a defining feature of data center construction for decades, but liquid-cooled data centers frequently eliminate them entirely. Without the need for an underfloor air plenum, the raised floor adds cost without benefit. This changes the construction sequence (no raised floor installation phase), reduces project cost ($15 to $25 per square foot saved), and simplifies the structural design (no need to account for the raised floor load in addition to equipment loads).
Some operators retain raised floors even in liquid-cooled facilities for cable management purposes, but the trend is strongly toward slab-based designs with overhead cable routing.
Different Piping, Different Skills
The piping systems in a liquid-cooled data center are fundamentally different from those in an air-cooled facility. Air-cooled data centers have large-diameter chilled water piping serving air handlers, with relatively few connections per unit area. Liquid-cooled data centers have extensive networks of small-diameter piping serving individual racks or tanks, with hundreds or thousands of connections per data hall.
This shift has significant workforce implications. The pipefitters who install liquid cooling systems need precision skills that go beyond conventional HVAC piping. Clean-room piping practices, leak detection and testing, and experience with specialty coolant fluids are all essential. Contractors who can develop or acquire these capabilities will have a competitive advantage in the liquid cooling market.
Smaller Mechanical Rooms
Liquid cooling systems are physically smaller than equivalent air-cooling systems. The CDUs, pumps, and heat exchangers that serve a liquid-cooled data hall occupy a fraction of the space that computer room air handlers and their associated ductwork would require. This translates into smaller mechanical rooms, which frees up space for computing equipment and reduces overall building size.
Simplified Building Envelope
Air-cooled data centers require careful design of the building envelope to manage air flow, prevent unwanted air infiltration, and maintain precise temperature and humidity control. Liquid-cooled data centers have less stringent envelope requirements because the critical heat transfer happens in a closed liquid system, not in the building's air. This can simplify and reduce the cost of the building shell and architectural systems.
Modified Fire Protection
The fire protection systems in liquid-cooled data centers differ from those in air-cooled facilities. Immersion cooling tanks containing non-conductive, non-flammable fluids have different fire risks than rows of air-cooled server racks, and fire protection system design must account for these differences. Pre-action sprinkler systems remain common in liquid-cooled facilities, but the clean agent suppression systems used in air-cooled data halls may be unnecessary or different in liquid-cooled environments.
Who Is Building Liquid-Cooled Data Centers?
Liquid cooling adoption is being driven primarily by the hyperscale operators deploying AI computing hardware at scale.
NVIDIA partnership programs: NVIDIA, whose GPU hardware is the dominant platform for AI computing, has been actively promoting liquid cooling for its highest-density products. The company has established partnerships with multiple liquid cooling vendors and has published reference architectures for liquid-cooled data centers. These reference architectures are influencing how operators and their construction partners design and build facilities.
Microsoft: Microsoft has deployed liquid cooling at several of its data centers, including both cold plate systems for GPU clusters and experimental immersion cooling for broader applications. The company's standardized data center designs increasingly incorporate liquid cooling provisions, even in facilities that initially deploy air cooling.
Google: Google has been experimenting with liquid cooling for several years and is deploying it at scale in its newest facilities. The company's proprietary cooling designs are among the most advanced in the industry.
Meta: Meta has tested immersion cooling at its Papillion, Nebraska campus and is evaluating broader deployment across its facility portfolio.
Colocation operators: Several major colocation operators, including Equinix, Digital Realty, and CyrusOne, are offering or planning to offer liquid-cooled hosting environments for customers with high-density computing needs.
What Contractors Should Do
For mechanical contractors and general contractors serving the data center market, the shift to liquid cooling requires proactive preparation.
Train your pipefitters. The precision piping skills required for liquid cooling systems are not common in the conventional HVAC workforce. Invest in training programs that teach clean piping practices, specialty fluid handling, and leak testing protocols. Workers with pharmaceutical or semiconductor piping experience are particularly valuable hires.
Develop relationships with cooling technology vendors. The liquid cooling market is served by specialized vendors — GRC (Green Revolution Cooling), LiquidCool Solutions, Asetek, CoolIT Systems, and Submer, among others — whose products require trained installers. Establishing partnerships with these vendors can provide access to training, technical support, and preferred installer status.
Understand the design implications. Liquid cooling changes everything from structural loading to fire protection to electrical distribution. Make sure your estimating and project management teams understand these implications so they can accurately scope, price, and execute liquid-cooled data center projects.
Plan for hybrid facilities. Many data centers will operate hybrid cooling environments, with liquid cooling for high-density AI computing areas and air cooling for lower-density general computing areas. Contractors need to be capable of installing and integrating both types of systems within a single facility.
For a broader view of how data center construction is evolving, including workforce challenges and spending trends, see our construction spending forecast and our analysis of the construction workforce gap.
The Bottom Line
Liquid cooling is not a future technology — it is being deployed now, at scale, in the most advanced data centers being built. The shift from air cooling to liquid cooling represents the most significant change in data center mechanical systems in the industry's history, and it is creating both challenges and opportunities for the construction firms that build these facilities.
The contractors who invest now in developing liquid cooling capabilities — through training, hiring, vendor partnerships, and project experience — will be positioned to capture a growing share of the most technically demanding and highest-value segment of data center construction. Those who wait will find themselves competing for an air-cooled market that is shrinking as a share of total data center construction spending.
The future of data center cooling is liquid, and the future of data center construction belongs to the firms that can build it.
READ NEXT: Prefab and Modular Data Centers — The Factory-Built Future