Executive Summary
The data centre industry is entering a new era defined by artificial intelligence (AI), high-performance computing (HPC) and the accelerating demands of hyperscale platforms. Traditional facilities, designed for racks consuming just 5–10 kilowatts (kW), are no longer capable of supporting the power and cooling requirements of next-generation workloads.
London, with its limited land availability and constrained power grid, stands at the centre of this challenge. Operators must now deliver significantly more compute capacity within the same footprint while aligning with sustainability mandates. Rear Door Heat Exchangers (RDHx) present a practical, scalable and sustainable solution, enabling rack densities of 40–100 kW and beyond. This paper explains why high-density is no longer optional, how RDHx technology addresses the critical cooling challenge and why Arlofin is uniquely positioned to deliver the infrastructure of the AI era.
The Evolution of Data Centre Density
Over the past three decades, data centre rack densities have increased at an exponential rate. In the 1990s, enterprise workloads typically required no more than 2–5 kW per rack. The early 2010s saw this figure rise to 10–15 kW as cloud computing services gained dominance. Today, with AI clusters and HPC systems consolidating immense compute capacity into fewer racks, densities of 40–100 kW per rack are quickly becoming the new normal.
This transition reflects a fundamental change in computing — a shift away from distributed workloads toward concentrated GPU clusters and specialised servers. The consequences of this change are profound, extending across electrical distribution, facility layout and, most urgently, cooling infrastructure.
Why High-Density Now
The imperative for high-density deployments is driven by three converging trends.
First, AI and HPC workloads demand enormous amounts of compute power. Training large language models, for example, can require megawatts of capacity consolidated into a handful of racks.
Second, hyperscale clients increasingly expect colocation partners to provide 40–100 kW per rack as standard, shifting industry benchmarks overnight.
Finally, in metropolitan regions such as London, the economics of land and energy availability mean operators must maximise compute per square metre. High-density is therefore no longer a matter of technical preference — it is a commercial and strategic necessity.
The Cooling Challenge
Cooling is the critical bottleneck for high-density deployments. Conventional air-based cooling methods, such as Computer Room Air Conditioning (CRAC) units or hot/cold aisle containment, can manage up to around 15–20 kW per rack. Beyond that threshold, airflow requirements and fan energy increase sharply, creating inefficiencies, hotspots and unsustainable operational costs.
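To illustrate why air alone becomes impractical, the sensible-heat relationship gives a rough estimate of the airflow a rack needs at a given power and air temperature rise. The sketch below is a simple illustration; the 12 K air-side rise and the example rack powers are assumed values, not measured figures.

    # Illustrative estimate of the airflow needed to cool a rack with air alone.
    # Assumed values (rack powers, allowable air temperature rise) are examples only.

    AIR_DENSITY = 1.2         # kg/m^3, approximate at room conditions
    AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)

    def required_airflow_m3_per_s(rack_power_kw: float, delta_t_k: float) -> float:
        """Volumetric airflow needed to remove rack_power_kw with a delta_t_k air temperature rise."""
        power_w = rack_power_kw * 1000
        return power_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)

    for kw in (10, 20, 50, 100):
        flow = required_airflow_m3_per_s(kw, delta_t_k=12)  # assume a 12 K air-side temperature rise
        print(f"{kw:>3} kW rack: ~{flow:.2f} m^3/s (~{flow * 2119:.0f} CFM)")

Because the required airflow scales linearly with rack power, a 100 kW rack needs roughly ten times the airflow of a 10 kW rack, which is where fan energy, noise and hotspot management become impractical.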
While direct-to-chip and immersion cooling technologies provide promising long-term answers, both require new hardware ecosystems and significant capital investment. Rear Door Heat Exchangers offer an immediate, proven alternative — a liquid-assisted cooling solution that supports densities up to 100 kW per rack without disruptive retrofits.
Rear Door Heat Exchangers Explained
RDHx technology works by integrating a chilled-water coil into the rack’s rear door. As servers expel hot exhaust air, the air passes through the coil, where heat is absorbed and neutral air is released back into the room. This approach removes heat at the source, eliminates the mixing of hot and cold air and significantly reduces the burden on facility-wide cooling systems.
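The water side of the coil is sized by the same sensible-heat relationship. The sketch below estimates the chilled-water flow an RDHx would need for a given rack load; the 6 K water temperature rise and the rack loads are illustrative assumptions rather than vendor specifications.

    # Illustrative chilled-water flow estimate for a rear door heat exchanger coil.
    # The rack loads and water temperature rise below are assumptions for illustration.

    WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

    def water_flow_l_per_min(rack_power_kw: float, water_delta_t_k: float) -> float:
        """Water flow (litres/minute) needed to absorb rack_power_kw with a water_delta_t_k rise."""
        mass_flow_kg_s = (rack_power_kw * 1000) / (WATER_SPECIFIC_HEAT * water_delta_t_k)
        return mass_flow_kg_s * 60  # roughly 1 kg of water per litre

    for kw in (40, 60, 100):
        print(f"{kw} kW rack: ~{water_flow_l_per_min(kw, water_delta_t_k=6):.0f} L/min at a 6 K water rise")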
RDHx units can operate passively, relying on existing server fans, or actively, with additional fans integrated for higher loads. They are inherently modular, capable of being deployed rack by rack and equally suitable for retrofits in legacy facilities or new-build, high-density campuses.
The Financial Case for RDHx
The financial rationale for adopting RDHx is compelling. Although the technology requires upfront capital investment, its retrofit capability significantly reduces costs compared to immersion or direct-to-chip cooling. Operating expenses are lower due to reduced fan energy consumption and higher cooling efficiency.
For colocation providers, RDHx enables premium pricing for high-density space, particularly for AI and hyperscale clients who demand such capability. In most deployments, operators achieve stronger returns per square metre and a payback period of three to five years.
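As a simplified illustration of how that payback range can arise, the sketch below combines an assumed per-rack capital cost with assumed annual energy savings and density premiums. All figures are hypothetical placeholders, not Arlofin pricing or measured project data.

    # Simplified RDHx payback sketch. All figures are hypothetical placeholders.

    capex_per_rack = 20_000.0         # assumed RDHx unit + pipework cost per rack (GBP)
    annual_energy_saving = 3_000.0    # assumed cooling-energy saving per rack per year (GBP)
    annual_density_premium = 2_500.0  # assumed extra revenue from high-density space (GBP)

    annual_benefit = annual_energy_saving + annual_density_premium
    payback_years = capex_per_rack / annual_benefit
    print(f"Simple payback: ~{payback_years:.1f} years")  # ~3.6 years with these assumptions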
Sustainability and Regulation
Environmental performance is no longer optional. Regulators in the UK and EU are tightening requirements on Power Usage Effectiveness (PUE) and Water Usage Effectiveness (WUE), while investors and end users demand demonstrable ESG commitments.
RDHx contributes directly to sustainability outcomes. By removing heat at source, it reduces energy waste in cooling systems and enables lower PUE scores. Crucially, the heat captured can be reused in district heating networks, contributing to circular energy systems. For operators in London, where scrutiny of environmental impact is intensifying, RDHx provides both compliance and competitive advantage.
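PUE is simply total facility energy divided by IT equipment energy. The sketch below compares an assumed air-cooled baseline against an assumed RDHx-assisted facility to show how removing heat at source lowers the ratio; the overhead figures are illustrative assumptions only.

    # Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
    # The overhead figures below are illustrative assumptions, not measured values.

    def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
        return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

    baseline = pue(it_load_kw=1500, cooling_kw=600, other_overhead_kw=150)   # assumed air-cooled facility
    with_rdhx = pue(it_load_kw=1500, cooling_kw=300, other_overhead_kw=150)  # assumed RDHx-assisted facility

    print(f"Baseline PUE:  {baseline:.2f}")   # ~1.50
    print(f"With RDHx PUE: {with_rdhx:.2f}")  # ~1.30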
Case Studies and Deployment Models
The practical value of RDHx is already evident across industries.
- A London financial services firm consolidated 300 × 5 kW racks into 30 × 50 kW racks cooled with RDHx. The result was a 70% footprint reduction and a 30% decrease in cooling energy costs.
- A European colocation provider deployed RDHx to support AI tenants at 40–60 kW per rack, avoiding the need for a costly immersion retrofit.
- A German research institution adopted RDHx for HPC workloads up to 80 kW, achieving a PUE of 1.2.
These examples demonstrate RDHx’s adaptability across finance, colocation and research deployments.
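The consolidation arithmetic behind the first example is straightforward, as the sketch below shows. The rack counts and densities come from the case study; the closing note on footprint is a general observation, not project data.

    # Consolidation check for the first case study. Rack counts and densities
    # are taken from the case study above; the rest is simple arithmetic.

    legacy = {"racks": 300, "kw_per_rack": 5}
    dense = {"racks": 30, "kw_per_rack": 50}

    legacy_kw = legacy["racks"] * legacy["kw_per_rack"]
    dense_kw = dense["racks"] * dense["kw_per_rack"]

    print(f"Legacy capacity:       {legacy_kw} kW across {legacy['racks']} racks")
    print(f"High-density capacity: {dense_kw} kW across {dense['racks']} racks")
    print(f"Rack count reduced by {100 * (1 - dense['racks'] / legacy['racks']):.0f}%")
    # The quoted 70% footprint saving is smaller than the 90% rack-count reduction
    # because high-density rows typically need more floor area per rack
    # (deeper cabinets, pipework and service clearances).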
Scaling to 100 kW and Beyond
RDHx systems are effective up to 100 kW per rack when combined with robust facility design. For ultra-high-density workloads, hybrid models are emerging in which RDHx handles general rack exhaust while direct-to-chip cooling targets the hottest components, such as GPUs and CPUs.
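To make that division of labour concrete, the sketch below splits a hypothetical ultra-dense rack’s heat load between a direct-to-chip loop and the RDHx. The rack power and capture fraction are illustrative assumptions, not a design specification.

    # Illustrative heat-load split for a hybrid-cooled rack: direct-to-chip (D2C)
    # liquid cooling captures most of the GPU/CPU heat, and the RDHx absorbs the
    # remaining rack exhaust. The figures below are assumptions for illustration.

    rack_power_kw = 120.0        # hypothetical ultra-dense rack
    d2c_capture_fraction = 0.75  # assumed share of rack heat captured at the chip

    d2c_load_kw = rack_power_kw * d2c_capture_fraction
    rdhx_load_kw = rack_power_kw - d2c_load_kw

    print(f"Direct-to-chip loop:      ~{d2c_load_kw:.0f} kW")
    print(f"Rear door heat exchanger: ~{rdhx_load_kw:.0f} kW")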
Arlofin’s modular designs embrace this hybrid approach, ensuring flexibility for today while future-proofing for tomorrow’s ultra-dense deployments.
Strategic Outlook
The decade ahead will be defined by operators who can support the demands of AI-driven workloads. In constrained markets like London, high-density will be the only viable growth strategy.
Rear Door Heat Exchangers represent a mature, cost-effective and sustainable bridge technology. They are deployable today, align with regulatory requirements and create a pathway toward even more advanced cooling methods. Operators who adopt RDHx now will secure first-mover advantage in serving hyperscale and AI demand, while competitors remain constrained by legacy infrastructure.
About Arlofin
Arlofin is a new kind of data centre platform built for the AI era. Our facilities are designed to accommodate densities of 10–120 kW per rack, integrating rear door heat exchangers, liquid-to-liquid cooling and modular prefabricated construction. We combine scalability with sustainability, embedding renewable energy and hydrogen-ready systems into our designs.
With projects planned across the UK, Europe and the US, Arlofin delivers the infrastructure that powers innovation while reducing environmental impact. Our mission is clear: to build the data centres of the future, efficient, sustainable and ready for AI.