AI DC – Renaissance and New Thinking Required. Article 2 of 5
The Blind Spot the Industry Built Into Itself
James Soh

This article speaks most directly to C-level leadership and business planners. The implications run through to design and construction teams.
Article 1 closed with a proposition. The machine is the building again. For those of us who have spent careers in the data center industry, that is not just a historical observation. It is a strategic reorientation. And it raises an immediate question for C-level leadership: does your business model reflect it?
For most organisations in this industry today, the honest answer is no. Not because of poor leadership. Because of a blind spot that the industry built into itself over three decades, rationally and productively, in service of a world that has now changed.
Understanding how that blind spot formed is the first step toward thinking past it.
What the Disaggregation Era Built
The commodity compute era was a genuine achievement. The shift from integrated minicomputer systems to networked x86 servers democratised computing on a scale that would have been unimaginable in the Cray era. It built the internet economy. It created the software industry as we know it. And it built the data center industry into a mature, bankable, globally scaled asset class.
The x86 and cloud hybrid world that followed extended that achievement further.
Virtualisation, hyperscale, and the public cloud made compute accessible to organisations of every size. The data center industry industrialised around serving that world. Tier classifications, PUE metrics, white space sold by the square metre, power capacity sold by the megawatt. Standards emerged. Costs became predictable. The facility became a financeable infrastructure product with an understood risk profile and a stable tenant base.
That is not a small thing. An entire generation of C-level leaders built substantial businesses and real careers on those foundations. The assumptions that drove those businesses were not wrong. They were precisely right for the era that produced them.
The blind spot did not form because those leaders made mistakes. It formed because they optimised successfully for one world, and a different world arrived.
The Blind Spot
There is a question I have been asking C-level leaders in the data center industry across Asia-Pacific for the past two years.
When you plan a new data center, what do you start with?
The answer is almost always the same. Power capacity. Land. Grid access. Connectivity. Funding. Construction timeline. These are the inputs that drive the business case, the design brief, and the go-to-market model.
Nobody starts with the compute.
In the commodity x86 world, that was the right starting point. The compute was generic. Any tenant running standard servers could be accommodated in a well-designed shell. Diversity of tenants meant diversity of load profiles. Infrastructure was sized for peak with diversity applied. The facility was the product and the tenant was an abstraction, a kilowatt figure in a lease agreement. The mental model worked because the assumptions underneath it were valid.
The GPU cluster invalidated every one of those assumptions simultaneously.
Where the Knowledge Boundary Sat
To understand why, it helps to understand what changed about the industry's knowledge, not just its technology.
In the minicomputer era, the knowledge boundary of the data center industry included the computer completely. Designers and operators knew what the machine needed at every level. Its thermal characteristics. Its power draw under different workloads. Its interconnects. Its scheduler. Its behaviour under full load. The facility was built around that knowledge from first principles. When Seymour Cray chose immersion cooling for the Cray-2, it was not an exotic engineering choice. It was the natural result of understanding the machine deeply enough to know that air was physically insufficient. The building existed to serve the computer, and the people who built the building understood the computer.
When x86 servers arrived, the knowledge boundary contracted. The compute became generic and the industry stopped needing to know what was inside the rack. The rack became the boundary of professional responsibility. Then the data hall became the boundary. Standard servers with predictable power draws and standard thermal profiles did not require the facility team to understand the compute. The shell was the product. Everything inside the racks was the tenant's domain. That contraction was entirely rational because the compute was entirely generic.
That contracted knowledge boundary is exactly what is now failing the industry.
A 10-rack NVLink GPU cluster is a single computer. An NVIDIA SuperPod is a computer, drawing close to a megawatt, operating as one integrated system across every rack within it. The industry is looking at these systems through a knowledge boundary that stops at the rack, or at the data hall, and seeing a collection of servers. It is not seeing the computer. That is the blind spot in its precise technical form: not a failure of ambition, but a structural limit on what the industry currently considers within its scope to understand.
Closing that gap requires the same thing the minicomputer era required. The people who plan, design, and operate these facilities need to know the machine.
What Changed
The GPU cluster is not a server. It is a computer. A tightly integrated system where the rack is the unit of integration, not the individual machine. It runs at or near its nameplate power continuously. There is no meaningful diversity factor to apply. A 10 megawatt AI data center draws 10 megawatts. Not on a bad day. Every day. Liquid cooling is not optional. At 100 to 227 kilowatts per rack, air cooling is physically inadequate. The cooling infrastructure must be designed into the facility from the beginning, not retrofitted.
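To see what the missing diversity factor does to the numbers, here is a minimal sketch. The diversity and utilisation figures are illustrative assumptions, not industry constants.

```python
# Illustrative comparison: annual energy of a 10 MW facility under two models.
# The diversity factor and utilisation figures below are assumptions for the
# sake of the example, not industry constants.

NAMEPLATE_MW = 10.0
HOURS_PER_YEAR = 8760

# Traditional colocation: sized for peak with diversity applied, and tenants
# that rarely draw their contracted peak simultaneously.
colo_diversity = 0.85        # assumed: not every tenant peaks at once
colo_utilisation = 0.60      # assumed: average draw as a share of diversified peak
colo_mwh = NAMEPLATE_MW * colo_diversity * colo_utilisation * HOURS_PER_YEAR

# AI cluster: the GPU cluster runs at or near nameplate continuously.
ai_load_factor = 0.95        # assumed: near-continuous full load
ai_mwh = NAMEPLATE_MW * ai_load_factor * HOURS_PER_YEAR

print(f"Colocation-style assumption: {colo_mwh:,.0f} MWh per year")
print(f"Continuous full load:        {ai_mwh:,.0f} MWh per year")
print(f"Ratio: {ai_mwh / colo_mwh:.1f}x")
```

Under these assumptions the AI facility consumes nearly twice the energy the old mental model would predict, and the gap flows straight into OPEX.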
A C-level leader who applies traditional colocation assumptions to an AI data center business case will produce numbers that are wrong in ways that compound. CAPEX underestimated because liquid cooling infrastructure costs more than air. OPEX underestimated because full continuous power draw is the operating condition, not the emergency condition. Revenue assumptions misaligned because the product being sold is not white space. It is AI compute capacity, which is measured in tokens, not square metres.
The Counter-Example
In 2024, Elon Musk's xAI built the Colossus AI cluster in Memphis, Tennessee. One hundred thousand NVIDIA GPUs. The facility was commissioned in under a year.
The reason it moved fast is instructive. The team did not start with the building. They started with the compute. Every decision flowed from what the GPU cluster required. The facility was the support system. Prefab and offsite manufacturing were used aggressively because precision-engineered components built under factory conditions serve AI infrastructure requirements better than field assembly. The construction timeline was a consequence of starting from the right place.
Colossus is not an outlier. It is the model. It is what happens when C-level leadership understands that in an AI data center, the compute is the business and the building is the enabler.
The Scale of What Is Coming
This is not a niche market transition. Consider the demand signal.
At GTC 2026, NVIDIA CEO Jensen Huang announced purchase orders for Blackwell and Vera Rubin systems reaching one trillion dollars through 2027. That is double the estimate from twelve months earlier. NVIDIA Cloud Partner deployments doubled in a single year, reaching more than 1 million GPUs and 1.7 gigawatts of AI capacity.
Seventeen major enterprise software platforms, including Adobe, Salesforce, SAP, and ServiceNow, committed to building on NVIDIA's Agent Toolkit at GTC 2026. These are the platforms that run the workflows of the Fortune 500. When they build for agents, enterprise inference demand becomes structural, not speculative. The agents embedded in their workflows are production systems generating tokens continuously, driving demand that compounds every quarter.
The C-level leader sizing an AI data center today is not sizing for current workloads. They are sizing for a demand curve that is still accelerating. Infrastructure decisions made today will determine capacity position for the next five to ten years. The leaders who make the model shift now will hold the infrastructure advantage when agentic AI hits full enterprise production.
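To make "still accelerating" concrete, a small illustrative calculation: at an assumed 15 percent quarterly growth in token demand, a placeholder figure rather than a forecast, demand triples in roughly two years.

```python
# Illustrative only: quarterly compounding of inference demand against the
# capacity commissioned today. The growth rate is a placeholder, not a forecast.

quarterly_growth = 0.15   # assumed quarterly growth in token demand
demand = 1.0              # normalised demand today
capacity = 3.0            # capacity commissioned today, same normalised units

for quarter in range(1, 21):          # a five-year planning horizon
    demand *= 1 + quarterly_growth
    if demand > capacity:
        print(f"Demand passes 3x today's level in quarter {quarter}, "
              f"around year {quarter / 4:.1f}.")
        break
```

The specific numbers do not matter. The shape does: a facility sized only for today's workloads is sized for a point the curve has already left behind.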
New Thinking for a New Era
The renaissance this series is named after is not a repudiation of what the industry built. The disaggregation era and the cloud era were genuinely productive. The business models, the engineering standards, the operational disciplines, the financial frameworks -- all of it has real value and much of it carries forward.
What does not carry forward is the starting question. And what must be recovered is the knowledge boundary.
C-level leaders in the data center industry need to ask different questions now. Not what power capacity can we sell, but what compute workloads are we building to serve. Not what is our construction cost per megawatt, but what is our cost per token delivered. Not how do we fill white space, but how do we design a facility that maximises the performance of the AI system it houses.
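What does cost per token look like as a calculation? A minimal sketch, with every input an assumption chosen to show the shape of the question rather than a benchmark:

```python
# Illustrative cost-per-token sketch. Every input below is an assumption chosen
# to show the shape of the calculation, not a benchmark.

capex_usd = 400_000_000          # assumed: facility plus cluster capital cost
amortisation_years = 5           # assumed depreciation horizon
opex_usd_per_year = 60_000_000   # assumed: power, cooling, staffing, maintenance

tokens_per_second = 5_000_000    # assumed sustained cluster-wide throughput
utilisation = 0.70               # assumed share of the year serving revenue tokens
seconds_per_year = 365 * 24 * 3600

tokens_per_year = tokens_per_second * utilisation * seconds_per_year
annual_cost = capex_usd / amortisation_years + opex_usd_per_year

print(f"Tokens delivered per year: {tokens_per_year:.2e}")
print(f"Cost per million tokens:   ${annual_cost / (tokens_per_year / 1e6):.2f}")
```

The output figure is not the point. The point is that every lever in that calculation, from cooling efficiency to cluster utilisation, sits on both sides of the old knowledge boundary.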
There is a harder question beneath those three. Who in your organisation can actually answer them? The past two decades of data center growth produced a project and development leadership class that is overwhelmingly drawn from the MEP (mechanical, electrical, and plumbing) and CSA (civil, structural, and architectural) disciplines. That expertise is necessary and it is not going away. But it is no longer sufficient at the leadership level of an AI DC programme.
A head of development or project director who has no working knowledge of AI infrastructure -- who cannot read an NVLink topology, interpret a SuperPod specification, or reason about the thermal and power implications of a continuous full-load GPU cluster -- extends the blind spot directly into the delivery chain. The knowledge gap does not close because the C-level resolves to think differently. It closes when the people executing the programme understand the machine they are building for.
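That kind of reasoning is not exotic. A back-of-envelope example, with assumed rack power and coolant temperature rise:

```python
# Back-of-envelope check a project director should be able to do: coolant flow
# needed to carry away the heat of one full-load GPU rack. The rack power and
# the allowed coolant temperature rise are assumed figures.

rack_power_kw = 130.0          # assumed continuous rack load
delta_t_c = 10.0               # assumed coolant temperature rise across the rack
water_cp_kj_per_kg_c = 4.186   # specific heat of water
water_density_kg_per_l = 1.0   # approximate density of water

# Q = m_dot * c_p * delta_T, so m_dot = Q / (c_p * delta_T)
mass_flow_kg_per_s = rack_power_kw / (water_cp_kj_per_kg_c * delta_t_c)
flow_l_per_min = mass_flow_kg_per_s / water_density_kg_per_l * 60

print(f"Required coolant flow: about {flow_l_per_min:.0f} litres per minute per rack")
```

Multiply that by every rack in the hall and the flow, pumping, and pipework implications of continuous full load become a design input, not a detail.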
My own background crosses both sides of that boundary. I came from computer science and worked in IT before moving into computer room management and then into the data center industry. When I recently sat the NVIDIA NCA-AIIO (AI Infrastructure and Operations) certification to understand the AI infrastructure at the core of the AI DC era, I was not doing something unusual for someone with that background. I was doing what the machine requires. C-level leadership needs more people in project and development roles who are willing to make the same crossing.
The design and construction team needs to be in the room when the compute is specified, not handed a brief after the business case is closed. The operations team needs to understand what is running in the racks, not just what is happening at the facility layer. And the project leadership sitting between strategy and delivery needs the AI infrastructure knowledge to translate one into the other.
The C-level leader who makes this shift -- in thinking, in questions, and in team composition -- is not correcting a mistake. They are leading the renaissance. The machine is the building again. The knowledge boundary and the business model should both start there.
Next: Article 3 -- The AI DC Is Not a Data Center with GPUs