
AI-ERA DATA CENTERS IN SOUTHEAST ASIA – PART 3 OF 6

  • Writer: datacenterprimerja
  • Feb 26
  • 11 min read

How AI Tenants Read Your Facility

James Soh

Every data center developer in Southeast Asia eventually faces the same moment: a technically sophisticated AI client walks your campus, reads your design and your operations, and makes a judgment your lease team will not fully hear until weeks later. The operations supervisor knows which hall isn’t quite ready. The project manager watches the client’s engineers slow down at the battery room. The supplier notices their equipment being quietly photographed. Everyone in the building recognises the moment — but rarely does the organisation discuss what was actually being read, and what that judgment means for the capital already committed and the decisions still ahead. This article is about closing that gap.

Parts 1 and 2 looked from the inside out: the commitments you lock into speculative builds, and the operations and HR structures needed to run AI-density, battery-heavy halls. They framed AI-era data centers through three lenses – Design for Safety, Design for Operations, and technology relevance. In this Part 3, we stay with that framework but switch seats: technology relevance shows up from the client side, in how AI tenants read those same design and operations choices when they decide where to land workloads.

———

1. How AI clients actually shop for capacity

When an AI tenant comes to the table, they usually have two filters running in parallel.

The first is the capacity-and-timeline filter. They ask how much power you have that is genuinely available now, how much can be brought online within the next 6–10 months, and what that looks like at hall or zone level rather than just at the campus headline. Sites that cannot meet those windows on paper, or that rely on very uncertain permits or grid upgrades, drop down the list quickly. This plays out differently depending on where you sit in the region: Singapore’s constrained allocations, Johor’s rapid TNB build-out, Batam’s grid certainty questions beyond 2028, and Thailand’s Eastern Economic Corridor each present AI clients with a different version of the same filter. Both Johor and the EEC are under real capacity pressure today, but that will ease; Singapore’s allocation constraints are structural and deliberate rather than temporary; and Batam’s grid position requires the most careful ground-truthing of all. In every case, published capacity claims and current headlines are rarely the same thing as what a site can actually deliver on a given timeline, or at eventual full build.


The second is the AI-workload filter that only comes into play once power and timing look workable. At that stage they move from “how many megawatts” to “what kind of megawatts”: can your halls, cooling systems, and protection schemes support their AI loads without repeated throttling, nuisance trips, or long recovery every time something goes wrong?
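The two-stage screening described above can be sketched as a simple model. Everything here, the field names, thresholds, and example sites, is hypothetical; the sketch only illustrates the order in which the filters are applied, with the capacity-and-timeline filter eliminating sites before workload fit is ever considered:

```python
from dataclasses import dataclass

@dataclass
class Site:
    """Hypothetical per-site data an AI tenant might assemble. All fields are illustrative."""
    name: str
    mw_available_now: float      # power genuinely energised today
    mw_within_10_months: float   # credible additions inside the 6-10 month window
    permits_secured: bool        # land, zoning, and grid position de-risked on paper
    max_rack_kw: float           # density the built halls actually support
    liquid_cooling_path: bool    # credible path beyond air-only cooling

def capacity_filter(site: Site, mw_needed: float) -> bool:
    """First filter: is enough power real and deliverable in the window?"""
    return (site.mw_available_now + site.mw_within_10_months >= mw_needed
            and site.permits_secured)

def workload_filter(site: Site, rack_kw: float, needs_liquid: bool) -> bool:
    """Second filter: are these the right *kind* of megawatts for AI loads?"""
    if site.max_rack_kw < rack_kw:
        return False
    return site.liquid_cooling_path or not needs_liquid

sites = [
    Site("A", 20, 15, True, 60, True),
    Site("B", 40, 0, False, 80, True),  # bigger headline MW, permits still in flight
]
# Tenant needs 30 MW, 50 kW racks, and a liquid-cooling path.
shortlist = [s.name for s in sites
             if capacity_filter(s, 30) and workload_filter(s, 50, True)]
# Site B drops out at the first filter despite the larger headline figure.
```

The point the sketch makes is the one in the text: a site never reaches the workload conversation if its power position fails the first filter, no matter how impressive the headline megawatts.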


The people running these filters are not only commercial or real-estate leads. They bring infrastructure and reliability engineers who know how to read one-lines, floor plans, and maintenance practices, whether they are assessing a full campus for multi-phase expansion or a smaller AI cluster in an existing hall. Those engineers compare what you show them against internal standards and against problems they have seen elsewhere – from overheated aisles and liquid-cooling issues to battery layouts that made everyone nervous.

On a first technical visit, they pay close attention to where you take them, which halls you emphasise as “ready” versus “future phase”, and how directly your operations leads answer questions about density limits, cooling modes, incident history, and safety envelopes. In their notes, your campus stops being just “X megawatts available” and becomes a specific set of halls and timelines they may or may not be willing to trust for their AI roadmap.


Not every deal follows a long, methodical evaluation. For some AI clients, the global reputation of a large data center operator, or prior experience in other regions, carries enough weight that they are willing to move fast and sign after a first visit. In those cases, the technical walk-through is less about discovering issues and more about confirming that what they already believe about your organisation – design discipline, operations maturity, and regulator relationships – also shows up on the ground in this specific campus.

———

2. The three lenses: density, cooling, batteries

Once power and timelines clear the first filter, AI clients start looking at your site through three linked lenses: how much density you can really support, how you handle the heat, and where the battery risk sits. They are working out not just “can I land here” but “which halls, for which workloads, at what risk.”


Density and power behaviour.

On paper, many sites now claim support for 40–80 kW per rack. What AI clients want to see is how that looks in practice: which halls are actually built and powered for those numbers today, what derate rules apply, and how much headroom is left once diversity and concurrent maintenance are taken into account. They ask how power is allocated – per rack, per pod, per hall – and how you handle sharp swings when synchronised training jobs ramp up and down, because those swings determine how often protections will trip and how fragile maintenance windows become.


Cooling and thermal headroom.

The second lens is whether your cooling story matches those density claims. Clients look for a credible path from air-only halls to liquid-assisted or direct-to-chip cooling in specific spaces, with real capacity numbers and project timelines instead of generic “liquid-ready” labels. They want to understand the margin between design conditions and how you actually operate: if a few things go off-nominal – a partial failure, a warmer day, a mis-set valve – do their GPUs keep running, or do you quickly hit thermal limits and start throttling workloads to stay safe?


Batteries and ESS risk.

The third lens is where energy storage lives and what that means in an incident. Clients map where your batteries and ESS sit – central UPS rooms, in-hall systems, or tenant-provided sidecars – and how that interacts with zoning, fuel load, and suppression choices. They are trying to see which halls will remain usable for dense AI if a battery event occurs, how long it would take you to recover capacity, and whether some layouts create “single points of regret” where one fire compartment can take out too much critical load at once.


Seen together, these three lenses let an AI tenant translate a generic “AI-ready campus” pitch into a concrete map: which halls they are willing to use for AI training clusters, which they might reserve for lighter inference or non-GPU workloads, and which halls they will not consider for GPUs at all because of power, cooling, or ESS constraints.

———

3. What gives you away: signals AI tenants look for

For many Southeast Asia sites – especially newer players – the proof is not “here is my live liquid hall,” but “here is how far we have already de-risked land, power, build, and delivery delay.” AI tenants know it is expensive to build and equip a full AI hall speculatively, so they judge you first on whether the hard prerequisites are real and executable, not on whether you can show them someone else’s running cluster.


The MEP equipment supply chain and ecosystem that AI clients navigate — and through which they read your facility. The engineers walking your campus recognise these vendor relationships from their own procurement history. Source: Data Center Primer, James Soh (ISBN 9789819439768).

The first signal is how solid your land and power position is: land-lease or land-purchase terms, zoning and key permits, and a power-supply agreement with the authority that actually has jurisdiction. What that looks like varies markedly across the region. In Singapore it means a formal allocation under a government-governed process. In Johor it means a signed TNB agreement tied to specific substation capacity. In Batam it means PLN milestones and a credible view of grid headroom through the period the client needs. In Thailand’s EEC it means BOI approvals and a clear path through the Direct PFA framework. In each case, the signal is the same: you have done the hard work before asking the client to commit.


The second signal is build-readiness: design far enough along to award, a clear path to appointing the main contractor, and a programme that ties civil, MEP, and power milestones to the capacity they need now and in the next 6–10 months. On top of that, contract structures that include meaningful penalties or remedies for delay often give more comfort than any “AI-ready” label on a slide.


Because live AI halls are usually under NDA and client control, most operators cannot walk prospects through an active high-density room even if they have one. In those cases, AI tenants listen to how your team talks about existing complex projects in general terms, and how clearly you explain who will own their hall, how incidents will run, and how safety and compliance will be built into change decisions once they sign.


Operations and governance signals still matter even before a hall is built out. When clients ask, “If we land 40–60 kW racks here, who will own that hall, and how will incidents run?”, they are looking for clear answers on role ownership, escalation paths, and how safety and compliance are embedded in change decisions. If those answers are crisp and consistent, they infer that the same discipline will apply once their space goes live.

———

4. How AI tenants segment your campus – and what you should do about it

From the AI client’s point of view, a campus is never “one big block of capacity.” They mentally slice it into different buckets: halls they are willing to trust for AI training, halls they might use for lighter inference or supporting workloads, and halls they will not put GPUs into at all because of power, cooling, or ESS constraints.


Mature operators do this segmentation explicitly. They know which halls can realistically support 40–60 kW racks in the next 6–10 months, with a credible path to liquid-assisted cooling and clear ESS governance; those become their internal “AI halls,” even if the branding is not public. Other halls are positioned honestly as conventional spaces with tighter limits, suitable for non-GPU IT or lower-density workloads. Being clear about these internal tiers – and reflecting that in how you speak to AI clients – is often more persuasive than presenting the entire campus as uniformly “AI-ready,” because it matches how those clients already think and decide.
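That internal tiering can be sketched as a minimal classification rule. The 40 kW and 15 kW cut-offs below are invented for illustration, not industry standards, and the hall specifications are hypothetical:

```python
def classify_hall(rack_kw_limit: float, liquid_ready: bool, ess_governed: bool) -> str:
    """Map one hall into the internal tiers described above.
    Thresholds and criteria are illustrative only."""
    if rack_kw_limit >= 40 and liquid_ready and ess_governed:
        return "AI training"
    if rack_kw_limit >= 15:
        return "inference / supporting workloads"
    return "conventional IT only"

# Hypothetical campus: (rack kW limit, liquid-cooling path, clear ESS governance)
campus = {
    "Hall 1": (60, True, True),
    "Hall 2": (20, False, True),
    "Hall 3": (8, False, False),
}
tiers = {name: classify_hall(*spec) for name, spec in campus.items()}
```

Running this over a real campus inventory, with honest inputs, produces exactly the map the article describes: the internal “AI halls,” the conventional spaces, and the halls that should never be pitched for GPUs.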



For operators, the practical takeaway is simple but uncomfortable. AI tenants in Southeast Asia do not buy an “AI-ready” label. They buy a combination of de-risked land and power, believable delivery timelines for capacity now and in the next 6–10 months, and a campus map that is honest about which halls can really carry AI workloads. Making that de-risking visible, segmenting your campus on purpose, and aligning your operations and contracts to what you promise are what turn an AI story into something AI clients are willing to sign against.

———

5. Reading the client the way they read you

Everything in this article so far assumes the AI client standing across the table from you will still be there in three years. That assumption deserves scrutiny.

AI tenants have spent sections 1 through 4 reading your facility for risk. The question this article has not yet asked is what you are doing to read them with the same discipline. Because the clients who are most demanding – the ones whose requirements are driving your most expensive design decisions right now – are also the most exposed to their own cycle. A hyperscaler in an AI buildout is not the same credit risk as a hyperscaler running mature cloud infrastructure. Their demand is real, but it is upstream-dependent: on GPU supply, on model economics, on investor appetite for AI capex, and on whether the specific workloads they are scaling in 2025 still require the same infrastructure in 2028.


The trap for operators is that the more precisely you optimise for these clients – liquid cooling, high floor loading, in-hall ESS, the full specification – the harder your facility becomes to re-tenant if they pull back. You have built something ideal for a narrow band of clients at a particular moment in a technology cycle. Conventional enterprise IT will not easily backfill those halls; the economics are different, the density assumptions are different, and the fit-out is already wrong. The next generation of AI clients may have moved to different density profiles and cooling topologies. You are most exposed at exactly the point where you feel most successful: full occupancy with premium AI tenants on terms that justified the capex.


The concentration risk sits inside this dynamic. Many operators in Southeast Asia are currently building or converting capacity around a small number of hyperscalers and large model companies — in Johor’s industrial parks, Batam’s free-trade zones, Thailand’s Eastern Economic Corridor, and within Singapore’s tightly governed allocations. When those clients are expanding, they absorb capacity fast and pay premium rates. But AI investment cycles do not run on the same rhythm as traditional IT demand. They are driven by model generations, training runs, and the capital markets appetite for AI infrastructure. When a cycle turns – when a training programme completes, a model ships, funding cools, or the next generation of hardware demands a different facility spec – clients who were absorbing megawatts aggressively can go quiet with very little warning. The operator who has concentrated design decisions, capital, and commercial terms around one or two such clients has made a bet that may be invisible in a standard tenancy analysis but is very visible once re-letting begins.


The practical counter-moves are real and within reach. They are worth thinking about at three levels.


At the design level, the most durable facilities are those whose core structure and power paths can serve AI loads without being irreversibly committed to them. That means building floor loading, power distribution, and structural grids to AI-capable standards while keeping cooling topology flexible enough to serve conventional IT at lower density if needed. Not every hall needs to be fully liquid-cooled on day one; some halls designed to a high base specification but not yet fitted out for direct-to-chip cooling retain optionality that fully committed halls do not.


At the commercial level, lease structures that price in cycle risk are more valuable than headline rates that assume a single tenant type for the full term. Longer minimum terms with meaningful penalties, step-down provisions that reflect changing density requirements over time, and explicit re-letting cost sharing are all levers that experienced operators are already using in more mature markets. In Southeast Asia, where many AI leases are still being written on relatively straightforward terms, there is room to be more deliberate.


At the investment and governance level, the distinction between an AI-capable facility and an AI-dependent one is where investor risk actually lives, and most investment memos in the region do not yet make it clearly. An AI-capable facility can host demanding workloads but can also be repositioned at manageable cost if demand shifts. An AI-dependent facility has made design, capex, and commercial commitments that are very difficult to reverse without significant write-down. Co-investors and boards asking whether AI data center investment is too risky are often really asking which side of this line their project sits on, and the honest answer requires more granularity than “AI-ready” provides.
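One way to make the capable-versus-dependent line concrete is a back-of-envelope expected-downside comparison. Every figure below is invented for illustration; the point is only that the same demand-shift probability multiplies very different loss numbers on each side of the line:

```python
def expected_downside(p_demand_shift: float,
                      reposition_cost: float,
                      write_down: float) -> dict:
    """Expected cost if the AI demand cycle turns before re-letting.
    An AI-capable hall pays a repositioning cost; an AI-dependent hall
    takes the larger write-down. All inputs are hypothetical."""
    return {
        "AI-capable": p_demand_shift * reposition_cost,
        "AI-dependent": p_demand_shift * write_down,
    }

# Illustrative only: a one-in-four chance the cycle turns, $5M to
# reposition a flexible hall versus a $40M write-down on a fully
# committed one (per-hall, invented numbers).
costs = expected_downside(0.25, 5_000_000, 40_000_000)
```

The arithmetic is trivial on purpose: the governance question is not the multiplication but whether the memo states which of the two loss numbers actually applies to the project.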


In technical due diligence work across this region, the power question has consistently been the most productive place to start — not because it is the most complex, but because it is where the gap between what a developer presents and what the ground can actually support is widest and most consequential. A site whose power position depends on permits still in flight, grid upgrades not yet funded, or utility commitments made by authorities without full jurisdiction is not just a delivery risk. It is a signal about how the rest of the investment thesis has been assembled. That same gap is where cycle risk first becomes visible: if a developer has been optimistic about power, the probability is high that they have also been optimistic about tenant demand, re-letting scenarios, and the durability of the design choices made to attract the most demanding clients on the current wave.


AI tenants are experts at reading your facility for concentration risk — which halls they will trust, which layouts create single points of regret, which operators are genuinely AI-ready and which are positioning. The question is whether you are reading their business, their cycle exposure, and their re-tenanting scenario with the same rigour they are applying to your halls. The operators and investors who do that work before signing will own assets that perform through the cycle. Those who do not will discover the answer at the next lease renewal.

———

Looking ahead to Part 4

We return to one of the most sensitive pieces behind the campus map discussed in this article: batteries and ESS. Battery Strategy as an Asset and Infrastructure Decision will look at how chemistry choices, room layouts, in-hall ESS, and regulatory constraints shape which halls can support which AI tenants – and why those are not just engineering details, but board- and leasing-level decisions.
