The Data Center Primer Book Writing and Publishing Journey Article 1 of 10 – The Ingredients and Simmering (2012–June 2024)

  • Writer: datacenterprimerja
  • Feb 26
  • 9 min read

Updated: Mar 2

The Pull that started the Book Writing Journey

James Soh

Around the end of 2023, my team was readying the third data center building on our first campus and driving foundation piles for a fourth building on a new land plot nearby. The pace and scale were accelerating. We were standardizing more, pushing harder into offsite MEP manufacturing, and shipping bigger, more modular assemblies than I had seen earlier in my career.


What really struck me was the contrast. On one side, hyperscale clients were already thinking in this kind of scale and standardization. On the other, many suppliers and specialist firms hadn’t yet felt the full impact of these shifts, and were still talking and planning as if the world was mostly enterprise data centers and small colo sites.


At the same time, it was getting harder to attract and retain people in data center operations. The facilities were getting bigger and more complex, but the pool of people with a broad, operator‑aware view was not growing fast enough.


That combination – the shift in scale, the lag in understanding among parts of the ecosystem, and the talent challenges on the ground – didn’t make the job boring. It did the opposite: it convinced me that the “simmering” data center book I had been thinking about could actually help. It could give operators, suppliers, newcomers, and non‑technical professionals a clearer map of what was happening and where they could add value. That was the push I needed to stop just thinking about the book and start writing it.

When I told a friend I was writing a data center book, his immediate reaction was: “Why not just use AI to write it?” From 2024 until now, that is the natural question. And I did try. But it fell short.


AI can generate a “data center essentials” book in a few minutes. I spent 18 months writing the Data Center Primer book and I am more than glad I did it. Let me share the reasons and the journey.


Seeing the gap up close

The answer didn’t come from a spreadsheet. It came from rooms I’ve been in for years.

Project meetings where the engineering slide deck lost half the audience in three minutes. Sales calls where promises were made that didn’t quite match what the facility could safely deliver. Training sessions where smart people from operations, vendors, and sales admitted they’d been “winging it” when it came to the physical data center.


I taught data center classes part-time across Asia between 2012 and 2014, and later, throughout my data center career, whenever I met newcomers and experienced non-technical professionals in investment, finance, or sales, I saw the same pattern repeated across regions and job roles. People weren’t short of intelligence or motivation. They were short of a coherent, operator‑aware, business‑friendly way to understand the facility they were working around.


Most books and courses either went very deep into design, or stayed very high‑level and cloud‑centric. The people in my classes sat in the middle: they needed enough physical‑layer clarity to ask good questions, challenge assumptions, and make better decisions – without becoming designers themselves.


That was the gap I kept seeing, class after class, conversation after conversation.


The AI attempt – and why it fell short

So when my friend said “just use AI,” I tried.

I fed an outline into a model and out came a 60–70 page “data center fundamentals” draft. It was competent. It had chapters, headings, bullet points, and all the right keywords. Technically, it wasn’t wrong.


But it was dead on arrival.


It read like a slightly warmer version of a vendor whitepaper: matter‑of‑fact, generic, and devoid of the small, specific stories that make things “click” for someone who actually works in or around a data center. There were no commissioning scars, no awkward conversations between sales and ops, no “we thought this design decision was clever until year three” moments.


And that’s when something important landed: the operator’s view – the view from inside the building, after the ribbon‑cutting – is still not well represented in the public material that models are trained on. If I wanted those voices in the book, they had to come from real experience, not from a prompt.


Why transformation is the key

That AI draft taught me something important: the real value is not in producing text, but in transforming it.


AI is very good at assembling and presenting what is already known. It can remix definitions, best practices, and checklists into something coherent and fast. That is useful – especially for learning and for building a first scaffold.


But transformation is different. Transformation is where a human expert decides what matters for a specific reader, in a specific context. It is choosing which details to leave out, which trade‑offs to highlight, where to slow down, and which operator story or diagram will make the idea “stick” for someone who has to use it in their work.


That is the part AI couldn’t do for me. It could generate a fundamentals booklet. It could not decide what a data center operator, salesperson, or investor most needed to hear, in what order, and with which scars attached. That is the work I chose to spend 18 months on – and it is the idea that underpins the rest of this series.


Why June 2024 was the moment to start

By mid‑2024, the external pressure had also changed.

AI and high‑density workloads were pushing power and cooling in new directions. Hyperscale and large campuses were becoming the default reference point. Teams were getting younger, more distributed, and more mixed – operations, software, cloud, real estate, finance, all in the same room.


At the same time, the usual reasons “these books don’t exist” were still true:


  • Data center work is intense; veterans are busy running sites and projects, not carving out years to write.

  • Many senior people came up through finance, enterprise IT, or design, and don’t naturally write from the operator’s floor.

  • Traditional publishing still doesn’t get excited about a niche topic like “data center operations for non‑design professionals.”


Put simply: the need was rising, but the odds of “someone else will write this” still felt low.

June 2024 was when those lines crossed for me. I stopped telling myself “one day” and treated the book as a real 18‑month project – something that would sit alongside client work, not after it.


What I actually did in the simmering years

I have kept notes since 2009 in Microsoft OneNote (which unfortunately kept having syncing issues and lost some of them; those I had to recall from memory and retype into Google Keep).


Looking back from 2012 to that point, I can see that I had been noticing trends and distilling information. Some examples:


  • Data center definitions that were too complicated to explain to non-technical professionals and newcomers.

  • The importance of sharing broad knowledge across the data center value chain, so that counterparts can better see where they can add value or find a niche combination.

  • Questions that kept recurring in different forms in training sessions, chats, and encounters. For example, I repeatedly had to explain that MEP infrastructure (equipment, cabling, pumps, piping, etc.) makes up the largest portion of a data center facility’s capital cost.

  • A Swiss-cheese model that later evolved into my Data Center Infrastructure Stack diagram, shaped by repeatedly explaining the top-down Cloud – DC Operator – Utilities layering.

  • The big change in scale that enables standardization and offsite manufacturing (OSM).


Those became the raw ingredients. I wasn’t starting from a blank page in June 2024; I was finally admitting that the last decade had been field research for a book I hadn’t yet committed to write.


How you can start capturing your own material and testing your book ideas

If you’re at a similar point – feeling a book “simmering” but not yet formalized – a few things from this stage are useful:


  • Maintain a notebook of recurring questions you receive in meetings, classes, or informal chats. Group similar questions; the largest clusters are strong candidates for future chapters. Write anywhere, and jot down an idea however rough it is – you can always come back to refine or enhance it later. If you don’t write it down, it never existed.

  • Write first. Combine related ideas as they come, but leave any major reorganisation for later when you are planning your book chapters (especially if you are writing a non‑fiction book).

  • Go online to learn how other authors gather and develop ideas for their books. Look at books in the same category as yours; study their structure, tone, and what they do well or poorly. Notice the good and the bad – and aim to do better.

  • Archive materials that worked well: slide decks, diagrams, email explanations, internal notes. Store them in one dedicated folder and label them by topic, audience type, and outcome.

  • When you test AI for your topic, compare its output against your own explanations. Highlight where your version adds context, judgment, or personal experience that the model lacks, or where the model misses what you consider a key transformation insight. That difference is what will set your content apart.


Doing some or all of the above adds up to your book content. Test these content ideas on others and observe what they remember, repeat, or act on.


What’s next in this series

This first part is about recognizing the gap, testing the AI shortcut, and deciding that the book was worth 18 months (the original plan was 10) of real work.


In the rest of this series, I’ll walk through the key stages of that journey:


  • Article 1 – The Ingredients and Simmering (2012–June 2024)

    • How the idea for The Data Center Primer formed over more than a decade.

    • The early note‑taking, training questions, and operator experience that became “ingredients.”

    • Why the book didn’t start as a clean project brief but as a long simmering process.

  • Article 2 – Dead on Arrival: What Version 0.1 Taught Me About Real Readers

    • The 60–70 page AI‑generated “data center fundamentals” draft and why it failed as a real book.

    • How reading it through the eyes of operators, sales, investors, and newcomers exposed the “average reader who doesn’t exist.”

    • The shift from straight AI output to writing from an operator’s perspective, with AI only as checker.

  • Article 3 – The Writing Environment and Modern Writers’ Tools

    • Borrowed rooms, mahjong tiles, coworking spaces, and hotel desks as the real writing environments.

    • Why the manuscript moved from OneNote to a single Word file, with Google Keep as the capture inbox.

    • How AI tools hit a context wall at book length and became chapter‑level assistants instead.

  • Article 4 – The Great Reset: Escaping the Depth Trap

    • How design, electrical, and mechanical chapters grew too deep and unbalanced the book.

    • The realisation that the problem wasn’t sentences but the kind of book I was writing.

    • The decision to reset scope and depth so the book stayed useful to its intended readers.

  • Article 5 – Mapping the Visible and Invisible Book

    • Page layout, spacing, headings, and fonts as tools for making dense material readable.

    • How diagrams, models, and the “infrastructure stack” were chosen and placed.

    • The invisible decisions (what not to diagram, what to leave to experts) that keep the book from overwhelming newcomers.

  • Article 6 – You Are Not Alone: Designers, Freelancers, and Sifus

    • The outside help I brought in: cover design, diagrams, editing, and specialist checks.

    • How I chose what to outsource and what had to remain in the operator’s voice.

    • Lessons on working with freelancers and friends without losing coherence or control.

  • Article 7 – The Gatekeeper Rejection and the Pivot to Power

    • Encounters with traditional publishing gatekeepers and what their feedback revealed.

    • Why I decided to position the book differently instead of diluting it to fit a template.

    • How this fed into my broader view of power: who gets to publish what, and on whose terms.

  • Article 8 – Repurposing Without Giving It Away

    • Designing the book so its ideas can become training, talks, and further writing.

    • How I thought about protecting IP: what appears in public posts vs what stays “book‑only.”

    • Practical approaches to reusing models and cases without exposing client or company specifics.

  • Article 9 – Owning the Artifact: Print, Formats, and Bind‑Your‑Own

    • Choices between platforms and formats for a technical, diagram‑heavy book.

    • Print options, quality considerations, and cost trade‑offs, including “bind your own” experiments.

    • Why having a physical artifact matters for readers, for training, and for my own sense of completion.

  • Article 10 – Working Like This on Your Next Book

    • How to ship an honest, technically serious book alongside a full working life.

    • The working rhythm: protecting focus, restarting after stalls, and knowing when a chapter is “enough.”

    • How I plan to use the same way of working on the next edition and the next book – and what a reader can borrow for their own project, even if they never get a sabbatical or a studio.


If this journey resonates with you and you work in or around data centers, you may find The Data Center Primer a useful companion as you guide your colleagues and staff through the planning phase, the data center build project phases, and operations. Amazon Bookstore link: https://a.co/d/0ijcZbZE
