
How AI is Driving Data Center Infrastructure Evolution

Artificial intelligence workloads are reshaping data center infrastructure in ways the industry couldn’t have anticipated, and the transformation is still unfolding rapidly. With large-scale deployments only now emerging, each implementation teaches operators more about how to run data centers for these demanding workloads.

The challenge begins with fundamentally different power demands. Until now, most enterprise applications drew relatively steady power while running email servers, hosting websites, serving real-time digital experiences, and processing customer transactions.

AI changes this equation completely. These workloads oscillate intensely as neural networks dynamically allocate computational resources across thousands of processing cores and activate different pathways during training cycles or inference tasks.

The result? Power consumption that can fluctuate by more than 50% in mere microseconds. This creates violent power spikes that rise and fall with each computational task and happen much faster than traditional infrastructure was designed to handle.

Power Infrastructure Under Pressure

The consequences can be immediate and expensive. When AI workloads spike power consumption in microsecond bursts, uninterruptible power supply (UPS) systems can fault and dump the load directly onto backup batteries.

This constant micro-cycling can be devastating to the batteries involved, potentially cutting battery lifespan by 50% or more and turning predictable three-to-five-year replacement cycles into 18-month emergency replacements. In response, UPS vendors are actively exploring capacitors and other alternatives, creating an evolving landscape of innovation as the industry learns how to address these unique challenges.
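The arithmetic behind that shortened replacement cycle is straightforward: service life falls roughly in proportion to how often the batteries are cycled. A minimal sketch, where the cycle rating and event frequencies are illustrative assumptions rather than vendor figures:

```python
def battery_life_years(rated_cycles, cycles_per_day):
    """Crude service-life estimate: the battery string's rated
    discharge-cycle count divided by cycles consumed per year."""
    return rated_cycles / (cycles_per_day * 365)

# Hypothetical UPS battery string rated for ~700 shallow cycles:
print(battery_life_years(700, 0.5))  # occasional utility transfer events
print(battery_life_years(700, 1.3))  # frequent AI-induced microbursts
```

Under these assumed numbers, halving-plus of cycle frequency alone moves the estimate from roughly four years to roughly a year and a half, consistent with the replacement timelines described above.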

For data centers already operating with constrained budgets, these unexpected maintenance costs and the need for infrastructure upgrades can create overwhelming financial pressure.

The Cooling Conundrum: When Minutes Become Seconds

The cooling challenges are equally dramatic. Everything changes when power density hits approximately 40 kilowatts per rack—the practical limit for traditional air cooling. Beyond that threshold, liquid cooling becomes mandatory.

Yet AI deployments aren’t stopping at 40 kW. Industry roadmaps show NVIDIA and other vendors targeting 600 kilowatts to one megawatt per rack. At those densities, the vast majority of each rack’s heat will require direct-to-chip liquid cooling.
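The 40 kW threshold follows from basic thermodynamics: cooling with air means moving enough air volume to carry the heat away. A minimal sketch, assuming typical air properties and a 15 K supply-to-return temperature rise (both illustrative assumptions):

```python
def required_airflow_m3s(power_w, delta_t_k=15.0, rho=1.2, cp=1005.0):
    """Volumetric airflow (m^3/s) needed to remove power_w watts of
    heat when the air warms by delta_t_k kelvin: P = rho * cp * dT * Q."""
    return power_w / (rho * cp * delta_t_k)

for kw in (10, 40, 600):
    flow = required_airflow_m3s(kw * 1000)
    print(f"{kw:>4} kW rack -> {flow:5.1f} m^3/s (~{flow * 2119:,.0f} CFM)")
```

A 600 kW rack needs fifteen times the airflow of a 40 kW rack under the same temperature rise, far beyond what fans and raised-floor plenums can practically deliver through a single rack, which is why the heat must move into liquid instead.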

This triggers a domino effect of infrastructure challenges. Existing buildings often lack the physical piping infrastructure needed for liquid-to-chip cooling systems. Retrofitting buildings with chilled water systems, coolant distribution units, and supporting infrastructure represents a massive capital expense that exceeds many facilities’ budgets, while others just don’t have the space for such extensive upgrades.

Perhaps most critically, the margin of error has collapsed. Traditional data center workloads gave operators 20 to 30 minutes to respond to cooling system failures before reaching critical temperatures. High-density AI workloads compress that window to two minutes or less. The thermal runaway risk is so severe that cooling systems now require their own backup power, thermal storage tanks acting as batteries for cooling, and unprecedented levels of redundancy.

Even when operators try to build adequate thermal storage, physics works against them. Each additional two minutes of cooling ride-through demands a substantial increase in water tank volume, and most existing facilities simply don’t have the physical space for storage volumes that would provide meaningful protection.
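The tank-sizing math shows why. Stored cooling is just energy, power multiplied by ride-through time, absorbed by water over a usable temperature rise. A minimal sketch, where the 5 MW hall size and the 10 K temperature window are illustrative assumptions:

```python
def tank_volume_m3(power_w, ride_through_s, delta_t_k=10.0,
                   rho=1000.0, cp=4186.0):
    """Chilled-water volume (m^3) needed to absorb power_w watts of
    heat for ride_through_s seconds across a delta_t_k usable rise."""
    energy_j = power_w * ride_through_s
    return energy_j / (rho * cp * delta_t_k)

# Hypothetical 5 MW AI hall: water volume per increment of backup cooling
for minutes in (2, 4, 10):
    v = tank_volume_m3(5e6, minutes * 60)
    print(f"{minutes:>2} min ride-through -> {v:5.1f} m^3 of water")
```

Even a modest ten-minute buffer for a single 5 MW hall lands in the range of a backyard swimming pool of chilled water, and the requirement scales linearly with both hall power and ride-through time.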

These infrastructure requirements would have seemed excessive just a few years ago and may still seem excessive to those who haven’t experienced a cooling failure in a high-density AI environment.

Building for the AI Future: Strategic Infrastructure Solutions

Rather than being caught off guard, innovative data center operators are already planning and building the specialized infrastructure that AI workloads demand.

The industry response is moving on multiple fronts simultaneously: making hard economic decisions about retrofits, designing purpose-built AI facilities, and pioneering modular approaches that deliver flexibility without sacrificing efficiency.

The economics of retrofitting existing infrastructure are driving operators toward a different approach. While some facilities can accommodate AI workloads with targeted upgrades, the real innovation is happening with purpose-built facilities designed from the ground up for AI requirements.

These new data centers are optimized specifically for AI workload characteristics, incorporating power delivery systems engineered to handle rapid fluctuations and cooling architectures that scale from air to high-density liquid cooling as needed. The designs prioritize modularity and flexibility, allowing operators to configure spaces for different AI applications without traditional one-size-fits-all constraints.

The performance gains are significant. These facilities achieve higher computational density per square foot, more efficient power utilization, and cooling systems that respond dynamically to workload demands while incorporating redundancy designed specifically for AI’s shortened response times.

Modular Construction: Building as You Grow

Modular design is emerging as a key sustainability and economic strategy. Rather than constructing massive facilities all at once, operators can now build incrementally based on actual demand. Standardized components deliver economies of scale, similar to apartment building construction where identical units drive down per-unit costs. This also allows operators to pack more computing power into buildings without expanding physical footprints to city-block proportions.

From a sustainability perspective, this represents a practical balance. While reusing existing infrastructure would theoretically be more environmentally friendly, legacy data centers simply cannot handle modern AI workload requirements cost effectively. The new modular designs are inherently more efficient and reduce cooling dependencies, creating long-term environmental benefits that offset the initial construction impact.

Rethinking Efficiency: More Computing, Not Just More Power

Perhaps most importantly, the industry is beginning to properly measure the true efficiency gains that AI workloads deliver. While it’s true that AI systems draw more power than traditional computing, they deliver significantly more computational capability per kilowatt consumed. Graphics Processing Units (GPUs) optimized for AI workloads can process far more operations using the same energy as traditional CPUs handling equivalent tasks.
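That per-kilowatt comparison can be made concrete. The figures below are deliberately round, order-of-magnitude assumptions rather than vendor specifications, but they illustrate why throughput per watt, not raw power draw, is the meaningful efficiency metric:

```python
def ops_per_watt(peak_ops_per_s, power_w):
    """Computational efficiency: operations per second per watt,
    i.e. operations delivered per joule of energy consumed."""
    return peak_ops_per_s / power_w

# Illustrative, order-of-magnitude figures (not vendor specs):
cpu = ops_per_watt(2e12, 300)   # ~2 TFLOPS server CPU at ~300 W
gpu = ops_per_watt(1e15, 700)   # ~1 PFLOPS AI accelerator at ~700 W
print(f"GPU delivers ~{gpu / cpu:.0f}x more operations per watt")
```

Under these assumed numbers the accelerator draws more than twice the power yet delivers two orders of magnitude more work per joule, which is the sense in which higher absolute draw can still mean better efficiency.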

This efficiency paradox creates powerful business incentives for continued optimization. Data center operators profit from maximizing the computational output per unit of power consumed, naturally aligning business objectives with energy efficiency goals. As operators deploy more AI-optimized infrastructure, they’re achieving better overall energy utilization compared to traditional computing approaches.

The Infrastructure Evolution Ahead

The transformation of data center infrastructure for AI represents both an immediate challenge and a long-term opportunity. Data center operators succeeding in this transition understand that AI workloads demand fundamentally different approaches: from managing microsecond power fluctuations and cooling response windows of under two minutes to building purpose-built facilities with modular designs that scale incrementally.

The future won’t be exclusively AI-driven since traditional workloads will persist because AI infrastructure remains expensive. However, enterprises are strategically shifting portions of their operations to AI as they recognize efficiency gains that offset higher infrastructure costs. The data centers that emerge will be more efficient, more flexible, and better positioned for whatever computational demands come next.

About the Authors


Joe Minarik

Chief Operating Officer
Joe Minarik is DataBank's Chief Operating Officer and is responsible for all data center operations, engineering, construction, managed services, and IT operations.

Jenny Gerson

Senior Director of Sustainability
Jenny Gerson, Senior Director of Sustainability at DataBank, leads ESG initiatives, aiming for net zero scope 1 and 2 emissions by 2030. With 20+ years in sustainability and 10+ in data centers, she specializes in corporate sustainability, cleantech research, and environmental management.
