
Why Enterprise AI Infrastructure is Going Hybrid – and Geographic

  • Updated on October 15, 2025
  • 5 min read

The data and insights featured below come from DataBank’s latest research report, “Accelerating AI: Navigating the Future of Enterprise Infrastructure,” which focused on enterprise AI adoption, ROI, and infrastructure challenges.

In our previous articles, we explored how enterprises are achieving ROI from AI and what’s blocking those that haven’t gotten there yet. Now, in this third article in the series, we’ll examine where companies are actually running their AI workloads – and how that’s about to change dramatically.

The infrastructure choices organizations make for AI workloads have significant implications for performance, cost, security, and compliance. Our survey reveals that while cloud remains important, a major shift toward hybrid and distributed infrastructure is already underway.

Today’s Reality: Cloud Dominates, But Not Exclusively

Currently, nearly two-thirds (64%) of AI workloads run in either public cloud (49%) or private cloud (15%) environments. Another 15% favor third-party SaaS/web-based platforms like Salesforce.

On-premises and colocation data centers represent a much smaller portion today – just 22% of AI workloads according to our survey (15% on-premises/company-controlled and 7% colocation data centers).

This makes sense. Public cloud and SaaS solutions offer easy starting points for AI adoption, with minimal upfront investment and quick deployment times. However, as AI implementations mature, the limitations of a cloud-only approach become apparent.

Tomorrow’s Strategy: The Hybrid Shift

Looking ahead five years, 96% of respondents expect their AI infrastructure distribution to change. Only 4% report “no significant changes planned.”

Over half of respondents are planning substantial expansions in physical infrastructure:

  • 31% will build more AI-dedicated private/on-premises data centers
  • 22% plan to expand colocation data center deployments

Meanwhile, 43% still expect to increase their reliance on cloud for AI workloads, confirming that the future isn’t about choosing between cloud and physical infrastructure. It’s about strategically combining both.

What’s Driving the Hybrid Approach?

When we asked which factors are most critical in choosing between cloud and colocation for AI workloads, three priorities emerged:

  1. Security and compliance requirements (37% rated this most critical): Colocation facilities like those operated by DataBank provide robust physical security features, including 24/7 surveillance, restricted access controls with biometric scanners and security guards, and secure equipment cabinets. They are also certified against industry and regulatory compliance standards such as ISO 27001, SOC 1/2, and PCI DSS.
  2. Performance considerations (33%): Low-latency requirements for advanced AI applications demand infrastructure closer to where data is generated and consumed.
  3. Scalability and flexibility (27%): Different workloads have different needs; hybrid approaches allow organizations to optimize each application’s infrastructure placement.

As Philips’ Chief Innovation Officer Shez Partovi noted in the research process: “In a de-globalized world, there is an increasing need to ensure that data is housed and processed in compliance with the specific country or jurisdiction, which is leading to a more decentralized approach.”

Geographic Distribution: AI Is Going Local

Perhaps the most striking finding in our survey is the extent to which AI is driving geographic expansion of infrastructure. Over three-quarters of respondents (76%) expect their infrastructure to expand geographically over the next five years:

  • 44% need expansion to be closer to data sources
  • 32% need expansion to be closer to end-users

Only 11% anticipate consolidation into fewer, but larger, data hubs, while just 13% expect no significant impact on geographic distribution.

This geographic dispersion addresses multiple needs simultaneously. For compliance, it enables organizations to meet data sovereignty requirements by storing and processing data within specific countries or jurisdictions. For performance, it reduces latency for real-time AI applications like autonomous vehicles, smart city sensors, and industrial IoT.
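
To see why proximity matters so much for real-time inference, consider the physics alone. The short Python sketch below estimates the theoretical minimum round-trip time over fiber at a few illustrative distances; the distances, and the assumption that signals travel at roughly two-thirds the speed of light in fiber, are our own illustrative figures, not survey data.

```python
# Back-of-the-envelope estimate of network round-trip time (RTT) over fiber,
# illustrating why inference capacity is moving closer to users and data.
# Assumes signals travel at roughly two-thirds of c in optical fiber and
# ignores routing/queuing overhead, so real-world RTTs are higher.

SPEED_OF_LIGHT_KM_S = 299_792   # vacuum speed of light, km/s
FIBER_FACTOR = 0.67             # assumed refractive-index penalty in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time in milliseconds for a fiber path."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000

for label, km in [("same metro", 50), ("regional hub", 800), ("cross-country", 4000)]:
    print(f"{label:>13}: ~{min_rtt_ms(km):.1f} ms minimum RTT")
```

Even this best-case math puts a cross-country round trip near 40 ms before any processing happens, which is why applications like autonomous vehicles and industrial IoT push inference toward local infrastructure.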

“We operate in every corner of the world and there are obviously sovereign data requirements,” said Chris Bedi, ServiceNow’s Chief Customer Officer. “To make sure we are serving customers in each region, we need to have a geographically dispersed data center strategy.”

Training vs. Inference: Different Workloads, Different Strategies

Interestingly, while inference is becoming more distributed, AI training is moving in the opposite direction. Nearly three-quarters of respondents (73%) said that “training will be centralized while inference will be more distributed.” This split makes technical sense.

Training requires massive computational power and benefits from centralized, GPU-rich environments. Inference, by contrast, needs to happen close to users and data sources to minimize latency and ensure compliance with local regulations.
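
As a minimal sketch of how that split might look in practice, the Python below routes training jobs to a single GPU-dense hub while keeping inference in-region. The region names, workload attributes, and routing rules are illustrative assumptions, not part of the survey.

```python
# Minimal sketch of the "centralized training, distributed inference" pattern.
# Region names and routing rules are illustrative assumptions.

from dataclasses import dataclass

CENTRAL_TRAINING_REGION = "us-central-gpu"                 # GPU-dense hub
INFERENCE_REGIONS = {"us-east", "eu-west", "apac-south"}   # edge-adjacent sites

@dataclass
class Workload:
    kind: str                # "training" or "inference"
    user_region: str         # where requests or data originate
    sovereign: bool = False  # data must stay in the originating jurisdiction

def place(workload: Workload) -> str:
    """Pick a deployment region for a workload under this split strategy."""
    if workload.kind == "training":
        # Training is throughput-bound: consolidate on the GPU-rich hub.
        return CENTRAL_TRAINING_REGION
    # Inference is latency- and compliance-bound: keep it near the user.
    if workload.user_region in INFERENCE_REGIONS:
        return workload.user_region
    if workload.sovereign:
        raise ValueError(f"no in-jurisdiction site for {workload.user_region}")
    # Fall back to any available inference site when locality is best-effort.
    return next(iter(INFERENCE_REGIONS))

print(place(Workload("training", "eu-west")))    # -> us-central-gpu
print(place(Workload("inference", "eu-west")))   # -> eu-west
```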

Challenges in Scaling Geographically

Expanding AI infrastructure across multiple regions isn’t without challenges. When we asked about major obstacles to geographic scaling, respondents cited:

  • Security risks in distributed AI (27%)
  • Access to GPU infrastructure for training (25%)
  • Data sovereignty and compliance (23%)
  • Network and latency concerns for inference (20%)
  • Cost of infrastructure in key regions (19%)
  • Power and energy constraints (18%)

According to DataBank CEO Raul Martynek, infrastructure readiness varies significantly by location. “It’s not just access to GPUs that matters. You also need somewhere to put them. Not all data centers can accommodate the power and cooling requirements of these infrastructures, and new facilities take time to deploy. That’s why we developed a Universal Data Hall Design for all our new data center builds. It allows us to configure each data hall more quickly in a new facility for whatever infrastructure need that market might have – from traditional air-cooled configurations capable of 15-30kW per cabinet, to liquid-cooled systems capable of 100-200kW per cabinet.”
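
The cabinet figures Martynek cites translate directly into accelerator counts. The rough arithmetic below assumes roughly 700 W per high-end training GPU and a placeholder 25% power overhead for CPUs, networking, and fans; both numbers are our assumptions for illustration, not DataBank specifications.

```python
# Rough arithmetic behind the cabinet-density figures quoted above:
# how many accelerators fit in a cabinet at a given power envelope.

GPU_WATTS = 700           # assumed per-accelerator draw (high-end training GPU)
OVERHEAD_FRACTION = 0.25  # assumed share of cabinet power for non-GPU components

def gpus_per_cabinet(cabinet_kw: float) -> int:
    """Estimate how many GPUs a cabinet's power envelope can support."""
    usable_w = cabinet_kw * 1000 * (1 - OVERHEAD_FRACTION)
    return int(usable_w // GPU_WATTS)

for label, kw in [("air-cooled low", 15), ("air-cooled high", 30),
                  ("liquid-cooled low", 100), ("liquid-cooled high", 200)]:
    print(f"{label:>17} ({kw:>3} kW): ~{gpus_per_cabinet(kw)} GPUs")
```

Under these assumptions, an air-cooled cabinet tops out at a few dozen GPUs, while a liquid-cooled one can host a couple of hundred, which is why facility design constrains where dense training infrastructure can actually go.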

The Bottom Line

The future of AI infrastructure isn’t an either/or proposition between cloud and physical data centers. It’s a strategic hybrid approach that places workloads where they perform best, meet compliance requirements, and deliver optimal cost-efficiency.

Less sensitive workloads leveraging public datasets can reside in public cloud. More sensitive applications requiring stringent security, compliance, or low-latency performance are increasingly deployed in colocation or private data centers—and those deployments are spreading geographically to be closer to the data and users that need them.
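
As a rough illustration of that placement logic, here is a minimal policy sketch in Python. The workload attributes and the 20 ms latency threshold are hypothetical choices for the example, not DataBank guidance.

```python
# Minimal sketch of a hybrid placement policy matching the logic above.
# Attribute names and thresholds are illustrative assumptions.

def choose_venue(sensitive_data: bool, regulated: bool,
                 latency_budget_ms: float, uses_public_data: bool) -> str:
    """Map workload traits to a deployment venue under a simple hybrid policy."""
    if regulated or sensitive_data:
        # Stringent security/compliance: controlled physical infrastructure.
        return "colocation or private data center"
    if latency_budget_ms < 20:
        # Tight real-time budgets favor infrastructure near users and data.
        return "colocation near the edge"
    if uses_public_data:
        return "public cloud"
    return "public or private cloud, based on cost"

print(choose_venue(sensitive_data=False, regulated=False,
                   latency_budget_ms=10, uses_public_data=True))
# -> colocation near the edge
```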

In our next article, we’ll explore how AI strategies are maturing through approaches such as blending off-the-shelf applications, custom solutions, and tailored deployment models.

This is the third in a five-part series examining key trends in enterprise AI adoption based on our 2025 AI infrastructure survey. You can read our previous posts on AI ROI findings and what’s blocking AI success, or download the full report, “Accelerating AI: Navigating the Future of Enterprise Infrastructure.”
