Data Center Knowledge published a bylined article from DataBank COO Joe Minarik examining power supply challenges as data centers scale to meet demands from high-performance computing and artificial intelligence workloads. Power requirements have escalated dramatically: AI applications and HPC clusters now require 50 to 100kW per cabinet, compared with just 10 to 15kW for typical workloads a year or two ago.
Individual HPC systems can consume up to 13 megawatts, while exascale systems draw 25 megawatts or more. Supercomputers can draw up to 30MW, consuming as much power as a small city. If Google applied AI to its nine billion daily searches, the company would require 29.2 terawatt-hours annually, equivalent to Ireland's total electricity consumption.
“The demand for HPC resources and generative AI is sure to increase. So how will data centers answer the call? The answer lies in addressing both sides of the supply and demand equation.”
— Joe Minarik, COO of DataBank
Minarik outlines solutions including closer collaboration between data center operators and utility partners, on-site power generation through substations and renewable sources, expedited permitting for transmission lines, and advanced liquid cooling technologies. He emphasizes that enterprises deploying HPC and AI workloads should seek colocation partners with financial stability, operational discipline, and proven track records in high-density power environments.
For the complete analysis, read the article now.