Data Center Networks
The Wizard is a connectivity nut. Unlike many, I don't believe cooling and power are the biggest challenges of the modern data center. The true challenge is connectivity. The world is becoming global faster than anyone imagined. The world is becoming virtualized and 'cloud-i-fied' at a crazy rate. I believe that enterprises of all sizes are getting ready to move connectivity to the top of the data center selection list.
Prospective clients are getting ready to start asking questions like:
- How many total carriers do you have that are lit and passing bits in your data center?
- I'm in the Pac Rim; who are your top two Pac Rim carriers? Europe? LATAM?
- I have business in the SE US; what carriers do you have in that area with the best peering relationships and networks?
- I need a couple of 1Gig links to these three carriers while we migrate data, then turn them down; can you help me with that?
- What's your install SLA on multiple Gig IP connections? Hint: don't say 30 days.
This list goes on, but you get the point. The data center providers are chasing the cloud providers as clients, and why shouldn't they? As the cloud grows, the providers will begin to eat up data centers.
But be warned, data center providers: I predict that the buyer will stop wanting to hear this answer, "...Well these x carriers have fiber in our building. We are carrier-neutral so you can connect to anyone, we don't care, you need their phone number? And if the carrier isn't in our building, we will certainly allow them in for a modest monthly rack fee and conduit fee. I am not sure who pays for the build though, but all carriers want to be in our building." Don't laugh. I have heard that statement dozens of times over the years, and I am sure it is still being said today.
As noted in the past, The Wizard loves to collect and read white papers (mostly because I often need support for some of my opinions), and this one from HP seems to support my opening comments.
Building Virtualization-Optimized Data Center Networks.
Obviously this paper is about how HP devices will solve problems for the client, but the supporting information about data center traffic, etc. is very interesting.
In the Executive Summary the author makes this statement, "Server virtualization initiatives are reshaping data center traffic flows, increasing bandwidth densities at the server edge and pushing conventional data center networks to the brink. Hierarchical data center networks designed to support traditional client-server software deployment models can't meet the performance and scalability requirements of the new virtualized data center. Enterprises must implement flatter, simpler networks to support high-volume server-to-server traffic flows, and they must adopt new management systems and security practices to administer virtual resources and enable on-demand services."
The white paper describes a typical data center network as having three tiers, "A typical three-tier data center network is comprised of an access tier, an aggregation tier and a core tier." HP describes the core tier as, "...a layer of core switches or routers that forward traffic to an intranet, the Internet and between aggregation switches." HP and other manufacturers can do a whole lot to help with their various boxes, but those core devices end up touching the world in the data center.
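To make the three-tier point concrete, here is a toy sketch (my own illustration, not from the HP paper) counting how many switches a packet crosses between two servers in a strict access/aggregation/core tree. The numbers show why heavy server-to-server traffic pushes hierarchical designs to the brink: the further apart two servers sit in the tree, the more tiers every packet must climb.

```python
# Toy model of a three-tier data center network (access, aggregation, core).
# A server's position is given as (aggregation_id, access_id, server_id);
# these identifiers are purely illustrative.

def switches_traversed(a, b):
    """Count switches a packet crosses between two servers in a strict tree."""
    agg_a, acc_a, _ = a
    agg_b, acc_b, _ = b
    if (agg_a, acc_a) == (agg_b, acc_b):
        return 1  # same access switch: one hop through the top of the rack
    if agg_a == agg_b:
        return 3  # up to the shared aggregation switch and back down
    return 5      # all the way up through the core tier and back down

print(switches_traversed((0, 0, 1), (0, 0, 2)))  # same rack      -> 1
print(switches_traversed((0, 0, 1), (0, 1, 7)))  # same pod       -> 3
print(switches_traversed((0, 0, 1), (1, 3, 4)))  # across the core -> 5
```

Every cross-pod flow funnels through the core, which is exactly why the paper argues for flatter, simpler networks when east-west traffic dominates.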
My point is that the customer has many ways to increase performance, improve latency, and maximize uptime. One of them is to buy state-of-the-art gear from HP, as this paper would like, but access to IP at the data center is equally important. All the great HP, Cisco, etc. gear in the world doesn't solve poor carrier access in the third-party data center, which harkens back to my opening statement. These enterprises ARE buying this cool stuff, and they want to get at the Internet from their provider. Corollary to the above: I also don't think the buyer has as much patience as he did five or ten years ago. Buyers don't want to wait for builds and six-month delays. They want it now.
The crux of this paper is actually east-west traffic movement in the data center, meaning fast performance between distribution devices and servers. What I'm implying is that some or all of that data also has to move north-south (out to the Internet). You can have the most high-performance architecture moving data around INSIDE the data center, but if that data has to hit the world (um, that's like everyone, right?), then crummy connectivity undermines the investment in all the cool stuff.
The other take-away from this paper is the set of drivers behind this increase/change, "...Enterprises are deploying new software application architectures and service delivery models to improve productivity and business agility, and leveraging innovations in server technology to reduce OPEX and CAPEX. The implementation of federated applications and on-demand service delivery models and the adoption of blade servers and server virtualization solutions are reshaping data center traffic flows, increasing bandwidth densities at the server edge and pushing contemporary data center networks to the limit."
This reshaping of east and west (inside the data center) traffic will have a direct impact on the demand for north and south traffic. What's the punch line? Data center providers of all sorts should be prepared for more and more emphasis to be placed on the data center network. Business drivers, cloud/virtualization, and improvements from technology providers are going to demand it.
Twitter - @DataBankWizard