Data Center World Interview: Density, Efficiency and Economy of Scale

In recent years, the conversation in the data center space has been shifting to density, efficiency, and supporting economy of scale in a sustainable manner. Check out this discussion between Bill Kleyman, EVP of Digital Solutions at Switch, and Eric Jensen, VP/GM of Data Aire, which focuses on how increasing densities are impacting data centers.

Current Trends in Precision Cooling

Bill Kleyman:
It’s fascinating, Eric, to look at what’s been happening in the data center space over the past couple of years, where the importance and the value of the data center community has only continued to increase.

And a big part of the conversation, something that we’ve seen in the AFCOM reports published on Data Center Knowledge, is that we’re not really trying to build massive new facilities. We’re trying to fill the buckets that we already have.

And so, the conversation is shifting to density and efficiency: being able to support an economy of scale, but also being able to support it in a much more sustainable manner.

So that’s where we really begin the conversation. But Eric, before we jump in, for those who might not be familiar, who or what is Data Aire?

Eric Jensen:
Data Aire provides environmental control solutions for the data center industry, specifically precision cooling solutions, through a variety of strategies, whether DX or chilled water, and in lots of different form factors, from one ton to many tons.

Bill Kleyman:
Perfect. You know, the conversation around density, I’ve been hopping around some of these sessions here today at Data Center World, and I’m not going to sugarcoat it, it’s been pretty prevalent, right? It’s almost nonstop. And you know, we’re going to talk about what makes cooling cool. You see what I did there? Since I’m a dad, I can do those dad jokes. Thanks to everyone listening out there.

In what ways have you seen densities impact today’s data centers?

Eric Jensen:
So, I think the way that you opened up the discussion, with filling the buckets, is exactly right. There are opposing forces happening right now in the data center industry. It seems that while big facilities are getting bigger, there are also architectures that are trying to shrink footprints. As a result, densities are increasing. And a lot of what we see, what hits the news or what is fun to talk about, are the people who are doing high-performance compute: 50, 70, 100 kW per rack. Those applications are out there.

But traditionally, the data center world for many years was two to four kW per rack…

Bill Kleyman:
Very true.

Eric Jensen:
And now that is increasing. Data Aire has seen this rise in density, and I think it’s backed up by AFCOM’s State of the Data Center Report. Other reliable sources have corroborated the same thing: densities are higher.

They’re higher today than they were previously, and that’s posed some other challenges. We’re now looking at maybe eight to 12 kW, and people are designing for the future, which makes sense.

Nobody wants to get caught unawares three, five years down the road. So, it’s understandable to want to design for 12, 15 kW per rack. But the reality for many operators is still in that 6, 8, 10, 12 range, and so how do you reconcile that? And that range is happening for a number of different reasons: either because of the scaling of a deployment over time as it gets built out, or because of the tenant’s type of business or the seasonality of their business.

Bill Kleyman:
You brought up a really good point. I really like some of those numbers you threw out there. The 2021 AFCOM State of the Data Center Report, which every AFCOM member has access to, points out what you said: that the average rack density today is between 7 and 10 kilowatts per rack. And then some of those hyperscalers are at 30, 40, 50, 60 kilowatts. Talk about liquid cooling, where they’re pushing triple digits, and you start to have a really interesting conversation.

You said something really important in your last answer. Can you tell me how this impacts scale? The entire concept of doing more with less, filling the buckets, but still needing the environment and ecosystem to scale?

Density, Efficiency and Economy of Scale

Eric Jensen:
Of course. So, you still have to satisfy the density of load, and it is achievable in the same kinds of traditional ways. However, it’s important to keep up with those form factors and that technology.

So, whether you’re talking about chilled water for some facilities or DX, refrigerant-based solutions for other types of facilities, both can achieve scale with traditional perimeter cooling methodologies, without the need to completely rethink the way that you manage your data center and the load coming from those servers.

Chilled water solutions are doing it today because those systems are getting much larger at the cooling unit level; that’s satisfied simply by higher CFM per ton.

That greater airflow delivery per ton of cooling is extremely achievable without the need to dramatically alter the way you operate your data center, which is really important nowadays because every operator is in transition mode. They are transitioning their IT architecture, their power side, and also their cooling infrastructure.
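As a quick aside for readers, the CFM-per-ton figure Eric refers to falls out of the standard sensible-heat airflow equation, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). Here is a minimal Python sketch of that arithmetic, using textbook constants rather than anything specific to Data Aire’s equipment:

```python
# Minimal sketch: sensible-heat airflow sizing, using the standard
# HVAC relationship Q [BTU/hr] = 1.08 * CFM * delta_T [deg F].
# The 1.08 factor bundles air density and specific heat at sea level.

BTUH_PER_TON = 12_000  # 1 ton of cooling = 12,000 BTU/hr
BTUH_PER_KW = 3_412    # 1 kW of IT load = ~3,412 BTU/hr

def cfm_per_ton(delta_t_f: float) -> float:
    """Airflow (CFM) needed to move one ton of heat at a given
    supply/return temperature difference."""
    return BTUH_PER_TON / (1.08 * delta_t_f)

def cfm_for_rack(rack_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to carry away a rack's full heat load."""
    return rack_kw * BTUH_PER_KW / (1.08 * delta_t_f)

# A tighter delta-T demands more airflow per ton, which is why
# higher-density rooms push toward higher-CFM-per-ton cooling units.
for dt in (25, 20, 15):
    print(f"dT={dt}F: {cfm_per_ton(dt):.0f} CFM/ton; "
          f"a 10 kW rack needs {cfm_for_rack(10, dt):.0f} CFM")
```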

It’s very doable now, as long as you are engineering to order. With chilled water solutions, multi-fan arrays are very scalable, and you can scale delivery from 25 to 100 percent, depending on whether you are scaling over the life of the buildout or scaling back with the seasonality of the business for whoever the IT consumer is.
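For readers wondering why that 25 to 100 percent turn-down matters so much, the usual explanation is the fan affinity laws: for an ideal fan, airflow scales linearly with speed while power scales with the cube of speed. The sketch below shows the idealized case only; real multi-fan arrays save less once duct static pressure and motor losses are counted, so treat it as an upper bound, not a claim about any particular product:

```python
# Idealized fan affinity laws: flow ~ speed, power ~ speed^3.
# Real-world savings are smaller once static pressure is accounted for.

def fan_power_fraction(flow_fraction: float) -> float:
    """Ideal shaft power as a fraction of full-speed power."""
    return flow_fraction ** 3

for flow in (1.00, 0.75, 0.50, 0.25):
    print(f"{flow:.0%} airflow -> ~{fan_power_fraction(flow):.1%} of full fan power")
```

At 25 percent airflow, ideal fan power drops to roughly 1.6 percent of full power, which is why matching cooling delivery to the actual load is such a large efficiency lever.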

And if it’s DX, refrigerant-based solutions, that’s achievable in a good, better, best scenario. Good did the job back in the two-to-four-kilowatt-per-rack days. However, nowadays, variable speed technologies are out there, and they can scale all the way from 25 to 100 percent, just like chilled water.

What we’re seeing at Data Aire is that a lot of systems designed at the facility level are dual cooling. Dual cooling affords redundancy of the infrastructure, and in the data center world, we like to see redundancy. But it also introduces the opportunity for economization.
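To illustrate the economization idea for readers, the basic move is to let cool outdoor conditions carry some or all of the load instead of compressors. The following is a deliberately simplified, hypothetical sketch of that supervisory decision; the thresholds, mode names, and function are illustrative only, and real controls also weigh humidity, redundancy state, and ramp rates:

```python
# Hypothetical, simplified dual-cooling mode selection. All names and
# thresholds here are illustrative, not any vendor's actual control logic.

def select_cooling_mode(outdoor_f: float, supply_setpoint_f: float,
                        approach_f: float = 5.0) -> str:
    """Pick a cooling mode from outdoor air temperature alone."""
    if outdoor_f <= supply_setpoint_f - approach_f:
        return "economizer"          # free cooling can carry the load alone
    if outdoor_f <= supply_setpoint_f:
        return "partial-economizer"  # free cooling assists mechanical cooling
    return "mechanical"              # compressors/chillers carry the load

for t in (40, 62, 85):
    print(f"{t} F -> {select_cooling_mode(t, supply_setpoint_f=65)}")
```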

Bill Kleyman:
You said a lot of really important things. Specifically, you said that we are in a transition.

I want everyone out here in the Data Center World live audience, and everyone listening to us virtually, to understand that we are in a transition. We genuinely are experiencing a shift in the data center space, and this is a moment for everybody, I think, to reflectively ask: what does that transition look like for me? Am I trying to improve operations? Am I trying to improve efficiency? And does this transition need to be a nightmare?

From what you said, it really doesn’t. And that brings me to this next question.

We’ve talked about scalability. We’ve talked about how this differs across different kinds of cooling technologies and different kinds of form factors. And obviously, all these things come into play.

So, what new technologies are addressing these modern concerns and transitions?

Eric Jensen:
For what we see in the industry, those new technologies are less a matter of form factor or function and much more at the elemental level. So, what we’re working on, I can only speak so much to…we’re working on nanotechnologies right now. We’re bringing it down to the elemental level, and that’s going to be able to mimic the thermal properties of water with non-water-based solutions.

Bill Kleyman:
You’re working on nanotechnology?

Eric Jensen:
Yes, sir.

Bill Kleyman:
And you just tell me this now at the end of our conversation?

Well, if you want to find out more about nanotechnology and what Data Aire is doing with it, please visit dataaire.com. Pick up the phone and give someone at Data Aire a call; I know we might not do that as often as we could. I’m definitely going to continue this conversation with you and learn more about the nanotech you’re working on, but in the meantime, thank you so much for joining us again.