Thought Cloud Podcast

In this podcast, Eric Jensen, VP/GM of Data Aire, discusses precision cooling and how it's changing across the evolving landscape of data centers, from large hyperscale facilities to the distributed edge. Learn about energy-efficient cooling strategies for mission critical operations.

Specifically, you’ll learn about:

  • What precision cooling means in the mission critical industry
  • How ASHRAE's broadening of the operating envelopes has affected the industry
  • What cooling means for different sectors of the data center market
  • The role energy efficiency plays in the changing digital landscape
  • New strategies for cooling and energy efficiency


Brought to you by The Thought Cloud podcast, from Mission Critical.

Listen to the Podcast

 


In this blog, Eric Jensen, VP/GM of Data Aire, explores how a scalable, flexible and energy-efficient cooling infrastructure can meet density needs and differentiate your data center from the competition.

Differentiating Your Data Center with Scalable Cooling Infrastructure

There are opposing forces at work right now in the data center industry. It seems that while big facilities are getting bigger, there are also architectures that are trying to shrink footprints. As a result, densities are increasing. Part of the conversation is shifting to density efficiency: supporting an economy of scale, but doing so in a much more sustainable manner.

Follow the Customer’s Journey

For most, especially over a multi-year transition, you must be able to accommodate wide ranges within the same facility. It's about balance and getting a return out of your portfolio: striving for efficiency with technology that will benefit the company over time. The next question: what kind of cooling is needed to support your customer's journey?

While new facilities may get a lot of airtime in the news, not everyone is trying to build massive data centers. Many are trying to fill the spaces they already have. Now is the time for the data center community to reflect on what this current transition looks like for them. Are they trying to improve operations or manage efficiency, and how can this transition go more smoothly?

It's understandable to want to design for 12 to 15 kW per rack so you are prepared for the foreseeable future, but the reality for many operators is still in that 6, 8, 10, 12 kW range. So, the concern becomes one of reconciling immediate needs with those of the future.

Scalability and Flexibility Go Hand in Hand

It's important to achieve elasticity to support the next generation of customer types. The question being asked in the market today is: how do you support hot spots efficiently without burning square footage? Since you are planning for a five- or even ten-year horizon in some cases, space design needs to remain flexible. Do you keep the design adaptable to accommodate the possibility of air-side distribution or a flooded room, or the need to go back to chilled water applications for chip- or cabinet-level cooling to support a higher density level?

When we're discussing cooling infrastructure and the need to scale over time, it's important to understand that we're talking about designing for three to six times the density we've been designing for up to this point. Since computer rooms and data centers consume large amounts of power, computer room air conditioner (CRAC) manufacturers like Data Aire have dedicated their engineering teams to researching and creating the most scalable, flexible and energy-efficient cooling solutions to meet these density needs.

Uptime Institute Density Chart

It boils down to this: to meet your density outlook and stay flexible, what kind of precision cooling system can support your need to maximize server space, minimize low pressure areas, reduce costs, and save on floor space? You should be encouraged knowing that this ask is achievable in the same kinds of traditional ways, with no need to reinvent the wheel, or in this case, your environmental control system. There are a variety of solutions to be employed, whether DX or chilled water, in lots of different form factors, from one ton to many tons.

So, whether you're thinking about chilled water for some facilities or DX (refrigerant-based) solutions for other types of facilities, both can achieve scale in the traditional perimeter cooling methodologies without the need to completely rethink the way you manage your data center and the load coming from the servers. Chilled water solutions may be an option because those systems are getting much larger at the cooling unit level, satisfying the density increase simply through higher CFM per ton. Multi-fan arrays are very scalable, and you can modulate delivery anywhere from 25 to 100 percent, depending on whether you are trying to scale over the life of the buildout or scaling back to the seasonality of the business for whomever is the IT consumer.

DX solutions can be approached on a good, better, best scale. Good did the job back in the two to four kilowatt per rack days. Nowadays, however, variable speed technologies are well established, and they can scale all the way from 25 to 100 percent just like chilled water.

At Data Aire, our engineers are seeing more dual cooling systems designed at the facility level. Dual cooling affords infrastructure redundancy, which is of course important in the data center world, and it also introduces the opportunity for economization.

Density, Efficiency and Economy of Scale

The entire concept of doing more with less (filling the buckets but still needing the environment and ecosystem to scale) is playing an important role in the transition operators are facing. As for greater airflow delivery per ton of cooling, it's extremely achievable without the need to dramatically alter the way you operate your data center, which is essential because every operator is in transition mode. They are transitioning their IT architecture, their power side, and their cooling infrastructure. An efficient environment adapts to IT loads. The design horizon should keep scalable and efficient cooling infrastructure in mind to help future-proof for both known and unplanned density increases.

 

This article was originally published in Data Center Frontier.

In recent years, the conversation in the data center space has been shifting to density efficiency and supporting economy of scale in a sustainable manner. Check out a discussion between Bill Kleyman, EVP of Digital Solutions at Switch, and Eric Jensen, VP/GM of Data Aire, where they focus on how increasing densities are impacting data centers.

Current Trends in Precision Cooling

Bill Kleyman:
It’s fascinating, Eric, to look at what’s been happening in the data center space over the past couple of years, where the importance and the value of the data center community has only continued to increase.

And a big part of the conversation, something that we’ve seen in the AFCOM reports published on Data Center Knowledge, is that we’re not really trying to build massive, big facilities. We’re trying to fill the buckets that we already have.

And so, the conversation is shifting to density efficiency, being able to support an economy of scale but also the ability to support that in a much more sustainable manner.

So that’s where we really kind of begin the conversation. But Eric, before we jump in for those that might not be familiar, who or what is Data Aire?

Eric Jensen:
Data Aire provides environmental control solutions for the data center industry, specifically precision cooling solutions, through a variety of strategies that are employed, whether DX or chilled water — lots of different form factors, one ton to many tons.

Bill Kleyman:
Perfect. You know the conversation around density. I’ve been hopping around some of these sessions here today at Data Center World and I’m not going to sugarcoat it, it’s been pretty prevalent, right? It’s almost nonstop and you know we’re going to talk about what makes cooling cool. You see what I did, since I’m a dad I could do those dad jokes. Thanks for everyone listening out there.

In what ways have you seen densities impact today’s data centers?

Eric Jensen:
So, I think the way that you opened up the discussion with filling the buckets is exactly right. There are opposing forces happening right now in the data center industry. It seems that while big facilities are getting bigger, there are also architectures that are trying to shrink footprints. So as a result, densities are increasing. A lot of what hits the news, or what is fun to talk about, are the people who are doing high performance compute — 50, 70, 100 kW per rack. Those applications are out there.

But traditionally, the data center world for many years was two to four kW per rack…

Bill Kleyman:
Very true.

Eric Jensen:
And now that is increasing. Data Aire has seen this rise in density, and I think this is backed up by AFCOM's State of the Data Center Report. Other reliable sources have corroborated the same thing, which is that densities are higher.

They're higher today than they were previously, and that's posed some other challenges. Now we're looking at maybe eight to 12 kW, and people are designing for the future, which makes sense.

Nobody wants to get caught unawares three, five years down the road. So, it's understandable to want to design for 12, 15 kW per rack. But the reality for many operators is still in that 6, 8, 10, 12 range — and so how do you reconcile that? And that range is happening for a number of different reasons. It's either because of the scaling of deployment over time as it gets built out, or it's because of the tenant's type of business or the seasonality of their business.

Bill Kleyman:
You brought up a really good point. I really like some of those numbers you threw out there. So, the 2021 AFCOM State of the Data Center Report, which every AFCOM member has access to, points out what you said: that the average rack density today is between 7 and 10 kilowatts per rack. And then some of those hyperscalers are at 30, 40, 50, 60 kilowatts, or talk about liquid cooling where they're pushing triple digits, and you start to really have an interesting conversation.

You said something really important in your last answer. Can you tell me how this impacts scale? The entire concept of doing more with less, filling the buckets but still needing the environment and ecosystem to scale?

Density, Efficiency and Economy of Scale

Eric Jensen:
Of course. So, you still have to satisfy the density of load, and it is achievable in the same kinds of traditional ways. However, it's important to keep up with those form factors and that technology.

So, whether you're talking about chilled water for some facilities or DX solutions, refrigerant-based solutions, for other types of facilities, both can achieve scale in the traditional perimeter cooling methodologies without the need to completely rethink the way that you manage your data center and the load coming from those servers.

Chilled water solutions are doing it today because those systems are getting much larger at the cooling unit level; that density increase is satisfied simply by higher CFM per ton.

With regard to greater airflow delivery per ton of cooling, it's extremely achievable without the need to dramatically alter the way you operate your data center, which is really important nowadays because every operator is in transition mode. They are transitioning their IT architecture, their power side, and also their cooling infrastructure.

It's very doable now, as long as you are engineering to order. With chilled water solutions, multi-fan arrays are very scalable. And you can modulate delivery anywhere from 25 to 100 percent, depending on whether you are trying to scale over the life of the buildout or you're scaling back to the seasonality of the business for whomever is the IT consumer.

And if it’s DX solutions, refrigerant based solutions, that’s achievable from a good, better, best scenario. Good did the job back in the two to four kilowatt per rack days. However, nowadays, variable speed technologies are out there, and they can scale all the way from 25 to 100 percent just like chilled water.

What we’re seeing at Data Aire is that a lot of systems designed at the facility level are more dual cooling. And so, dual cooling affords the redundancy of the infrastructure. In the data center world, we like to see redundancy. But it also introduces the opportunity for economization.

Bill Kleyman:
You said a lot of really important things. Specifically, you said that we are in a transition.

I want everyone out here in the Data Center World live audience and everyone listening to us virtually to understand that we are in a transition. We genuinely are experiencing a shift in the data center space and this is a moment for everybody, I think, to kind of, you know, reflectively and respectively ask what does that transition look like for me? Am I trying to improve operations, am I trying to do efficiency, and does this transition need to be a nightmare?

From what you said, it really doesn’t. And that brings me to this next question.

We've talked about scalability. We've talked about how this differs across different kinds of cooling technologies and different kinds of form factors. And obviously, all these things come into play.

So, what new technologies are addressing these modern concerns and transitions?

Eric Jensen:
For what we see in the industry, those new technologies are less a matter of form or function, or form factor, and much more at the elemental level. So, what we're working on, I can only speak so much to…we're working on nanotechnologies right now. And so, we're bringing it down to the elemental level, and that's going to be able to mimic the thermal properties of water with non-water-based solutions.

Bill Kleyman:
You’re working on nanotechnology?

Eric Jensen:
Yes, sir.

Bill Kleyman:
And you just tell me this now at the end of our conversation?

Well, if you want to find out more about nanotechnology and what Data Aire is doing with that, please visit dataaire.com. Pick up the phone, give someone at Data Aire a call. I know we might not do that as often as we could. I’m definitely going to continue this conversation with you and learn more about the nanotech that you’re working on, but in the meantime thank you so much for joining us again.


Intelligent Data Center Cooling CRAC & CRAH Controls

In an ever-changing environment like the data center, it's most beneficial to have as many intelligent systems working together as possible. It's amazing to think of how far technology has come, from old supercomputers the size of four filing cabinets to present-day data centers that are pushing 1,000,000 square feet.

Managing a Data Center’s Needs

Historically, managing a data center was fairly straightforward. Amid all the growth, we find ourselves digging into the nuances of every little thing: data center cooling, power and rack space, among hundreds of other minute aspects. This is all way too much for a data center manager to manage and comprehend alone, so implementing systems that can talk to each other has become a must.

When evaluating the cooling side of the infrastructure, there are real challenges that may make you want to consider hiring a team of engineers to monitor your space constantly.

  • Most sensible room capacities vary constantly during the first year or two of build-out.
  • This creates a moving target for CRAC/CRAH systems to hit within precise setpoints, which can create a lot of concern among data center managers about hot spots, consistent temperatures and CRAC usage.
  • Just when you think the build-out is done, someone on the team decides to start changing hardware, and you're headed down the path of continuous server swap-outs and capacity changes.

It really can turn into a game of chasing your own tail, but it doesn’t have to.

Reduce Your Stress Level


Data Aire created the Zone Control controller to address the undue stress imposed on data center managers. Zone Control allows CRAC and CRAH units to communicate with each other and determine the most efficient way possible to cool the room to precise setpoints.

No longer will you or your colleagues need to continually adjust setpoints. And as previously mentioned, it's incredibly beneficial to have as many intelligent systems working together as possible. Zone Control creates open communication and dialogue between all units on the team.

CRAC & CRAH Units Should Work Together Like an Olympic Bobsledding Team

I like using a sports analogy to illustrate this idea. Just like in sports, all players on the team must know their own role, and the team is most efficient when every player does their part. As I watched the 2018 Winter Olympics, I started thinking about the similarities between a four-man bobsled team and how CRAC/CRAH units communicate through Zone Control.

Stay with me here…the bobsled team starts off very strong to get as much of a jump out of the box as possible. Then each member hops into the bobsled in a specific order. Once they reach maximum speed, all members are in the bobsled and most are on cruise control, while the leader of the team steers them to the finish line. That's Zone Control: the leader of the team.

Personal Precision Cooling Consultant

Let's get back to data center cooling. When the units first start up, they ramp up to ensure enough cooling immediately. Then, as the controls and logic get readings back from the room, units start to drop into standby mode, varying down to the needed capacity of the room. They are able to talk to each other to sense where the hotter parts of the room are, ensuring the units closest to the load are running. Once they have gone through this process of checks and balances to prove out the right cooling capacities, they go into cruise control as Zone Control continues to steer.

This creates the most efficient and reliable cooling setup possible in each individual data center. Data center managers don't need to worry about trying to find hot spots or about varying loads in the room. Zone Control is an intelligent communication control that works with CRAC/CRAH data room systems to identify the needs of the space and relay that message to the team. Think of it as your personal precision cooling consultant that always has the right system setup based on real-time capacities.

Add V3 Technology to the Zone Control Team

You can go a step further in your quest to have the most efficient environmental control system safeguarding your data center: pair Zone Control with gForce Ultra. The Ultra was designed with V3 Technology and is the only system on the market to include a technology trifecta of Danfoss variable speed compressors accompanied by an electronic expansion valve (EEV) and variable speed EC fans. gForce Ultra can vary down to the precise capacity assignments in the data room. Combine the Ultra with Zone Control and you have the smartest and most efficient CRAC system in the industry. Because of the efficiency written into its logic, Zone Control can even drop all Ultra units in a room down to 40% capacity and run them as a team in cruise control, rather than running half the units at 80%.
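To see why running every unit slowly can beat running fewer units harder, consider the fan affinity laws: fan power rises roughly with the cube of fan speed. The sketch below illustrates only that fan-power effect, with assumed values for the number of units and rated fan power; it ignores compressor behavior and real-world system effects, so treat it as a rough picture rather than a product specification.

```python
# Simplified illustration of the fan affinity law (power ~ speed^3).
# RATED_FAN_KW and UNITS are assumed values, not gForce Ultra specifications.

RATED_FAN_KW = 5.0   # assumed full-speed fan power per CRAC unit, in kW
UNITS = 8            # assumed number of units serving the room

def fan_power_kw(units_running, speed_fraction, rated_kw=RATED_FAN_KW):
    """Total fan power when `units_running` units run at `speed_fraction` of full speed."""
    return units_running * rated_kw * speed_fraction ** 3

# Two ways to deliver the same total airflow:
all_at_40 = fan_power_kw(UNITS, 0.40)        # every unit at 40% speed
half_at_80 = fan_power_kw(UNITS // 2, 0.80)  # half the units at 80% speed

print(f"All {UNITS} units at 40%: {all_at_40:.1f} kW of fan power")
print(f"{UNITS // 2} units at 80%: {half_at_80:.1f} kW of fan power")
# Same total airflow either way, but spreading the work across every unit
# draws roughly a quarter of the fan power under the cube-law approximation.
```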

If you are worrying about your data center's hot spots and CRAC usage, give us a call and we can get you set up with the most knowledgeable cooling brains around: Zone Control.

Do You Know Which Solution is Right for You?

One thing is certain: optimal data center design is a complex puzzle to solve. With all the options available, no one environmental control system can fit all situations. You must consider all the solutions and technology available to best manage assets and adapt your evolving data center.

There is a precision cooling system for whatever scenario best fits your current strategy or future goals. The only question that remains is whether you have considered each of the options with your design engineer and your environmental control manufacturer. The two need to be in sync to help you maximize your return on investment.

In most instances, if you want an environmental control system that scales with your needs, provides the lowest energy costs, and provides the most reliable airflow throughout your data center, a variable-speed system is your best solution. Nevertheless, you may be curious about what other options may suit your current application.

Precise Modulated Cooling | Greatest ROI and Highest Part-Load Efficiency

Companies need to decide on their strategy and design for it. When you know you have swings in your load (seasonal, day to day, or even from one corner of the data center or electrical room to the other), you should consider variable speed technology. A system with variable speed technology and accurate control design modulates to precisely match the current cooling load. This precision gives variable speed the highest efficiency at part load, which equates to a greater return on investment. In other words, when your data center is not running at maximum cooling load, a variable speed system will use less energy and save money.

Think of the cooling output of your environmental control system as the accelerator of a car: you can press the pedal to almost infinite positions to exactly match the speed you want to travel, so you are not wasting energy overshooting your desired speed. With a well-designed control system, you also ensure a smooth response to a change in load. Further efficiency is gained by accelerating at a rate that is efficient for the system.

Advanced Staged Cooling | Initial Costs and Great Part-Load Efficiency

If you are looking for something that offers a portion of the benefits of a variable speed system but at a reduced first cost, a multi-stage cooling system can be a good compromise. A multi-stage system will manage some applications well and can reduce overcooling of your space as it is built today. If you need greater turndown than a fixed speed system offers, then this is a good choice for you.

If you find this to be the right-now solution for you, you're in good hands. The system is more advanced than a fixed speed unit; it is developed with a level of design optimization that lets it transition in small steps. Unlike digital scroll technology, this solution, with two-stage compressors, has high part-load efficiency.

Think about the car accelerator example again; there are many positions to move the accelerator to with a multi-speed system. With two-stage compressors, the positions are precise and repeatable, meaning you can smartly change positions to prevent overshoot, and you are more likely to have a position that matches the desired speed.

Although the return on investment is better with a multi-stage system than with a fixed-speed system, the benefits are less than with a variable speed system.

Fixed-Speed Systems | Lowest Initial Cost and Lower Part-Load Efficiency

Some consider the entry point for precision cooling based on their current budget constraints. So, if you are on a tight budget and need a lower first-cost, then a fixed-speed, single-stage precision cooling system may get the job done. However, this can be short-sighted as energy consumption and costs are higher when the data center is operating at less than the maximum designed cooling load. In our experience, this seems to happen quite frequently based on what the mechanical engineer has been asked to design vs. the actual heat load of the space.

If a fixed system is applied to the car accelerator example, you see how only applying 100% throttle or 0% throttle would prevent you from getting close to a precise speed. This is clearly not as efficient as the other examples unless you want to go at the car’s maximum speed all the time.
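To make the accelerator analogy concrete, here is a small sketch comparing how much cooling each control approach delivers against a hypothetical daily load profile. The load values, the assumed 25% minimum turndown, and the assumption that a two-stage unit runs at either 50% or 100% are all illustrative, not manufacturer specifications.

```python
# Illustrative comparison of delivered cooling vs. required load for three
# control strategies. The load profile and stage sizes are hypothetical.

hourly_load = [0.35, 0.45, 0.60, 0.80, 0.95, 0.70, 0.50, 0.40]  # fraction of design load

def delivered(load, strategy):
    """Capacity fraction a unit delivers for a given load fraction."""
    if strategy == "variable":   # modulates between an assumed 25% floor and 100%
        return max(load, 0.25)
    if strategy == "two_stage":  # runs at 50% or 100%, whichever covers the load
        return 0.5 if load <= 0.5 else 1.0
    return 1.0                   # fixed speed: full output whenever it runs

for strategy in ("variable", "two_stage", "fixed"):
    overcooling = sum(delivered(load, strategy) - load for load in hourly_load)
    print(f"{strategy:>9}: total overcooling = {overcooling:.2f} design-load-hours")

# The variable speed system tracks the load almost exactly, the two-stage
# system lands in between, and the fixed speed system overshoots whenever
# the load is below design; that wasted capacity shows up as energy cost.
```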

Ramping Up Your Data Center

The needs and goals of a data center can change over time. While the initial objective may only require getting the space in running order, customers may reassess based on changing scenarios. If your data center needs to scale, you may be challenged if you haven't planned ahead with your design engineer for phased buildouts, or perhaps even varying IT load considerations that are seasonal or shift from day to day, or even hour to hour. Likewise, you may need to consider the difference between design and actual usage – whether it be too little or too much. Perhaps your IT team says they need two megawatts, or you are going to be running at 16 kW per rack. The cooling system designed may underserve your needs or may be overkill for the current state of usage. In addition, pushing your system to do more than it is engineered for can potentially accelerate the aging of your infrastructure.

Again, depending on your application, goals and business strategy, one of these three systems is right for you. The best course of action is to evaluate where you are today and then future-proof your data center with technology that can grow with you if necessary.


This article was originally published in Data Center Frontier.

 

 

What is the current state of data center rack density, and what lies ahead for cooling as more users put artificial intelligence to work in their applications?

For years, the threat of high rack densities loomed, yet each passing year saw the same 2-4 kW per rack average. That's now nudging up. While specific sectors like federal agencies, higher education, and enterprise R&D are certainly into high performance computing with 20, 80, or even 100 kW per rack, the reality today remains one of high[er] density in the realm of 8-12 kW per rack (see Uptime Institute's 2020 global survey). Cooling higher densities doesn't mean overbuilding at the risk of stranded capacity for parts of the year. The answer is load matching via software that can respond accordingly, and the infrastructure hardware to support it.

Many industries are experiencing difficulty finding enough skilled workers. What’s the outlook for data center staffing, and what are the key strategies for finding talented staff? 

Data center staffing is as challenged as many industries, if not perhaps more so. As the world becomes increasingly complex, or perhaps more accurately, specialized, specific skill sets become more precious. This challenge hits datacom at all levels, from design and construction to operations and maintenance. Amazon can pop up a distribution center in rural locales and train an unskilled workforce to perform its warehousing activities. A cloud data center going up in a remote locale needs far fewer workers, but the skills available relative to those needed per capita are much scarcer. The good news is there are organizations working hard to fix this. Cleveland Community College in North Carolina, for example, developed a first-of-its-kind curriculum for Mission Critical Operations in conjunction with 7×24 Exchange. 7×24 Exchange, with its Women in Mission Critical initiative, is also leading the way in bringing diversity to the datacom sector to enrich as well as increase the pool of candidates. Ten years ago, the average high school or college grad didn't know what a data center was. Through industry, and now educators' efforts, that's beginning to shift.

How have enterprise data center needs evolved during the pandemic? What do you expect for 2021?

The pandemic was an immediate stress test on IT, on the hardware and the software, both distributed (i.e., users) and in the data center. Many enterprises were, understandably, caught off guard. One of the most basic impacts was trying to make up for users' connectivity challenges as much as possible at the applications and at the data center. Anything that could be done at the architecture level to improve operational efficiency was needed to improve the UX. One interesting thing to watch over the next one to two years might be how enterprise architecture changes in response to a more distributed workforce long-term, as many larger organizations are choosing not to return to the office. That's leading many workers to relocate because an office commute is no longer a consideration. Does the large enterprise's need start to look more like the average consumer consuming or computing cloud content? More immediately, enterprises have quickly sought to refresh their infrastructure or just shore it up with a bit more failsafe; the old 'we can't control the universe, but we can control our response to it.'

Edge computing continues to be a hot topic. How is this sector evolving, and what use cases and applications are gaining the most traction with customers? 

The edge moves and changes shape, and maybe it always will. High tech manufacturing and healthcare are two places where the edge is evolving. High tech manufacturing and warehousing are adopting more autonomous robotic operations that need to be updated and to learn in situ. As healthcare becomes more digitally oriented, whether because of the connected devices in a modern healthcare setting or the adoption of telehealth, firmware and applications need to be reliably robust and secure in the healthcare provider's hands.

How are density, efficiency and economy of scale entering the conversation?

 

Few data centers live in a world of ‘high’ density, a number that is a moving target, but many are moving to high[er] density environments. Owners of higher density data centers often aren’t aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity when they would be better served by taking a rifle shot approach. This means understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency.

So, how do you invest in today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low pressure areas, reduce costs, save on floor space and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density is Growing: Low to Medium to High[er] and Highest

Data centers are growing increasingly dense, creating unprecedented cooling challenges. That trend will undoubtedly continue. The Uptime Institute's 2020 Data Center Survey found that the average server density per rack has more than tripled, from 2.4 kW to 8.4 kW, over the last nine years. While still within the safe zone of most conventional cooling equipment, the trend is clearly toward equipment running hotter, a trend accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands up to 40-50 kW per rack.

High[er] Density Requires Dedicated Cooling Strategies

For the sake of discussion, let's focus on the data centers that are, or may be, in the 8.4-16 kW range in the near future. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside. In fact, "Over-provisioning of power/cooling is probably [a] more common issue than under provisioning due to rising rack densities," the Uptime survey asserted.

No two data centers are alike, and there is no one-size-fits-all cooling solution. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date. Equipment in the higher density range of 8-16 kW can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hotspots to emerge.

Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack that is outfitted with multiple graphic processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakages that prevent conditioned air from reaching the top of the rack. Unused vertical space can cause hot exhaust to feed back into the equipment’s intake ducts, causing heat to build up and threatening equipment integrity.

For all these reasons, higher-density equipment is not well-served by a standard computer room air conditioning (CRAC) unit. Variable speed direct expansion CRAC equipment, like gForce Ultra, scales up and down gracefully to meet demand. This not only saves money but minimizes power surges that can cause downtime. Continuous monitoring should be put in place using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting. Cooling should also be integrated into the building-wide environmental monitoring systems.
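As one way to picture "flagging critical events without triggering unnecessary firefighting," the sketch below applies an alarm threshold with a deadband (hysteresis) to rack inlet temperature readings, so a sensor hovering near the limit does not raise repeated alerts. The threshold values and sensor names are hypothetical, not tied to any particular monitoring product.

```python
# Minimal sketch of a rack-inlet temperature alarm with hysteresis, so a
# sensor hovering near the limit does not generate a flood of alerts.
# Thresholds and sensor names are illustrative, not a vendor specification.

ALARM_ON_C = 32.0    # raise an alarm at or above this inlet temperature
ALARM_OFF_C = 29.0   # clear it only once the reading falls back to this level

def update_alarms(readings, active_alarms):
    """readings: {sensor_name: temp_C}. Mutates and returns the active alarm set."""
    for sensor, temp_c in readings.items():
        if sensor not in active_alarms and temp_c >= ALARM_ON_C:
            active_alarms.add(sensor)
            print(f"ALARM {sensor}: inlet {temp_c:.1f} C")
        elif sensor in active_alarms and temp_c <= ALARM_OFF_C:
            active_alarms.remove(sensor)
            print(f"CLEAR {sensor}: inlet {temp_c:.1f} C")
    return active_alarms

alarms = set()
alarms = update_alarms({"rack_A3_top": 33.2, "rack_B1_mid": 27.5}, alarms)  # A3 alarms
alarms = update_alarms({"rack_A3_top": 30.5, "rack_B1_mid": 27.6}, alarms)  # still active: above 29 C
alarms = update_alarms({"rack_A3_top": 28.4, "rack_B1_mid": 27.4}, alarms)  # A3 clears
```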

Working Together: Density, Efficiency and Scalability

 

A Better Approach to Specifying Data Center Equipment

The best approach to specifying data center equipment is to build cooling plans into the design early. Alternating "hot" and "cold" aisles can be created with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front discharge, upflow, and downflow ventilation can prevent heat from being inadvertently circulated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned to avoid loss of cooling.

Thinking through cooling needs early in the data center design stage for higher density data centers avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today's needs but those five and ten years from now. Modular and variable-capacity systems can scale and grow as needed.

The earlier data center owners involve their cooling providers in their design decisions, the more they'll save from engineered-to-order solutions and the less risk they'll have of unpleasant surprises down the road.

Read our whitepaper to learn about the Department of Energy's (DOE) current standards for the efficiency ratings of a CRAC.

If you’ve ever done anything even remotely related to HVAC, you’ve probably encountered ASHRAE at some point. The American Society of Heating, Refrigerating and Air-Conditioning Engineers is a widely influential organization that sets all sorts of industry guidelines. Though you don’t technically have to follow ASHRAE standards, doing so can make your systems a lot more effective and energy efficient. This guide will cover all the basics so that you can make sure your data centers get appropriate cooling.

What Are the ASHRAE Equipment Classes?

One of the key parts of ASHRAE Data Center Cooling Standards is the equipment classes. All basic IT equipment is divided into various classes based on what the equipment is and how it should run. If you’ve encountered ASHRAE standards before, you may already know a little about these classes. However, they have been updated recently, so it’s a good idea to go over them again, just in case. These classes are defined in ASHRAE TC 9.9.

  • A1: This class contains enterprise servers and other storage products. A1 equipment requires the strictest level of environmental control.
  • A2: A2 equipment is general volume servers, storage products, personal computers, and workstations.
  • A3: A3 is fairly similar to the A2 class, containing a lot of personal computers, private workstations, and volume servers. However, A3 equipment can withstand a far broader range of temperatures.
  • A4: This has the broadest range of allowable temperatures. It applies to certain types of IT equipment like personal computers, storage products, workstations, and volume servers.[1]

Recommended Temperature and Humidity for ASHRAE Classes

The primary purpose of ASHRAE classes is to figure out what operating conditions equipment needs. Once you use ASHRAE resources to find the right class for a specific product, you just need to ensure the server room climate is meeting these needs.

First of all, the server room's overall temperature needs to meet ASHRAE standards for its class. ASHRAE always recommends that equipment be kept between 18°C and 27°C (64.4°F and 80.6°F) when possible. However, each class has a much broader allowable operating range.[1] These guidelines are:

  • A1: Operating temperatures should be between 15°C (59°F) and 32°C (89.6°F).
  • A2: Operating temperatures should be between 10°C (50°F) and 35°C (95°F).
  • A3: Operating temperatures should be between 5°C (41°F) and 40°C (104°F).
  • A4: Operating temperatures should be between 5°C (41°F) and 45°C (113°F).[1]

You also need to pay close attention to humidity. Humidity is a little more complex to measure than temperature. Technicians will need to look at both the dew point, which is the temperature at which the air becomes saturated, and the relative humidity, which is the percentage of saturation at any given temperature.[2] Humidity standards for ASHRAE classes are as follows:

  • A1: Maximum dew point should be no more than 17°C (62.6°F). Relative humidity should be between 20% and 80%.
  • A2: Maximum dew point should be no more than 21°C (69.8°F). Relative humidity should be between 20% and 80%.
  • A3: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 85%.
  • A4: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 90%.[1]
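The allowable envelopes above lend themselves to a simple lookup. The sketch below encodes the class limits listed in this guide and checks a reading against them; it is a convenience example, not an official ASHRAE tool.

```python
# Allowable envelope per class, as listed in this guide:
# (min_temp_C, max_temp_C, max_dew_point_C, min_rh_percent, max_rh_percent).
ALLOWABLE = {
    "A1": (15, 32, 17, 20, 80),
    "A2": (10, 35, 21, 20, 80),
    "A3": (5, 40, 24, 8, 85),
    "A4": (5, 45, 24, 8, 90),
}

def within_allowable(ashrae_class, temp_c, dew_point_c, rh_percent):
    """Return True if a reading falls inside the class's allowable envelope."""
    t_min, t_max, dp_max, rh_min, rh_max = ALLOWABLE[ashrae_class]
    return (t_min <= temp_c <= t_max
            and dew_point_c <= dp_max
            and rh_min <= rh_percent <= rh_max)

print(within_allowable("A2", temp_c=24.0, dew_point_c=12.0, rh_percent=45))  # True
print(within_allowable("A1", temp_c=33.5, dew_point_c=12.0, rh_percent=45))  # False: too warm for A1
```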

Tips for Designing Rooms to Meet ASHRAE Data Center Cooling Standards

As you can see, ASHRAE guidelines are fairly broad. Just about any quality precision cooling system can easily achieve ASHRAE standards in a data center. However, a good design should do more than just consistently hit a temperature range. Planning the right design carefully can help reduce energy usage and make it easier to work in the data center. There are all sorts of factors you will need to consider.

Since most companies also want to save energy, it can be tempting to design a cooling system that operates toward the maximum allowable ASHRAE guidelines. However, higher operating temperatures can end up shortening equipment’s life span and causing inefficiently operated technology to use more power.[3] Carefully analyzing these costs can help companies find the right temperature range for their system.

Once you have a desired temperature set, it’s time to start looking at some cooling products. CRAC and CRAH units are always a reliable and effective option for data centers of all sizes. Another increasingly popular approach is a fluid cooler system that uses fluid to disperse heat away from high temperature systems. Many companies in cooler climates are also switching to environmental economizer cooling systems that pull in cold air from the outdoors.[3]

Much of data center design focuses on arranging HVAC products in a way that provides extra efficiency. Setting up hot and cold aisles can be a simple and beneficial technique. This involves arranging rows of racks back-to-back so the hot air that vents out the back flows in a single stream to the exit vent. You may also want to consider a raised floor configuration, where cold air enters through a floor cooling unit. This takes advantage of heat's tendency to rise, so cooling air is pulled throughout the room.[4] By carefully designing airflow and product placement, you can achieve ASHRAE standards while improving efficiency.

Data Aire Is Here to Help

If you have any questions about following ASHRAE Data Center Cooling Standards, turn to the experts! At Data Aire, all of our technicians are fully trained in the latest ASHRAE standards. We are happy to explain the standards to you in depth and help you meet these standards for your data room. Our precision cooling solutions provide both advanced environmental control and efficient energy usage.

 

 

References:

[1] https://www.chiltrix.com/documents/HP-ASHRAE.pdf
[2] https://www.chicagotribune.com/weather/ct-wea-0907-asktom-20160906-column.html
[3] https://www.ibm.com/downloads/cas/1Q94RPGE
[4] https://www.simscale.com/blog/2018/02/data-center-cooling-ashrae-90-4/


It’s vital to keep your data center environment optimal to promote peak performance.

Data center cooling is a $20 billion industry. Cooling is the highest operational cost aside from the ITE load itself. It’s also the most important maintenance feature.

There are a few data center cooling best practices that can keep your data center humming along smoothly. These practices can help you improve the efficiency of your data center cooling system and reduce costs.

It’s important to execute any changes to your data center cooling system carefully. For this reason, it’s vital to work with an experienced engineer before making any changes in a live environment.

To learn more about data center cooling best practices, continue reading.

The State of Data Center Environmental Control

Today, data center environmental control is one of the most widely discussed topics in the IT space. There's also a growing discrepancy between older data centers and new hyperscale facilities. Regardless of age or scale, however, power utilization and efficiency are critical in any data center.

It's well known that data centers are among the largest consumers of electricity around the world. Today, data centers use an estimated 1% to 1.5% of all the world's energy. What's more, energy usage will only increase as more innovations emerge. These innovations include:

  • Artificial intelligence
  • Cloud services
  • Edge computing
  • IoT

Furthermore, these items represent only a handful of emerging tech.

Over time, the efficiency of technology improves. However, those gains are offset by the never-ending demand for increased computing and storage space. Firms need data centers to store information that enables them to satisfy consumer and business demands.

Accordingly, data center power density needs will increase every year. Currently, the average rack power density is about 7 kW. Some racks run at densities as high as 15 kW to 16 kW. However, high-performance computing typically demands up to 40-50 kW per rack.

These numbers are driving data centers to source the most energy efficient cooling systems available.

What Is the Recommended Temperature for a Data Center?

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) offers an answer to this question. ASHRAE suggests server inlet temperatures between 64.4°F and 80.6°F. Furthermore, the society recommends a relative humidity between 20% and 80%.

The Uptime Institute, however, has a different opinion.

The Institute recommends an upper temperature limit of 77°F.

However, many data centers run much cooler, especially older ones. IT workers prefer to err on the side of caution to avoid overheating equipment.

Data Center Cooling Calculations

It's important to understand current conditions before making your data center cooling calculations. For example, you'll need to assess the current IT load in kilowatts. You'll also need to measure the intake temperature across your data center. This measurement should include any hotspots.

At a minimum, you want to record the temperature at mid-height. You’ll also want to record the temperatures at the end of each row of racks. Also, you should take the temperature at the top of the rack in the center of each row.

As you take measurements, record the location, temp, date, and time. You’ll need this information later for comparison.

Now, measure the power draw of your cooling unit in kilowatts. Typically, you’ll find a dedicated panel for this measurement on most units. You could also use a separate monitoring system to take this measurement.

You’ll also need to measure the room’s sensible cooling load. You’ll need to measure the airflow volume for each cooling unit for this task. Also, you’ll need to record the supply and return temperatures for each active unit.

Getting to the Math

You can determine a reasonable operating sensible capacity for each unit, in kilowatts, using the following formula:

Q sensible (kW) = 0.316*CFM*(Return Temperature[°F] – Supply Temperature[°F])/1000
[Q sensible (kW) = 1.21*CMH*(Return Temperature[°C] – Supply Temperature[°C])/3600]

Now, you can compare the cooling load to the IT load to create a point of reference.

Next, you'll make use of the airflow and return air temperature measurements. You'll need to contact your equipment vendor for the sensible capacity of each unit in kilowatts. Now, total the sensible capacity for the units that are currently in operation. This is about the simplest calculation that you'll find. If you prefer, however, you can find much more complex methods of calculation online.

Next, take the room's overall operating sensible cooling capacity from your measurements and the IT load in kilowatts. Divide the former by the latter to find the sensible operating cooling ratio. Now you have a ratio to use as a benchmark to evaluate subsequent improvements.
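As a worked example of the formula above, the sketch below computes each operating unit's sensible capacity from measured airflow and supply/return temperatures, totals them, and divides by the measured IT load to get the benchmark ratio. All of the measurement values are hypothetical.

```python
# Worked example of Q_sensible (kW) = 0.316 * CFM * (return_F - supply_F) / 1000,
# using made-up measurements for three operating CRAC units.

units = [
    {"name": "CRAC-1", "cfm": 12000, "return_f": 75.0, "supply_f": 58.0},
    {"name": "CRAC-2", "cfm": 12000, "return_f": 73.5, "supply_f": 58.5},
    {"name": "CRAC-3", "cfm": 10000, "return_f": 72.0, "supply_f": 59.0},
]
it_load_kw = 160.0  # measured IT load for the room (hypothetical)

def sensible_kw(cfm, return_f, supply_f):
    """Sensible cooling delivered by one unit, in kW."""
    return 0.316 * cfm * (return_f - supply_f) / 1000

total_kw = 0.0
for unit in units:
    q = sensible_kw(unit["cfm"], unit["return_f"], unit["supply_f"])
    total_kw += q
    print(f"{unit['name']}: {q:.1f} kW sensible")

ratio = total_kw / it_load_kw
print(f"Room operating sensible capacity: {total_kw:.1f} kW")
print(f"Cooling capacity to IT load ratio: {ratio:.2f}")  # benchmark for later improvements
```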

Still, it's important to consult with IT engineers. They can help you determine the maximum allowable intake temperature that will not damage your IT equipment in a new environment. Using your collected data, you can create a work plan to establish your goals. You can also use the information to determine the metrics that you'll monitor to ensure that the cooling environment functions properly.

You’ll also want to develop a back-out plan just in case you have any problems along the way. Finally, you want to pinpoint the performance metrics that you’ll track. For instance, you might track inlet temperatures. Conversely, you may monitor power consumption or other metrics.

Data Center Cooling Best Practices

It can prove challenging to figure out where to start with upgrades for data center environmental control. A few data center cooling best practices can help in this regard. There are many variables that can affect the airflow in your data center, from the types of data racks to the cable openings. By following airflow management best practices, however, you can avoid equipment failures. The following strategies can help improve your data center airflow management for better efficiency:

  • Manage the cooling infrastructure
  • Block open spaces to prevent air bypass
  • Manage data center raised floors

What follows are details for these strategies.

Best Practice 1: Manage the Cooling Infrastructure

Data centers use a lot of electricity. For this reason, they need an intense cooling infrastructure to keep everything working correctly. To put this in perspective, according to the US Department of Commerce, the power densities of these facilities, measured in kilowatts (kW) per square foot (ft²) of building space, can be nearly 40 times higher than the power densities of commercial office buildings.

If you need to improve the airflow in your data center, you may want to consider changing the cooling infrastructure. For example, you may reduce the number of operating cooling units to meet the needed capacity. Alternatively, you might raise the temperature without going over your server intake air temperature maximum.

Best Practice 2: Block Open Spaces

It’s vital to close all open spaces under your racks. It’s also important to close open spaces in the vertical planes of your IT equipment intakes.

You must also close any open spaces in your server racks and rows. Spaces here can cause your airflow balance to get skewed.

Also, you’ll want to seal any spaces underneath and on the sides of cabinets as well as between mounting rails. You’ll also want to install rack grommets and blanking panels. In this way, you’ll ensure that there aren’t any unwanted gaps between your cabinets.

Best Practice 3: Manage Data Center Raised Floors

Also, you'll want to monitor the open area of the horizontal plane of your raised floor. Openings in your raised floor can let supply air bypass the equipment it is meant to cool. This circumstance can also skew the airflow balance in your data center.

You’ll want to manage the perforated tile placement on your raised floor to avoid this problem. You must also seal cable openings with brushes and grommets. Finally, you’ll need to inspect the perimeter walls underneath the raised floor for partition penetrations or gaps.

Choosing a Data Center Cooling Design

There are a few emerging data center cooling methods in the computer room air conditioning (CRAC) space, such as data center water cooling. For example, you might want to consider advanced climate controls to manage airflow.

State-of-the-art data centers incorporate new ways to optimize the cooling infrastructure for greater efficiency, and several technologies now make precision data center environmental control possible.

Usually, the data center cooling methods that you choose are driven by site conditions. An experienced consultant can help you to select the right data center cooling design.

Your Partner in Data Center Air Control

Now you know more about data center cooling best practices. What you need now is a well-qualified expert in data center cooling. Data Aire has more than 50 years of experience. We've helped firms find innovative answers for emerging demands.

At Data Aire, we’re a solutions-driven organization with a passion for creativity. Furthermore, we believe in working closely with our clients during the consultative process. We can give you access to extensive expertise and control logic. By partnering with us, you’ll enjoy world-class manufacturing capability recognized by leading international quality certifications.

Contact Data Aire today at (800) 347-2473 or connect with us online to learn more about our consultative approach to helping you choose the most appropriate environmental control system for your data center.

Data center numbers are growing – but is your efficiency falling?

The latest AFCOM State of the Data Center report, post-COVID, indicates strong growth in the data center space. This includes cloud, edge, and even colocation space. However, it also notes that many are looking even deeper into efficiency as demand for data, space, and power continues to grow. This special FastChat looks at the very latest data center trends and outlines some of the top data center efficiency designs.

Specifically, you’ll learn about:

  • The latest data center growth and efficiency trends from the AFCOM State of the Data Center Report
  • The challenges with compromises and cooling that come with speed and scale
  • Top 3 List: Know what to ask your vendor
  • Top 3 List: Understand which technologies are helping evolve the efficiency of our industry

Learn more about economization solutions from Data Aire. And discover how we can help you scale your data center at your desired pace.

Read our latest guide, which highlights proven ways to conserve energy in your data center.