
Businesses increasingly rely on data centers to store and process reams of complex information. A typical organization has more than a dozen data centers, which can range from a server room in the office to large, remote sites.

Keeping all that processing power at the optimal temperature — below 80 degrees Fahrenheit — requires a lot of resources. Data centers account for more than 1% of world energy usage, which is more than the energy needs of some nations. These facilities can also use a considerable amount of water, with about 1.8 liters used for every 1 kilowatt-hour the center uses.

Bisnow spoke with Eric Jensen, vice president and general manager of cooling solutions company Data Aire, about how data center operators can reduce both their operating costs and environmental footprints. A solution for many facilities, he said, is a hybrid approach to cooling that combines refrigerant-based direct expansion air cooling with water-based cooling.

Bisnow: What is ‘precision cooling’ and why is it important in data centers?

Jensen: Precision cooling is air conditioning that is designed specifically for IT equipment in data centers and computer rooms. With precision cooling, you’re using only the energy that you need.

In the case of colocation providers, which are increasingly popular, precision cooling is even more critical because they’re very much trying to manage their allocation of power, especially in a multi-tenant type of operation.

Bisnow: Server rack density continues to grow to keep up with demand. What impact does that have?

Jensen: As the densities grow, you end up with higher airflow requirements per kilowatt of cooling. Those higher air flows require more control of various fans, which are typically configured in an array.

Now, you’ve got airflow requirements across an array of fans as opposed to one or two fans. Efficient control of that equipment becomes a critical piece of designing and operating your data center.

Bisnow: What should a data center’s priorities be when selecting an environmental control system?

Jensen: Capacity and reliability are givens, but beyond that, it’s efficient power utilization and scalability.

Scalability refers to a couple of different things. One aspect of scalability is very tactical, and it is the ability to ramp up and down as the load demand changes, either daily or seasonally, or potentially even from one point in a data center to another.

A bit more of a strategic consideration is scalability over time. Depending on the type of data center that’s being built or operated, the time horizon might range from as low as five years and up to 20 or 30 years. Scalability over a long period of time requires the ability to add additional equipment to keep pace with growth.

Bisnow: What impact does all of this have on a data center’s carbon footprint and water usage?

Jensen: A higher-efficiency cooling process naturally means less power consumption. Depending on the kinds of technologies that are employed, an efficient process could also lead to reduced refrigerant management, which is highly important for the planet.

The industry is migrating to lower global warming-potential types of refrigerants, but that comes with a tradeoff in the form of high operating pressures and potentially higher flammability. Like with everything, it’s always a balancing act, but refrigerant management is key.

Whether you’re talking about an air-cooled chiller or a chilled water type of a facility, refrigerant management becomes important because the more total refrigerant that you have on-site, the higher your carbon footprint is going to be.

Reducing your total refrigerant usage on-site is one piece of operating more sustainably. The other piece of it is to reduce some of your risk associated with any leakage of refrigerant through pipes. Leakage is always a risk just through routine maintenance.

Data center environmental management also concerns water consumption. The strategies you choose for your cooling infrastructure come into play quite a bit here, because energy economization is a way to reduce both your carbon footprint and your water utilization.

There are lots of different economization technologies out there, but essentially there are air-side and water-side technologies. On the air side, you can introduce outside air on a cold, dry day, mix it with your recirculated indoor air and reduce your carbon footprint through reduced energy consumption.

Another method is water-side economization. This introduces more infrastructure but water-side economization can end up being very, very efficient.

Bisnow: Why do you think a hybrid solution that combines air and water cooling is the best solution?

Jensen: It can provide the best of both worlds in terms of reliability, scalability and efficiency. Air-side economization is not necessarily easy to achieve because it’s hard to predict the outside environment and how many hours of the year you might be able to have suitable air quality. However, water-side economization is easier to achieve.

Bisnow: Can you give an example of a successful application of hybrid cooling?

Jensen: A good example is One Wilshire, a 50-year-old office building in Los Angeles that was converted for use as a data center. The client did the research to understand its water and energy consumption needs.

Thanks to the reduction in power consumption realized from the higher-efficiency evaporative cooling strategy they chose, they found that the hybrid solution would work for them.

One Wilshire also is a prime example of adaptive reuse of commercial real estate, and it was made possible in part by the use of a hybrid cooling system.

 

This article was originally produced in collaboration between Studio B and Data Aire and published on Bisnow.

Studio B is Bisnow’s in-house content and design studio. To learn more about how Studio B can help your team, reach out to studio@bisnow.com

Contact John Krukowski at john.krukowski@bisnow.com

By Moises Levy, PhD  January 15, 2021

There is a better way to assess data center behavior. Novel multidimensional metrics have been incorporated in data center standards and best practices.

Learning Objectives

  • Understand the importance of multidimensional data center metrics, comprising performance and risk.
  • Recognize how multidimensional metrics enable a holistic understanding of data centers.
  • Identify the challenges of measuring data center performance and risks.

Data centers comprise information technology equipment and supporting infrastructure such as power, cooling, telecommunications, fire systems, security and automation. A data center’s main task is to process and store information securely and to provide users uninterrupted access to it. These mission critical facilities are very dynamic: equipment can be upgraded frequently, new equipment can be added, obsolete equipment may be removed, and old and new systems may be in use simultaneously.

Our increasing reliance on data centers has created an urgent need to adequately monitor these energy intensive facilities. Data centers are responsible for about 1% of total electricity consumption. The environmental impact of data centers varies depending on the energy sources used and the total heat generated. The information technology sector has also contributed to a reduction in carbon emissions in other sectors.

In the United States, it is estimated that for every kilowatt-hour consumed by the IT sector, 10 kilowatt-hours are saved in other sectors due to increases in economic productivity and energy efficiency. The growing ubiquity of IT-driven technologies has revolutionized and optimized the relationship between productivity, efficiency and energy consumption across every sector of the economy.

Data Center Metrics

Metrics are measures of quantitative assessment that communicate important information and allow comparisons or tracking of performance, progress or other parameters over time. Through different metrics, data centers can be evaluated against established goals or compared to similar data centers. Variations or inconsistencies in measurements can produce a false result for a metric, which is why it is very important to standardize them. Most of the current metrics, standards and legislation on data centers are focused mainly on energy efficiency.

Existing metrics fail to incorporate important factors for a holistic understanding of data center behavior, including different aspects of performance and the risks that may impact it. As a result, comparing data center scores to evaluate areas for improvement is not an easy task.

Furthermore, there currently is no metric that examines performance and risk simultaneously. A data center may have high performance indicators yet a high risk of failure. Having access to risk indicators may work as an early warning system so that mitigation strategies can be planned and actions undertaken.

BICSI 009-2019: Data Center Operations and Maintenance Best Practices incorporates in Section 10.5: Metrics and Measurement the concept of multidimensional data center metrics, comprising performance and risk. Performance is assessed across four different sub-dimensions: productivity, efficiency, sustainability and operations. Risks associated with each of those sub-dimensions and external risks should be contemplated.

In addition, ANSI/BICSI 002-2019: Data Center Design and Implementation Best Practices indicates in Section 6.7: Design by Performance that data center performance can be examined through various factors, including the previously mentioned sub-dimensions. It also notes that many existing metrics have been developed to measure these areas of performance, and it points to the need to address the risks that may affect it.

Figure 1 illustrates the concept of the next generation of multidimensional data center metrics. A correlation between the different elements can exist; for the sake of the explanation, it is assumed that there is no correlation between different performance sub-dimensions.

Figure 1: Multidimensional data center metric comprising performance (productivity, efficiency, operations and sustainability) and risk are shown. Courtesy: DCMetrix

 

 

Given some premises, a data center may be ideal at a certain point in time, but when conditions change that same data center may not be optimal. Following is a more detailed explanation of the metric.

Data Center Performance

Four sub-dimensions are used to assess data center performance: productivity, efficiency, sustainability and operations.

Productivity: Productivity gives a sense of work accomplished or “useful work,” which can be understood as the sum of weighted tasks carried out in a period of time. Examples of tasks are transactions, amount of information processed or units of production. The weight of each task must be allocated depending on its importance.

A normalization factor should be considered to allow the addition of different tasks. Key productivity indicators include the ratio of useful work accomplished to energy consumption, physical space and costs. Costs usually include capital expenditures and operating expenses.
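To make the arithmetic concrete, the sketch below computes a useful-work productivity indicator as a weighted sum of completed tasks per kilowatt-hour. The task names, weights and figures are illustrative assumptions, not values from any standard.

```python
# Minimal sketch (illustrative only): "useful work" as the weighted sum of
# tasks completed in a period, then expressed per kilowatt-hour consumed.

def useful_work(task_counts, task_weights):
    """Weighted sum of completed tasks; weights reflect each task's importance."""
    return sum(task_counts[t] * task_weights[t] for t in task_counts)

def productivity_per_kwh(task_counts, task_weights, energy_kwh):
    """Ratio of useful work accomplished to energy consumed."""
    return useful_work(task_counts, task_weights) / energy_kwh

if __name__ == "__main__":
    counts = {"transactions": 1_200_000, "gb_processed": 850}   # hypothetical
    weights = {"transactions": 1.0, "gb_processed": 50.0}       # hypothetical
    print(productivity_per_kwh(counts, weights, energy_kwh=3_500))
```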

Downtime and quality of service must also be considered, as they affect productivity. A measurement within this category may calculate the impact of downtime on productivity, measured as the useful work that was not completed as well as other indirect tangible and intangible costs due to this failure. Quality of service measurements can include variables such as maximum or average waiting time, latency, scheduling and availability of resources.

Efficiency: Efficiency has been given substantial attention due to the high energy consumption of data centers. Many metrics have been proposed to measure efficiency. The most widely used is power usage effectiveness (PUE), which assesses site infrastructure efficiency as the ratio of the total energy used by the data center to the energy consumed by the ITE.
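As a quick illustration of that ratio, here is a minimal PUE calculation in Python; the metered energy figures are placeholders.

```python
# Minimal sketch: power usage effectiveness (PUE) from metered energy totals.
# An ideal PUE is 1.0; anything above 1.0 is energy spent on cooling, power
# distribution losses and other non-IT loads. Figures below are illustrative.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5
```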

There are additional examples of key efficiency indicators. ITE usage metrics (e.g., power, processing capacity, central processing unit, memory, storage, communication) promote efficient operation of IT resources. Physical space usage metrics promote efficient planning of physical space. Other key indicators can gauge how energy efficient ITE, power systems and environmental systems are.

Sustainability: Sustainability can be defined as development that addresses current needs without jeopardizing future generations’ capabilities to satisfy their own needs. Nowadays sustainability initiatives are gaining substantial attention. Companies such as Google, Microsoft, Facebook, Amazon and Apple are undertaking significant efforts to reduce greenhouse gas emissions, to become carbon neutral or negative and to at least match electricity consumption with renewable energy.

Examples of key sustainability indicators include the ratio of green energy sources to total energy, the carbon footprint and the water usage. In addition, an evaluation may be conducted on how environmentally friendly the related processes, materials and components are.

Operations: Key operation indicators gauge how well-managed data centers are. This incorporates an analysis of the maturity level of operations and processes, including site infrastructure, IT equipment, maintenance, human resources training and security systems. Audits of systems and processes are necessary to collect the required data. This data should include factors such as documentation, planning, human resources activities and training, status and quality of maintenance, service level agreement and security. 

Risks for Data Centers

Data center performance cannot be completely evaluated if the risks that may impact it are not considered. End-to-end resource optimization must involve risk. Risk can be defined as uncertainties or potential events that, if materialized, could impact the performance of the data center.

For our purposes the risk level is defined as the product of the probability of occurrence (Po) of an event times its impact (I), normalized to the desired scale.

Risk = Po x I
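The short sketch below applies this formula and rescales the product to a 0 to 100 score; the impact scale and normalization bounds are assumptions for illustration.

```python
# Minimal sketch of the risk formula above: probability of occurrence times
# impact, rescaled to a 0-100 score. The impact scale is an assumption.

def risk_score(probability: float, impact: float,
               max_impact: float = 10.0, scale: float = 100.0) -> float:
    """probability in [0, 1], impact in [0, max_impact]; result in [0, scale]."""
    return (probability * impact) / max_impact * scale

print(risk_score(probability=0.05, impact=8.0))  # 4.0 on a 0-100 scale
```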

The user may implement actions to achieve the optimum performance and later adjust that performance to a tolerable level of risk, which may again deviate the key indicators from their optimum performance. The acceptable level of risk should consider the risk appetite.

Risks associated with the sub-dimensions of performance, as well as external risks, which usually are independent of performance, are also measured through metrics. A common strategy to reduce the probability of failure is redundancy of resources, but it may affect performance and costs.

Risks Associated with Performance

Productivity risk: It should consider present and past data for parameters that may affect the useful work such as downtime and quality of service. The impact may consider the useful work that was not completed properly, as well as related tangible and intangible costs.

Efficiency risk: If resource usage is close to or at capacity, it means that the risk of future projections not being met is high. The ratio of processing, IT resources, physical space and power usage, to their respective total capacities should be factored in.

Sustainability risk: Analysis of historic behavior of the different green energy sources, the composition of each energy source and its probability of failure should be assessed.

Operations risk: Analyses of historical data are needed to estimate the probability of failure due to improper operation in the areas identified and its impact.

External Risks: Site risk

A data center site risk metric is a component of the multidimensional data center metric. The methodology of the site risk metric identifies potential threats and vulnerabilities (risk identification), which are divided into four main categories: utilities; natural hazards and environment; transportation and adjacent properties; and regulations, incentives and others. The allocation of weights among each category is based on the significance of each of these factors on the data center operation.

The methodology quantifies the probability of occurrence of each event and estimates potential impact (risk analysis). It calculates the total risk level associated with the data center location by multiplying the probability of occurrence by the impact of each threat. That product is then multiplied by the respective assigned weight and normalized.

Through this analysis the different threats can be prioritized. Understanding risk concentration by category facilitates analyzing mitigation strategies (risk evaluation). This methodology provides solid guidance for risk assessment of a data center site.
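The following sketch illustrates how such a weighted, normalized site risk score might be computed under the methodology described above. The threats, probabilities, impacts and category weights are hypothetical examples, not values from the standard.

```python
# Illustrative sketch of the site risk methodology: each threat has a
# probability of occurrence, an impact, and belongs to a weighted category.
# All values below are hypothetical.

THREATS = [
    # (category, probability of occurrence, impact on a 0-10 scale)
    ("utilities",                     0.10, 7),
    ("natural hazards / environment", 0.02, 9),
    ("transportation / adjacent",     0.05, 5),
    ("regulations / incentives",      0.20, 3),
]

CATEGORY_WEIGHTS = {  # allocation is an assumption; weights sum to 1.0
    "utilities": 0.35,
    "natural hazards / environment": 0.30,
    "transportation / adjacent": 0.20,
    "regulations / incentives": 0.15,
}

def site_risk(threats, weights, max_impact=10.0):
    """Weighted, normalized sum of probability x impact over all threats (0-100)."""
    total = 0.0
    for category, probability, impact in threats:
        total += weights[category] * probability * (impact / max_impact)
    return total * 100

print(round(site_risk(THREATS, CATEGORY_WEIGHTS), 1))
```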

Visualization Tool

To enable cross-comparability, all the different indicators should be normalized. For key performance indicators, a higher value implies a more positive outcome, so minimum and maximum values correspond to the worst and best possible expected outcomes. Conversely, for key risk indicators, a higher value implies a higher level of risk, therefore a less desirable scenario.

Spider graphs allow visual comparisons and trade-off analysis between different scenarios. This is helpful when simulating or forecasting different strategies or reporting to stakeholders. Figure 2 shows an example of data center comparison. Edges of diamonds show measurement of the four dimensions of key performance indicators: productivity (P), efficiency (E), sustainability (S) and operations (O). The larger the diamond, the better the performance. Risks can be analyzed in similar spider graphs.

Figure 2: This provides a data center key performance indicators comparison of the same data center at two different points in time or two different data centers, using spider graphs. Courtesy: DCMetrix
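The snippet below sketches how such a comparison could be produced with matplotlib: raw indicators are min-max normalized so that a larger diamond means better performance, then plotted on a polar axis. All indicator values and their worst/best bounds are illustrative assumptions.

```python
# Sketch of the visualization described above: normalized KPIs drawn on a
# polar ("spider") chart. Values and bounds are illustrative only.

import numpy as np
import matplotlib.pyplot as plt

LABELS = ["Productivity", "Efficiency", "Sustainability", "Operations"]

def normalize(value, worst, best):
    """Map a raw indicator onto [0, 1], where 1 is the best expected outcome.
    Works for inverted indicators (e.g., PUE) because 'worst' may exceed 'best'."""
    return (value - worst) / (best - worst)

# Same data center at two points in time (hypothetical raw indicators).
bounds = [(0, 100), (3.0, 1.0), (0, 100), (0, 100)]   # (worst, best) per KPI
before = [normalize(v, w, b) for v, (w, b) in zip([55, 1.8, 30, 60], bounds)]
after  = [normalize(v, w, b) for v, (w, b) in zip([70, 1.4, 55, 75], bounds)]

angles = np.linspace(0, 2 * np.pi, len(LABELS), endpoint=False).tolist()
ax = plt.subplot(polar=True)
for values, name in [(before, "Before"), (after, "After")]:
    data = values + values[:1]                 # close the polygon
    ax.plot(angles + angles[:1], data, label=name)
    ax.fill(angles + angles[:1], data, alpha=0.15)
ax.set_xticks(angles)
ax.set_xticklabels(LABELS)
ax.legend()
plt.show()
```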

Data Center Automation

Data centers are evolving toward digital and intelligent infrastructures. With the ubiquitous presence of sensing devices, IoT and new technologies, it is easier to automate the process to collect, in real time, different parameters to assess metrics. We must understand the relevant data to be gathered rather than simply collecting more data.

Such massive amounts of data can be used for predictive analytics, to visualize trends and behavior over time. New tools, including artificial intelligence and machine learning, contribute to improving the prediction process.

Lastly, this data can be used for prescriptive analytics, to generate prioritized, actionable recommendations. We should not forget that data center end-to-end resource management is an iterative process.

The multidimensional data center metric incorporated in data center best practices allows a comprehensive assessment of the data center, combining performance (productivity, efficiency, sustainability and operations) and risks (associated with performance and site risk). It allows data centers to be ranked and compared with one another, and it allows before-and-after comparisons, or comparisons of different scenarios, for the same data center. It is flexible, so performance and risk measurements can be selected and updated.

Actions undertaken may impact the metric results in real time. In such cases, when variables are remeasured, the result of the metric should change. That way, implementation of new strategies may lead to the modification of the overall data center performance and risk to a more desirable score.

 

Moises Levy, Ph.D., is CEO of DCMetrix. Levy has dedicated more than 15 years to developing and deploying data center projects.

Thought Cloud Podcast

In this podcast, Eric Jensen, VP/GM of Data Aire, discusses precision cooling and how it’s changing across the evolving landscape of data centers from large hyperscale facilities to the distributed edge. Learn about energy efficient cooling strategies for mission critical operations.

Specifically, you’ll learn about:

  • What precision cooling is in the mission critical industry
  • How ASHRAE’s broadening of the operating envelopes has affected the industry
  • What cooling means for different sectors of the data center market
  • What role energy efficiency plays in the changing digital landscape
  • New strategies for cooling and energy efficiency


Brought to you by The Thought Cloud podcast, from Mission Critical.

Listen to the Podcast

 


In this blog, Eric Jensen, VP / GM of Data Aire explores how a scalable, flexible and energy efficient cooling infrastructure can meet density needs and differentiate your data center from the competition. 

Differentiating Your Data Center with Scalable Cooling Infrastructure

There are opposing forces occurring right now in the data center industry. It seems that while big facilities are getting bigger, there are also architectures that are trying to shrink footprints. As a result, densities are increasing. Part of the conversation is shifting to density efficiency: being able to support an economy of scale, but also to support it in a much more sustainable manner.

Follow the Customer’s Journey

For most, especially over a multi-year transition, you must be able to accommodate wide ranges within the same facility. It’s about balance and a return out of your portfolio — striving for efficiency with technology and what will benefit the company over time. Next question…what kind of cooling is needed to meet your customer’s journey?

While new facilities may get a lot of airtime in the news, not everyone is trying to build massive data centers. Many are trying to fill the spaces they already have. Now is the time for the data center community to ask what this current transition looks like for them. Are they trying to improve operations or manage efficiency, and how can this transition go more smoothly?

It’s understandable to want to design for 12 kW to 15 kW per rack so you are prepared for the foreseeable future, but the reality for many operators is still in that 6 kW to 12 kW range. So, the concern becomes one of reconciling immediate needs with those of the future.

Scalability and Flexibility Go Hand in Hand

It’s important to achieve elasticity to support the next generation of customer types. As an example, the question being asked in the market today has become: how do you individualize the ability to support hot spots in an efficient manner without burning square footage? Since you are predicting a five- or even 10-year horizon in some cases, space design needs to remain flexible. Do you keep the design adaptable to accommodate the possibility of air-side distribution or a flooded room, or the need to go back to chilled water applications for chip- or cabinet-level cooling to support a higher density level?

When we’re discussing cooling infrastructure and the need to scale over time, it’s important to understand that we’re talking about designing for three to six times the density we’ve been designing for up until this point. Since computer rooms and data centers consume large amounts of power, computer room air conditioner (CRAC) manufacturers, like Data Aire, have dedicated their engineering teams to research to create the most scalable, flexible and energy efficient cooling solutions to meet these density needs.

Uptime Institute Density Chart

It boils down to this: to meet your density outlook and stay flexible, what kind of precision cooling system can support your need to maximize server space, minimize low pressure areas, reduce costs, and reduce requirements? You should be encouraged knowing that this ask is achievable in the same kinds of traditional ways, with no need to reinvent the wheel, or in this case, your environmental control system. There are a variety of solutions to be employed, whether DX or chilled water — lots of different form factors, one ton to many tons.

So, whether you’re thinking about chilled water for some facilities or DX (refrigerant-based) solutions for other types of facilities, both can achieve scale in the traditional perimeter cooling methodologies without the need to completely rethink the way you manage your data center and the load coming from the servers. Chilled water solutions may be an option because those systems are getting much larger at the cooling unit level, satisfying the density increase simply through higher CFM per ton. Multi-fan arrays are very scalable, and you can modulate delivery from 25 to 100 percent, depending on whether you are scaling over the life of the buildout or scaling back to the seasonality of the business for whoever the IT consumer is.

DX solutions are achievable from a good, better, best scenario. Good did the job back in the two to four kilowatt per rack days. However, nowadays, variable speed technologies are well established, and they can scale all the way from 25 to 100 percent just like chilled water.

At Data Aire, our engineers are seeing more dual cooling systems designed at the facility level. And so, dual cooling affords the redundancy of the infrastructure. Of course, that’s important in the data center world. And it also introduces the opportunity for economization.

Density, Efficiency and Economy of Scale

The entire concept of doing more with less — filling the buckets but still needing the environment and ecosystem to scale — is playing an important role in the transition operators are facing. With regard to greater airflow delivery per ton of cooling, it’s extremely achievable without the need to dramatically alter the way you operate your data center, which is essential because every operator is in transition mode. They are transitioning their IT architecture, their power side and their cooling infrastructure. An efficient environment adapts to IT loads. The design horizon should keep scalable and efficient cooling infrastructure in mind to help future-proof for both known and unplanned density increases.

 

This article was originally published in Data Center Frontier.

In recent years, the conversation in the data center space has been shifting to density efficiency and supporting economy of scale in a sustainable manner. Check out a discussion between Bill Kleyman, EVP of Digital Solutions at Switch and Eric Jensen, VP/GM of Data Aire, where they predominantly focus on the topic of how increasing densities are impacting data centers.

Current Trends in Precision Cooling

Bill Kleyman:
It’s fascinating, Eric, to look at what’s been happening in the data center space over the past couple of years, where the importance and the value of the data center community has only continued to increase.

And a big part of the conversation, something that we’ve seen in the AFCOM reports published on Data Center Knowledge, is that we’re not really trying to build massive, big facilities. We’re trying to fill the buckets that we already have.

And so, the conversation is shifting to density efficiency, being able to support an economy of scale but also the ability to support that in a much more sustainable manner.

So that’s where we really kind of begin the conversation. But Eric, before we jump in for those that might not be familiar, who or what is Data Aire?

Eric Jensen:
Data Aire provides environmental control solutions for the data center industry, specifically precision cooling solutions, through a variety of strategies that are employed, whether DX or chilled water — lots of different form factors, one ton to many tons.

Bill Kleyman:
Perfect. You know the conversation around density. I’ve been hopping around some of these sessions here today at Data Center World and I’m not going to sugarcoat it, it’s been pretty prevalent, right? It’s almost nonstop and you know we’re going to talk about what makes cooling cool. You see what I did, since I’m a dad I could do those dad jokes. Thanks for everyone listening out there.

In what ways have you seen densities impact today’s data centers?

Eric Jensen:
So, I think the way that you opened up the discussion with filling the buckets is exactly right. There are opposing forces happening right now in the data center industry. It seems that while big facilities are getting bigger, there are also architectures that are trying to shrink footprints. So as a result, densities are increasing. A lot of what we see, what hits the news or what is fun to talk about, are the people who are doing high performance compute — 50, 70, 100 kW per rack. Those applications are out there.

But traditionally, the data center world for many years was two to four kW per rack…

Bill Kleyman:
Very true.

Eric Jensen:
And now that is increasing. Data Aire has seen this issue of high density, and I think this is backed up by AFCOM’s State of the Data Center Report. Other reliable sources have corroborated the same thing: densities are higher.

They’re higher today than they were previously, and that’s posed some other challenges. We’re now looking at maybe eight to 12 kW, and people are designing for the future, which makes sense.

Nobody wants to get caught unawares three or five years down the road. So, it’s understandable to want to design for 12 or 15 kW per rack. But the reality for many operators is still in that 6, 8, 10, 12 kW range — and so how do you reconcile that? And that range is happening for a number of different reasons. It’s either because of the scaling of deployment over time as it gets built out, or it’s because of the tenant’s type of business or the seasonality of their business.

Bill Kleyman:
You brought up a really good point. I really like some of those numbers you threw out there. So, the 2021 AFCOM State of the Data Center Report, which every AFCOM member has access to, points out what you said: that the average rack density today is between 7 and 10 kilowatts per rack. And then some of those hyperscalers are at 30, 40, 50, 60 kilowatts, and when you talk about liquid cooling, where they’re pushing triple digits, you start to really have an interesting conversation.

You said something really important in your last answer. Can you tell me how this impacts scale? The entire concept of doing more with less, filling the buckets but still needing the environment and ecosystem to scale?

Density, Efficiency and Economy of Scale

Eric Jensen:
Of course. So, you still have to satisfy the density of load, and it is achievable in the same kinds of traditional ways. However, it’s important to keep up with those form factors and that technology.

So, whether you’re talking about chilled water for some facilities or DX (refrigerant-based) solutions for other types of facilities, both can achieve scale in the traditional perimeter cooling methodologies without the need to completely rethink the way that you manage your data center and the load coming from those servers.

Chilled water solutions are doing it today because those systems are getting much larger at the cooling unit level; that’s satisfied simply by higher CFM per ton.

With regard to greater airflow delivery per ton of cooling, it’s extremely achievable without the need to dramatically alter the way you operate your data center, which is really important nowadays because every operator is in transition mode. They are transitioning their IT architecture, their power side and also their cooling infrastructure.

It’s very doable now, as long as you are engineering to order. With chilled water solutions, multi-fan arrays are very scalable, and you can modulate delivery from 25 to 100 percent, depending on whether you are scaling over the life of the buildout or scaling back to the seasonality of the business for whoever the IT consumer is.

And if it’s DX solutions, refrigerant based solutions, that’s achievable from a good, better, best scenario. Good did the job back in the two to four kilowatt per rack days. However, nowadays, variable speed technologies are out there, and they can scale all the way from 25 to 100 percent just like chilled water.

What we’re seeing at Data Aire is that a lot of systems designed at the facility level are more dual cooling. And so, dual cooling affords the redundancy of the infrastructure. In the data center world, we like to see redundancy. But it also introduces the opportunity for economization.

Bill Kleyman:
You said a lot of really important things. Specifically, you said that we are in a transition.

I want everyone out here in the Data Center World live audience and everyone listening to us virtually to understand that we are in a transition. We genuinely are experiencing a shift in the data center space and this is a moment for everybody, I think, to kind of, you know, reflectively and respectively ask what does that transition look like for me? Am I trying to improve operations, am I trying to do efficiency, and does this transition need to be a nightmare?

From what you said, it really doesn’t. And that brings me to this next question.

We’ve talked about scalability. We’ve talked about how this differs across different kinds of cooling technologies and different kinds of form factors. And obviously, all these things come into play.

So, what new technologies are addressing these modern concerns and transitions?

Eric Jensen:
For what we see in the industry, those new technologies are less a matter of form or function or form factor and much more at the elemental level. So, what we’re working on, I can only speak so much to…we’re working on nanotechnologies right now. And so, we’re bringing it down to the elemental level, and that’s going to be able to mimic the thermal properties of water with non-water-based solutions.

Bill Kleyman:
You’re working on nanotechnology?

Eric Jensen:
Yes, sir.

Bill Kleyman:
And you just tell me this now at the end of our conversation?

Well, if you want to find out more about nanotechnology and what Data Aire is doing with that, please visit dataaire.com. Pick up the phone, give someone at Data Aire a call. I know we might not do that as often as we could. I’m definitely going to continue this conversation with you and learn more about the nanotech that you’re working on, but in the meantime thank you so much for joining us again.

Intelligent Data Center Cooling CRAC & CRAH Controls

In an ever-changing environment like the data center, it’s most beneficial to have as many intelligent systems working together as possible. It’s amazing to think of how far technology has come, from the old supercomputers the size of four filing cabinets to present-day data centers that are pushing 1 million square feet.

Managing a Data Center’s Needs

Historically, managing a data center was fairly straightforward. With all this growth, we find ourselves digging into the nuances of every little thing: data center cooling, power and rack space, among hundreds of other minute aspects. This is far too much for a data center manager to manage and comprehend alone, so implementing systems that can talk to each other has become a must.

When evaluating the cooling side of the infrastructure, there are real challenges that may make you want to consider hiring a team of engineers to monitor your space constantly.

  • Most sensible room capacities vary constantly during the first year or two of build-out.
  • This creates a moving target for CRAC/CRAH systems to hit within precise setpoints, which can create a lot of concern among data center managers about hot spots, consistent temperatures and CRAC usage.
  • Just when you think the build-out is done, someone on the team decides to start changing hardware and you’re headed down the path of continuous server swap-outs and capacity changes.

It really can turn into a game of chasing your own tail, but it doesn’t have to.

Reduce Your Stress Level


Data Aire has created the Zone Control controller to address the undue stress imposed on data center managers. Zone Control allows CRAC and CRAH units to communicate with each other and deduce the most efficient way possible to cool the room to precise set-points.

No longer will you or your colleagues need to continually adjust set-points. And as previously mentioned, it’s incredibly beneficial to have as many intelligent systems working together as possible. Zone Control creates open communication and dialogue between all units on the team.

CRAC & CRAH Units Should Work Together Like an Olympic Bobsledding Team

I like using a sports analogy to illustrate this idea. Just like in sports, all players on the team must know their own role, and all players doing their part creates the most efficient team. As I watched the 2018 Winter Olympics, I started thinking about the similarities between a four-man bobsled team and how CRAC/CRAH units communicate through Zone Control.

Stay with me here…the bobsled team starts off very strong to give as much of a jump out of the box as possible. Then each member hops into the bobsled in a specific order. Once they reach maximum speed, all members are in the bobsled and most are on cruise control, while the leader of the team steers them to the finish line. That’s Zone Control: the leader of the team.

Personal Precision Cooling Consultant

Let’s get back to data center cooling. When the units first start up, they ramp up to ensure enough cooling immediately. Then, as the controls and logic get readings of the room back, units start to drop offline into standby mode to vary down to the needed capacity of the room. They are able to talk to each other to sense where the hotter parts of the room are, ensuring the units closest to the load are running. Once they have gone through this process of checks and balances to prove out the right cooling capacities, they go into cruise control as Zone Control continues to steer.

This creates the most efficient and reliable cooling setup possible in each individual data center. Data center managers don’t need to worry about trying to find hot spots or about varying loads in the room. Zone Control is an intelligent communication control that works with CRAC/CRAH data room systems to identify the needs of the space and relay that message to the team. Think of it as your personal precision cooling consultant that always has the right system setup based on real-time capacities.
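As a purely conceptual sketch, not Data Aire’s actual Zone Control logic, the staging behavior described above might look something like this: prioritize the units nearest the warmest zones and put the rest in standby once enough capacity is online.

```python
# Conceptual sketch only -- not Data Aire's Zone Control logic. It mimics the
# behavior described above: units far from the warmest zones drop to standby
# once the running capacity matches the room load, keeping the units closest
# to the heat online.

def stage_units(units, zone_temps, required_kw, setpoint_c=24.0):
    """units: list of dicts with 'name', 'capacity_kw', 'zone'.
    zone_temps: measured temperature per zone. Returns names of active units."""
    # Prefer units in the hottest zones (largest deviation above setpoint).
    ranked = sorted(units,
                    key=lambda u: zone_temps[u["zone"]] - setpoint_c,
                    reverse=True)
    active, capacity = [], 0.0
    for unit in ranked:
        if capacity >= required_kw:
            break                      # remaining units stay in standby
        active.append(unit["name"])
        capacity += unit["capacity_kw"]
    return active

units = [
    {"name": "CRAC-1", "capacity_kw": 30, "zone": "A"},
    {"name": "CRAC-2", "capacity_kw": 30, "zone": "B"},
    {"name": "CRAC-3", "capacity_kw": 30, "zone": "C"},
]
print(stage_units(units, {"A": 27.5, "B": 24.2, "C": 23.8}, required_kw=45))
```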

Add V3 Technology to the Zone Control Team

You can go a step further in your quest to have the most efficient environmental control system safeguarding your data center: pair Zone Control with gForce Ultra. The Ultra was designed with V3 Technology. It is the only system on the market to include a technology trifecta of Danfoss variable speed compressors accompanied by an EEV and variable speed EC fans. gForce Ultra can vary down to the precise capacity assignments in the data room. Combine the Ultra with Zone Control and you have the smartest and most efficient CRAC system in the industry. Zone Control even has the logic to drop all Ultra units in a room down to 40% capacity and run them as a team in cruise control, rather than running half the units at 80%, because of the efficiency written into the logic.
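Part of the energy argument comes from how fan power falls off at reduced speed. As a back-of-the-envelope illustration, and an assumption rather than Data Aire data, the fan affinity laws say fan power scales roughly with the cube of speed, so spreading the same airflow across more units at lower speed draws less fan power.

```python
# Back-of-the-envelope illustration (an assumption, not Data Aire data): for
# the fan portion of the load, affinity laws say power scales roughly with
# the cube of fan speed, so four units at 40% airflow can draw less fan power
# than two units at 80% while moving the same total air.

FAN_RATED_KW = 5.0          # hypothetical rated fan power per unit at 100%

def fan_power(speed_fraction, rated_kw=FAN_RATED_KW):
    return rated_kw * speed_fraction ** 3

four_at_40 = 4 * fan_power(0.40)   # 4 * 5 * 0.064 = 1.28 kW
two_at_80  = 2 * fan_power(0.80)   # 2 * 5 * 0.512 = 5.12 kW
print(f"4 units @ 40%: {four_at_40:.2f} kW, 2 units @ 80%: {two_at_80:.2f} kW")
```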

If you are worried about your data center’s hot spots and CRAC usage, give us a call and we can get you set up with the most knowledgeable cooling brains around: Zone Control.

If you’ve ever done anything even remotely related to HVAC, you’ve probably encountered ASHRAE at some point. The American Society of Heating, Refrigerating and Air-Conditioning Engineers is a widely influential organization that sets all sorts of industry guidelines. Though you don’t technically have to follow ASHRAE standards, doing so can make your systems a lot more effective and energy efficient. This guide will cover all the basics so that you can make sure your data centers get appropriate cooling.

What Are the ASHRAE Equipment Classes?

One of the key parts of ASHRAE Data Center Cooling Standards is the equipment classes. All basic IT equipment is divided into various classes based on what the equipment is and how it should run. If you’ve encountered ASHRAE standards before, you may already know a little about these classes. However, they have been updated recently, so it’s a good idea to go over them again, just in case. These classes are defined in ASHRAE TC 9.9.

  • A1: This class contains enterprise servers and other storage products. A1 equipment requires the strictest level of environmental control.
  • A2: A2 equipment is general volume servers, storage products, personal computers, and workstations.
  • A3: A3 is fairly similar to the A2 class, containing a lot of personal computers, private workstations, and volume servers. However, A3 equipment can withstand a far broader range of temperatures.
  • A4: This has the broadest range of allowable temperatures. It applies to certain types of IT equipment like personal computers, storage products, workstations, and volume servers.[1]

Recommended Temperature and Humidity for ASHRAE Classes

The primary purpose of ASHRAE classes is to figure out what operating conditions equipment needs. Once you use ASHRAE resources to find the right class for a specific product, you just need to ensure the server room climate is meeting these needs.

First of all, the server room’s overall temperature needs to meet ASHRAE standards for its class. ASHRAE standards recommend that equipment be kept between 18 and 27 degrees Celsius when possible. However, each class has a much broader allowable operating range.[1] These guidelines are:

  • A1: Operating temperatures should be between 15°C (59°F) to 32°C (89.6°F).
  • A2: Operating temperatures should be between 10°C (50°F) to 35°C (95°F).
  • A3: Operating temperatures should be between 5°C (41°F) to 40°C (104°F).
  • A4: Operating temperatures should be between 5°C (41°F) to 45°C (113°F).[1]

You also need to pay close attention to humidity. Humidity is a little more complex to measure than temperature. Technicians will need to look at both dew point, which is the temperature when the air is saturated, and relative humidity, which is the percent the air is saturated at any given temperature.[2] Humidity standards for ASHRAE classes are as follows:

  • A1: Maximum dew point should be no more than 17°C (62.6°F). Relative humidity should be between 20% and 80%.
  • A2: Maximum dew point should be no more than 21°C (69.8°F). Relative humidity should be between 20% and 80%.
  • A3: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 85%.
  • A4: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 90%.[1]
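If you log temperature and humidity readings, a small helper like the one below can flag conditions that drift outside a class’s allowable envelope. The table values are transcribed from the ranges listed above and should be verified against the current ASHRAE TC 9.9 tables before relying on them.

```python
# Hedged helper: check whether a measured condition falls inside an ASHRAE
# class's allowable envelope. Values transcribed from the lists above; verify
# against the current ASHRAE TC 9.9 tables before use.

ALLOWABLE = {
    #        temp °C        dew point max °C   relative humidity %
    "A1": {"temp": (15, 32), "dew_point_max": 17, "rh": (20, 80)},
    "A2": {"temp": (10, 35), "dew_point_max": 21, "rh": (20, 80)},
    "A3": {"temp": (5, 40),  "dew_point_max": 24, "rh": (8, 85)},
    "A4": {"temp": (5, 45),  "dew_point_max": 24, "rh": (8, 90)},
}

def within_class(ashrae_class, temp_c, rh_percent, dew_point_c):
    env = ALLOWABLE[ashrae_class]
    return (env["temp"][0] <= temp_c <= env["temp"][1]
            and env["rh"][0] <= rh_percent <= env["rh"][1]
            and dew_point_c <= env["dew_point_max"])

print(within_class("A2", temp_c=24.0, rh_percent=45, dew_point_c=11.5))  # True
```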

Tips for Designing Rooms to Meet ASHRAE Data Center Cooling Standards

As you can see, ASHRAE guidelines are fairly broad. Just about any quality precision cooling system can easily achieve ASHRAE standards in a data center. However, a good design should do more than just consistently hit a temperature range. Planning the right design carefully can help reduce energy usage and make it easier to work in the data center. There are all sorts of factors you will need to consider.

Since most companies also want to save energy, it can be tempting to design a cooling system that operates toward the maximum allowable ASHRAE guidelines. However, higher operating temperatures can end up shortening equipment’s life span and causing inefficiently operated technology to use more power.[3] Carefully analyzing these costs can help companies find the right temperature range for their system.

Once you have a desired temperature set, it’s time to start looking at some cooling products. CRAC and CRAH units are always a reliable and effective option for data centers of all sizes. Another increasingly popular approach is a fluid cooler system that uses fluid to disperse heat away from high temperature systems. Many companies in cooler climates are also switching to environmental economizer cooling systems that pull in cold air from the outdoors.[3]

Much of data center design focuses on arranging HVAC products in a way that provides extra efficiency. Setting up hot and cold aisles can be a simple and beneficial technique. This involves placing server aisles back-to-back so the hot air that vents out the back flows in a single stream to the exit vent. You may also want to consider a raised floor configuration, where cold air enters through a floor cooling unit. This employs heat’s tendency to rise, so cooling air is pulled throughout the room.[4] By carefully designing airflow and product placement, you can achieve ASHRAE standards while improving efficiency.

Data Aire Is Here to Help

If you have any questions about following ASHRAE Data Center Cooling Standards, turn to the experts! At Data Aire, all of our technicians are fully trained in the latest ASHRAE standards. We are happy to explain the standards to you in depth and help you meet these standards for your data room. Our precision cooling solutions provide both advanced environmental control and efficient energy usage.

 

 

References:

[1] https://www.chiltrix.com/documents/HP-ASHRAE.pdf
[2] https://www.chicagotribune.com/weather/ct-wea-0907-asktom-20160906-column.html
[3] https://www.ibm.com/downloads/cas/1Q94RPGE
[4] https://www.simscale.com/blog/2018/02/data-center-cooling-ashrae-90-4/


It’s vital to keep your data center environment optimal to promote peak performance.

Data center cooling is a $20 billion industry. Cooling is the highest operational cost aside from the ITE load itself. It’s also the most important maintenance feature.

There are a few data center cooling best practices that can keep your data center humming along smoothly. These practices can help you to improve the efficiency of your data center cooling system. They can also help you to reduce costs.

It’s important to execute any changes to your data center cooling system carefully. For this reason, it’s vital to work with an experienced engineer before making any changes in a live environment.

To learn more about data center cooling best practices, continue reading.

The State of Data Center Environmental Control

Today, data center environmental control is one of the most widely discussed topics in the IT space. Also, there’s a growing discrepancy between older data centers and new hyperscale facilities. Despite age or scale, however, power utilization and efficiency are critical in any data center.

It’s well-known that data centers are among the largest consumers of electricity around the world. Today, data centers use 1% to 1.5% of all the world’s energy. What’s more, energy usage will only increase as more innovations emerge. These innovations include:

  • Artificial intelligence
  • Cloud services
  • Edge computing
  • IoT

Furthermore, these items represent only a handful of emerging tech.

Over time, the efficiency of technology improves. However, those gains are offset by the never-ending demand for increased computing and storage space. Firms need data centers to store information that enables them to satisfy consumer and business demands.

Accordingly, data center power density needs will increase every year. Currently, the average rack power density is about 7 kW. Some racks have a density of as much as 15 kW to 16 kW. However, high-performance computing typically demands 40 kW to 50 kW per rack.

These numbers are driving data centers to source the most energy efficient cooling systems available.

What Is the Recommended Temperature for a Data Center?

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) offers an answer to this question. ASHRAE suggests server inlet temperatures between 64.4°F and 80.6°F. Furthermore, the society recommends a relative humidity between 20% and 80%.

The Uptime Institute, however, has a different opinion.

The Institute recommends an upper temp limit of 77°F.

However, many data centers run much cooler, especially older ones. IT workers prefer to err on the side of caution to avoid overheating equipment.

Data Center Cooling Calculations

It’s important to understand current conditions before making your data center cooling calculations. For example, you’ll need to assess the current IT load in kilowatts. You’ll also need to measure the intake temperature across your data center. This measurement should include any hotspots.

At a minimum, you want to record the temperature at mid-height. You’ll also want to record the temperatures at the end of each row of racks. Also, you should take the temperature at the top of the rack in the center of each row.

As you take measurements, record the location, temp, date, and time. You’ll need this information later for comparison.

Now, measure the power draw of your cooling unit in kilowatts. Typically, you’ll find a dedicated panel for this measurement on most units. You could also use a separate monitoring system to take this measurement.

You’ll also need to measure the room’s sensible cooling load. You’ll need to measure the airflow volume for each cooling unit for this task. Also, you’ll need to record the supply and return temperatures for each active unit.

Getting to the Math

You can determine a reasonable sensible capacity, in kilowatts, for each operating unit using the following formula:

Q sensible (kW) = 0.316*CFM*(Return Temperature[°F] – Supply Temperature[°F])/1000
[Q sensible (kW) = 1.21*CMH*(Return Temperature[°C] – Supply Temperature[°C])/3600]

Now, you can compare the cooling load to the IT load to create a point of reference.

Next, you’ll make use of the airflow and return air temperature measurements. You’ll need to contact your equipment vendor for the sensible capacity of each unit in kilowatts. Then, total the sensible capacity for the units currently in operation. This is about the simplest calculation you’ll find. If you prefer, however, you can find much more complex methods of calculation online.

Next, take the overall room operating sensible cooling capacity and the measured sensible cooling load, both in kilowatts. Divide the former by the latter to find the sensible operating cooling ratio. Now, you have a ratio to use as a benchmark to evaluate subsequent improvements.
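Putting the formula and the ratio together, the sketch below computes per-unit sensible cooling from airflow and return/supply temperatures, then the cooling-to-IT-load benchmark. The readings are placeholders.

```python
# Sketch implementing the formula above: per-unit sensible cooling from
# airflow and return/supply temperature difference, then the room's
# cooling-to-IT-load ratio used as a benchmark. Measurements are placeholders.

def q_sensible_kw(cfm: float, return_f: float, supply_f: float) -> float:
    """Sensible cooling delivered by one unit, in kW (imperial inputs)."""
    return 0.316 * cfm * (return_f - supply_f) / 1000

units = [
    # (airflow CFM, return °F, supply °F) -- hypothetical readings
    (12_000, 75.0, 58.0),
    (12_000, 74.0, 58.5),
]

room_cooling_kw = sum(q_sensible_kw(*u) for u in units)
it_load_kw = 95.0   # measured IT load, placeholder
print(f"Sensible cooling: {room_cooling_kw:.1f} kW")
print(f"Cooling-to-IT-load ratio: {room_cooling_kw / it_load_kw:.2f}")
```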

Still, it’s important to consult with IT engineers. They can help you determine the maximal allowable intake temperature that will not damage your IT equipment in a new environment. Using your collected data, you can create a work plan to establish your goals. You can also use the information to determine metrics that you’ll monitor to ensure that the cooling environment functions properly.

You’ll also want to develop a back-out plan just in case you have any problems along the way. Finally, you want to pinpoint the performance metrics that you’ll track. For instance, you might track inlet temperatures. Conversely, you may monitor power consumption or other metrics.

Data Center Cooling Best Practices

It can prove challenging to figure out where to start with upgrades for data center environmental control. A few data center cooling best practices can help in this regard. There are many variables that can affect the airflow in your data center, from the types of racks to the cable openings. By following airflow management best practices, however, you can avoid equipment failures. The following strategies can help boost your data center airflow management for improved efficiency:

  • Manage the cooling infrastructure
  • Block open spaces to prevent air bypass
  • Manage data center raised floors

What follows are details for these strategies.

Best Practice 1: Manage the Cooling Infrastructure

Data centers use a lot of electricity. For this reason, they need an intense cooling infrastructure to keep everything working correctly. To put this in perspective, according to the US Department of Commerce, the power densities of these facilities, measured in kilowatts (kW) per square foot (ft²) of building space, can be nearly 40 times higher than the power densities of commercial office buildings.

If you need to improve the airflow in your data center, you may want to consider changing the cooling infrastructure. For example, you may reduce the number of operating cooling units to meet the needed capacity. Alternatively, you might raise the temperature without going over your server intake air temperature maximum.

Best Practice 2: Block Open Spaces

It’s vital to close all open spaces under your racks. It’s also important to close open spaces in the vertical planes of your IT equipment intakes.

You must also close any open spaces in your server racks and rows. Spaces here can cause your airflow balance to get skewed.

Also, you’ll want to seal any spaces underneath and on the sides of cabinets as well as between mounting rails. You’ll also want to install rack grommets and blanking panels. In this way, you’ll ensure that there aren’t any unwanted gaps between your cabinets.

Best Practice 3: Manage Data Center Raised Floors

Also, you’ll want to monitor the open area of the horizontal plane of your raised floor. Openings in your raised floor can bypass airflow. This circumstance can also skew the airflow balance in your data center.

You’ll want to manage the perforated tile placement on your raised floor to avoid this problem. You must also seal cable openings with brushes and grommets. Finally, you’ll need to inspect the perimeter walls underneath the raised floor for partition penetrations or gaps.

Choosing a Data Center Cooling Design

There are a few emerging data center cooling methods in the computer room air conditioning (CRAC) space, such as data center water cooling. For example, you might want to consider advanced climate controls to manage airflow.

State-of-the-art data centers incorporate new ways to optimize the cooling infrastructure for greater efficiency. Now, you can enjoy precision data center environmental control with several technologies.

Usually, the data center cooling methods that you choose are driven by site conditions. An experienced consultant can help you to select the right data center cooling design.

Your Partner in Data Center Air Control

Now you know more about data center cooling best practices. What you need now is a well-qualified expert in data center cooling. Data Aire has more than 50 years of experience. We’ve helped firms find innovative answers for emerging demands.

At Data Aire, we’re a solutions-driven organization with a passion for creativity. Furthermore, we believe in working closely with our clients during the consultative process. We can give you access to extensive expertise and control logic. By partnering with us, you’ll enjoy world-class manufacturing capability recognized by leading international quality certifications.

Contact Data Aire today at (800) 347-2473 or connect with us online to learn more about our consultative approach to helping you choose the most appropriate environmental control system for your data center.

What’s driving Data Aire to provide more efficient and flexible precision cooling systems to the market?

Applications Engineering Manager, Dan McInnis, answers this and other important questions.

What is an HVAC economizer?

An HVAC economizer is a device used to reduce energy consumption. It typically works in concert with an air conditioner; together, this solution helps minimize power usage. During the cooler months of the year, in many locations, the outdoor ambient air is cooler than the air in the building. Economization is accomplished by taking advantage of that temperature difference between indoor and outdoor ambient conditions, rather than running compressors to provide the cooling.

What is the difference between an airside and a waterside economizer?

An Airside Economizer brings cool air from outdoors into a building and distributes it to the servers. Instead of being re-circulated and cooled, the exhaust air from the servers is simply directed outside. For data centers with water- or air-cooled chilled water plants, a Waterside Economizer uses the evaporative cooling capacity of a cooling tower to produce chilled water and can be used instead of the chiller during the winter months.

Why are HVAC economizer solutions more important than ever?

Given rising energy costs, HVAC economizer solutions have become a primary concern for the mechanical engineers and data center managers I speak with. They must consider energy availability, especially from urban utility providers, and think about how much money can be saved relative to how much energy is being consumed.

In addition, I’m frequently asked about changing state codes and requirements, which force engineers to examine their application designs to ensure they meet current standards. It’s become apparent from Data Aire’s interactions that our customers are seeking an efficient precision cooling system that can greatly reduce their total cost of ownership.

When are HVAC economizers effective and what should specifying engineers take note of when choosing between an airside or waterside economizer or a pumped refrigerant option?

During the cooler months of the year, in many locations, the outdoor ambient air is cooler than the air in the building. You can accomplish economization by taking advantage of that temperature difference between indoor and outdoor ambient conditions, rather than running compressors to provide the cooling. Airside economization can be accomplished directly by pulling that cool or dry air straight into the building, which is the simplest and most efficient option in many cases.

Waterside economization is an indirect method: it pulls cool water from a cooling tower or dry cooler that is cooled by outdoor air and runs that water through coils inside the HVAC units in the building. Pumped refrigerant also takes advantage of the temperature difference by running a low-pressure refrigerant pump rather than a compressor; the pump consumes less energy, although this approach is less efficient than waterside economization because it relies on refrigerant-based heat transfer rather than water.
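As a rough illustration of how these three options relate, the sketch below shows one hypothetical way a controller might pick a cooling mode from outdoor conditions. The thresholds are assumptions for illustration only, not a Data Aire control sequence.

```python
# Hypothetical mode selection: full airside economization when outdoor air is
# cold and dry enough, indirect (waterside or pumped refrigerant) economization
# when it is moderately cool, and mechanical cooling otherwise.
# All thresholds below are illustrative assumptions.

def select_cooling_mode(outdoor_db_f: float, outdoor_dewpoint_f: float,
                        supply_setpoint_f: float = 65.0) -> str:
    if outdoor_db_f <= supply_setpoint_f - 10 and outdoor_dewpoint_f <= 50.0:
        return "airside economizer"                           # bring outdoor air straight in
    if outdoor_db_f <= supply_setpoint_f:
        return "waterside or pumped refrigerant economizer"   # indirect free cooling
    return "compressor cooling (DX or chiller)"               # mechanical cooling


print(select_cooling_mode(outdoor_db_f=45.0, outdoor_dewpoint_f=35.0))
print(select_cooling_mode(outdoor_db_f=62.0, outdoor_dewpoint_f=55.0))
print(select_cooling_mode(outdoor_db_f=85.0, outdoor_dewpoint_f=60.0))
```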

What examples can you provide that show waterside economization to be efficient across different climate zones?

An example of the efficiency gained from a waterside economizer can be seen with Data Aire’s gForce Ultra, which provides full or partial economization for 68% of the year in a dry climate such as Phoenix, for 75% of the year in a humid climate such as Ashburn, Va., and for 98% of the year in a dry, subtropical climate such as Los Angeles.
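Figures like these are typically derived from hourly weather data. The sketch below shows the general arithmetic: count the hours of the year that fall below full- and partial-economization temperature thresholds. The thresholds and the synthetic data are assumptions for illustration, not gForce Ultra performance data.

```python
import random

# Back-of-the-envelope estimate of annual economization hours from hourly
# outdoor temperatures (e.g. a typical meteorological year data set).
# The thresholds and the synthetic temperatures are illustrative assumptions.

def economization_share(hourly_outdoor_f, full_threshold_f=55.0,
                        partial_threshold_f=70.0):
    """Return the fraction of hours eligible for full and partial economization."""
    full = sum(1 for t in hourly_outdoor_f if t <= full_threshold_f)
    partial = sum(1 for t in hourly_outdoor_f
                  if full_threshold_f < t <= partial_threshold_f)
    total = len(hourly_outdoor_f)
    return full / total, partial / total


# Synthetic stand-in for 8,760 hourly readings from a weather file.
random.seed(0)
hours = [random.gauss(60, 12) for _ in range(8760)]
full_pct, partial_pct = economization_share(hours)
print(f"Full economization:    {full_pct:.0%} of the year")
print(f"Partial economization: {partial_pct:.0%} of the year")
```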

What new or existing requirements are affecting economization considerations?

Industry standards are under continuous maintenance, with numerous energy-savings measures introduced regularly. ASHRAE Standard 90.1 outlines economizer requirements for new buildings, additions to existing buildings and alterations to HVAC systems in existing buildings. For each cooling system, an airside economizer or fluid economizer is required; exceptions to this are outlined in Standard 90.1. When airside economizers are in place, they must provide up to 100% of the supply air as outdoor air for cooling. Fluid economizers must be able to provide 100% of the cooling load when outdoor conditions are below a specific range.

Other notable changes include updated climate zone classifications from ASHRAE 169 and mandatory requirements for equipment replacements or alterations, which now include economization, integrated economizer control and fault detection in direct expansion equipment.

Another important standard is California’s Title 24 energy standard, which has additional code compliance requirements for both airside and waterside economizers. In addition to standards, numerous technical committees provide recommendations that benefit performance. ASHRAE TC 9.9, for example, publishes guidelines with updated envelopes for temperature and humidity class ratings, based on improved equipment ratings.

As Applications Engineering Manager at Data Aire, Dan oversees a team that reviews and modifies HVAC designs submitted to the company by specifying engineers. In addition, he approves, releases and manages CAD drawings for mechanical, refrigeration and piping projects.

When you’re looking for the best precision cooling equipment for your facility’s environmental control infrastructure, you’ll quickly find that it comes in two main categories: direct expansion (DX) systems and chilled water (CHW) air conditioning systems.

The decision regarding which cooling source is better for a data center is typically driven by job site conditions. However, selecting the right HVAC system for your mission critical facility can be a challenging process shaped by many factors.

While the DX system is the type of air conditioning most commonly used in residential and small commercial buildings, it is also selected to control data center environments. Read on to learn the basics of choosing DX units for your environmental control.

What’s the Difference Between DX Units and Chilled Water Units?

The most noteworthy difference between the two systems is that DX units cool air using refrigerant, while CHW units cool air using chilled water.

A DX unit uses refrigerant-based cooling and cools indoor air with a condensed refrigerant liquid. Direct Expansion means that the refrigerant expands to produce the cooling effect in a coil that is in direct contact with the conditioned air that will be delivered to the space. The DX unit uses a refrigerant vapor expansion and compression cycle to cool the air drawn in through the return and deliver it back to the space that needs cooling through the supply plenum.
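On the air side, the cooling a DX coil delivers can be estimated with the standard sensible-heat relationship for air at roughly standard conditions, Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F). The airflow and temperatures in this sketch are illustrative assumptions, not a sizing recommendation.

```python
# Rough air-side sizing sketch for a DX coil using the standard sensible-heat
# relationship Q (BTU/hr) ≈ 1.08 × CFM × ΔT (°F) for air at standard conditions.
# The airflow and temperature values are illustrative assumptions.

def sensible_cooling_btuh(cfm: float, return_temp_f: float, supply_temp_f: float) -> float:
    """Sensible cooling delivered across the coil, in BTU/hr."""
    return 1.08 * cfm * (return_temp_f - supply_temp_f)


cfm = 8000.0                       # airflow across the coil
return_f, supply_f = 95.0, 75.0    # hot-aisle return and supply air temperatures
q_btuh = sensible_cooling_btuh(cfm, return_f, supply_f)
print(f"Sensible cooling: {q_btuh:,.0f} BTU/hr "
      f"(~{q_btuh / 12000:.1f} tons, ~{q_btuh / 3412:.0f} kW)")
```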

This central air conditioning system comes in either a split-system or a packaged unit. In a split system the components are separated with the evaporator coil located in an indoor cabinet and condenser coil located in an outdoor cabinet. The compressor can either be in the indoor or outdoor cabinet. A packaged unit has the entire cooling system self-contained in one unit, with the evaporator coil, condenser, and compressor all located in one cabinet. This allows for flexibility in the installation since the unit can be either outside or indoors (depending on system specifications) without too large of a footprint.

The Benefits of a DX System

Broad Spectrum of Applications

DX systems offer a high degree of flexibility, providing precision cooling at varying load conditions. The system can be located inside or outside the building, and it can be expanded to adapt to new building requirements or sizes. Individual sections can be operated without running the entire system in the building: the DX valve can reduce or stop the flow of refrigerant to each indoor unit, which makes it possible to control each room independently. DX systems may also occupy less space than other cooling systems, as is the case with InRow systems.

If there are large air conditioning loads, multiple units can be installed. Where the heat load is smaller, one of the units can be shut down and the other can run at full load to accommodate varying load conditions. gForce Ultra is an example of a system with variable speed technology, offering greater capacity modulation. At any given time, the Ultra allows users to ramp down the unit’s energy consumption to precisely meet a facility’s load demand.
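The energy case for variable speed technology follows from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed. The sketch below uses generic numbers to show the effect; it is not gForce Ultra performance data.

```python
# Fan affinity law illustration: running fans at part speed to match a reduced
# load cuts fan power roughly with the cube of the speed fraction.
# The load points are generic examples, not measured product data.

def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate fan power draw as a fraction of full power at part speed."""
    return speed_fraction ** 3


for airflow in (1.0, 0.8, 0.6, 0.4):
    print(f"{airflow:.0%} airflow -> ~{fan_power_fraction(airflow):.0%} of full fan power")
# 80% airflow needs only ~51% of full fan power; 60% airflow needs ~22%.
```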

Installation Costs

DX units (with their condensers) are complete systems that do not rely on other equipment such as cooling towers and condenser water pumps, so they come with lower installation costs. Chillers use external cooling towers to transfer heat to the atmosphere; these structures cost more to build and occupy valuable real estate, which adds to the expense. The extra parts and equipment in water-cooled chillers also make installation more complicated, which can mean higher upfront costs and higher labor costs. CHW units also require a separate mechanical room to house the chiller and ensure it functions properly with its cooling tower and other components.

Good Relative Humidity Control

Dehumidification is very manageable with DX units; with low refrigerant temperatures, you can pull moisture out of the air easily. An increase in wet-bulb temperature corresponds with higher operating costs as well as lower comfort levels due to the higher relative humidity. In climates with high prevailing humidity, air-cooled DX systems are good at extracting moisture from the air.
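Dehumidification occurs because the coil surface runs below the air’s dew point, so moisture condenses out of the airstream. The sketch below estimates dew point from dry-bulb temperature and relative humidity using the widely published Magnus approximation; the example values are illustrative.

```python
import math

# Estimate dew point from dry-bulb temperature and relative humidity using the
# Magnus approximation. A cooling coil running below this temperature will
# condense moisture and dehumidify the space. Example values are illustrative.

def dew_point_c(temp_c: float, rh_percent: float) -> float:
    a, b = 17.62, 243.12  # Magnus coefficients for water vapor over liquid water
    gamma = math.log(rh_percent / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)


# 24 °C (~75 °F) return air at 50% RH condenses on surfaces below about 13 °C.
print(f"Dew point: {dew_point_c(24.0, 50.0):.1f} °C")
```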

As energy savings becomes an increasingly important issue in data centers, it’s important to make an informed decision when selecting your environmental control system. At Data Aire, we manufacture the widest variety of computer room air conditioners and air handlers to meet the demanding challenges of today’s most mission critical environments. Whether you need a comprehensive cooling system or need to upgrade your current equipment, we carry an extensive catalog of solutions to meet your thermal control needs.