Download our whitepaper to learn about the Department of Energy's (DOE) current standards for the efficiency ratings of a CRAC.

If you’ve ever done anything even remotely related to HVAC, you’ve probably encountered ASHRAE at some point. The American Society of Heating, Refrigerating and Air-Conditioning Engineers is a widely influential organization that sets all sorts of industry guidelines. Though you don’t technically have to follow ASHRAE standards, doing so can make your systems a lot more effective and energy efficient. This guide will cover all the basics so that you can make sure your data centers get appropriate cooling.

What Are the ASHRAE Equipment Classes?

One of the key parts of ASHRAE Data Center Cooling Standards is the equipment classes. All basic IT equipment is divided into various classes based on what the equipment is and how it should run. If you’ve encountered ASHRAE standards before, you may already know a little about these classes. However, they have been updated recently, so it’s a good idea to go over them again, just in case. These classes are defined in ASHRAE TC 9.9.

  • A1: This class contains enterprise servers and storage products. A1 equipment requires the strictest level of environmental control.
  • A2: A2 equipment is general volume servers, storage products, personal computers, and workstations.
  • A3: A3 is fairly similar to the A2 class, containing a lot of personal computers, private workstations, and volume servers. However, A3 equipment can withstand a far broader range of temperatures.
  • A4: This has the broadest range of allowable temperatures. It applies to certain types of IT equipment like personal computers, storage products, workstations, and volume servers.[1]

Recommended Temperature and Humidity for ASHRAE Classes

The primary purpose of ASHRAE classes is to define the operating conditions each type of equipment needs. Once you use ASHRAE resources to find the right class for a specific product, you just need to ensure the server room climate meets those needs.

First of all, the server room’s overall temperature needs to meet ASHRAE standards for its class. ASHRAE always recommends keeping equipment between 18°C and 27°C (64.4°F and 80.6°F) when possible. However, each class has a much broader allowable operating range.[1] These guidelines are listed below, along with a quick way to check a reading against them:

  • A1: Operating temperatures should be between 15°C (59°F) and 32°C (89.6°F).
  • A2: Operating temperatures should be between 10°C (50°F) and 35°C (95°F).
  • A3: Operating temperatures should be between 5°C (41°F) and 40°C (104°F).
  • A4: Operating temperatures should be between 5°C (41°F) and 45°C (113°F).[1]
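To make these numbers easier to work with, here is a minimal Python sketch (an illustration, not an official ASHRAE tool) that checks a measured server inlet temperature against the recommended and allowable envelopes listed above:

# Check a measured server inlet temperature against the ranges listed above.
# All values are in degrees Celsius.
ALLOWABLE_C = {
    "A1": (15.0, 32.0),
    "A2": (10.0, 35.0),
    "A3": (5.0, 40.0),
    "A4": (5.0, 45.0),
}
RECOMMENDED_C = (18.0, 27.0)  # ASHRAE recommended envelope for all classes

def check_inlet_temp(temp_c: float, equipment_class: str) -> str:
    """Classify an inlet temperature as recommended, allowable, or out of range."""
    low, high = ALLOWABLE_C[equipment_class]
    if RECOMMENDED_C[0] <= temp_c <= RECOMMENDED_C[1]:
        return "within recommended range"
    if low <= temp_c <= high:
        return "within allowable range, but outside recommended range"
    return "outside allowable range"

print(check_inlet_temp(29.5, "A2"))  # within allowable range, but outside recommended range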

You also need to pay close attention to humidity. Humidity is a little more complex to measure than temperature. Technicians will need to look at both dew point, the temperature at which the air becomes saturated, and relative humidity, the percentage of saturation at a given temperature.[2] Humidity standards for ASHRAE classes are listed below, followed by a quick way to estimate dew point from a temperature and relative humidity reading:

  • A1: Maximum dew point should be no more than 17°C (62.6°F). Relative humidity should be between 20% and 80%.
  • A2: Maximum dew point should be no more than 21°C (69.8°F). Relative humidity should be between 20% and 80%.
  • A3: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 85%.
  • A4: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 90%.[1]
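If you only have a dry-bulb temperature and a relative humidity reading, you can estimate the dew point with the Magnus approximation and compare it against the limits above. This is an illustrative sketch; the coefficients are a commonly used parameter set, not anything published by ASHRAE:

import math

# Magnus approximation coefficients (a common parameter set; an assumption,
# not an ASHRAE-published method). Valid roughly from 0°C to 60°C.
MAGNUS_A, MAGNUS_B = 17.62, 243.12

def dew_point_c(dry_bulb_c: float, rh_percent: float) -> float:
    """Estimate dew point (°C) from dry-bulb temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + (MAGNUS_A * dry_bulb_c) / (MAGNUS_B + dry_bulb_c)
    return (MAGNUS_B * gamma) / (MAGNUS_A - gamma)

# Humidity limits from the list above: (max dew point °C, min RH %, max RH %)
HUMIDITY_LIMITS = {
    "A1": (17.0, 20.0, 80.0),
    "A2": (21.0, 20.0, 80.0),
    "A3": (24.0, 8.0, 85.0),
    "A4": (24.0, 8.0, 90.0),
}

def humidity_ok(dry_bulb_c: float, rh_percent: float, equipment_class: str) -> bool:
    max_dp, rh_min, rh_max = HUMIDITY_LIMITS[equipment_class]
    return rh_min <= rh_percent <= rh_max and dew_point_c(dry_bulb_c, rh_percent) <= max_dp

print(round(dew_point_c(24.0, 60.0), 1))  # roughly 15.8°C
print(humidity_ok(24.0, 60.0, "A1"))      # True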

Tips for Designing Rooms to Meet ASHRAE Data Center Cooling Standards

As you can see, ASHRAE guidelines are fairly broad. Just about any quality precision cooling system can easily achieve ASHRAE standards in a data center. However, a good design should do more than just consistently hit a temperature range. Planning the right design carefully can help reduce energy usage and make it easier to work in the data center. There are all sorts of factors you will need to consider.

Since most companies also want to save energy, it can be tempting to design a cooling system that operates toward the maximum allowable ASHRAE guidelines. However, higher operating temperatures can end up shortening equipment’s life span and causing inefficiently operated technology to use more power.[3] Carefully analyzing these costs can help companies find the right temperature range for their system.

Once you have a desired temperature set, it’s time to start looking at some cooling products. CRAC and CRAH units are always a reliable and effective option for data centers of all sizes. Another increasingly popular approach is a fluid cooler system that uses fluid to disperse heat away from high temperature systems. Many companies in cooler climates are also switching to environmental economizer cooling systems that pull in cold air from the outdoors.[3]

Much of data center design focuses on arranging HVAC products in a way that provides extra efficiency. Setting up hot and cold aisles can be a simple and beneficial technique. This involves placing server aisles back-to-back so the hot air that vents out the back flows in a single stream to the exit vent. You may also want to consider a raised floor configuration, where cold air enters through a floor cooling unit. This employs heat’s tendency to rise, so cooling air is pulled throughout the room.[4] By carefully designing airflow and product placement, you can achieve ASHRAE standards while improving efficiency.

Data Aire Is Here to Help

If you have any questions about following ASHRAE Data Center Cooling Standards, turn to the experts! At Data Aire, all of our technicians are fully trained in the latest ASHRAE standards. We are happy to explain the standards to you in depth and help you meet these standards for your data room. Our precision cooling solutions provide both advanced environmental control and efficient energy usage.


References:

[1] https://www.chiltrix.com/documents/HP-ASHRAE.pdf
[2] https://www.chicagotribune.com/weather/ct-wea-0907-asktom-20160906-column.html
[3] https://www.ibm.com/downloads/cas/1Q94RPGE
[4] https://www.simscale.com/blog/2018/02/data-center-cooling-ashrae-90-4/


It’s vital to keep your data center environment optimal to promote peak performance.

Data center cooling is a $20 billion industry. Cooling is the highest operational cost aside from the ITE load itself. It’s also the most important maintenance feature.

There are a few data center cooling best practices that can keep your data center humming along smoothly. These practices can help you to improve the efficiency of your data center cooling system. They can also help you to reduce costs.

It’s important to execute any changes to your data center cooling system carefully. For this reason, it’s vital to work with an experienced engineer before making any changes in a live environment.

To learn more about data center cooling best practices, continue reading.

The State of Data Center Environmental Control

Today, data center environmental control is one of the most widely discussed topics in the IT space. There’s also a growing gap between older data centers and new hyperscale facilities. Regardless of age or scale, however, power utilization and efficiency are critical in any data center.

It’s well known that data centers are among the largest consumers of electricity in the world. Today, data centers use an estimated 1% to 1.5% of all the world’s energy. What’s more, energy usage will only increase as more innovations emerge. These innovations include:

  • Artificial intelligence
  • Cloud services
  • Edge computing
  • IoT

Furthermore, these items represent only a handful of emerging tech.

Over time, the efficiency of technology improves. However, those gains are offset by the never-ending demand for increased computing and storage space. Firms need data centers to store information that enables them to satisfy consumer and business demands.

Accordingly, data center power density needs will increase every year. Currently, the average rack power density is about 7 kW. Some racks have a density of as much as 15 kW to 16 kW. However, high-performance computing typically demands 40 kW to 50 kW per rack.

These numbers are driving data centers to source the most energy efficient cooling systems available.

What Is the Recommended Temperature for a Data Center?

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) offers an answer to this question. ASHRAE suggests server inlet temperatures between 64.4°F and 80.6°F (18°C and 27°C). Furthermore, the society recommends a relative humidity between 20% and 80%.

The Uptime Institute, however, has a different opinion.

The Institute recommends an upper temperature limit of 77°F.

However, many data centers run much cooler, especially older ones. IT workers prefer to err on the side of caution to avoid overheating equipment.

Data Center Cooling Calculations

It’s important to understand current conditions before making your data center cooling calculations. For example, you’ll need to assess the current IT load in kilowatts. You’ll also need to measure the intake temperature across your data center. This measurement should include any hotspots.

At a minimum, you want to record the temperature at mid-height. You’ll also want to record the temperatures at the end of each row of racks. Also, you should take the temperature at the top of the rack in the center of each row.

As you take measurements, record the location, temp, date, and time. You’ll need this information later for comparison.
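If it helps, you can keep those readings in a simple structured form. The Python sketch below is just one possible convention; the locations, times, and temperatures are illustrative rather than data from a real site:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class IntakeReading:
    """One temperature survey point taken during the baseline walk-through."""
    location: str        # e.g., "Row 3, end of row, mid-height"
    temp_f: float        # measured intake temperature, °F
    taken_at: datetime   # date and time of the reading

# Example readings; all values are placeholders.
baseline = [
    IntakeReading("Row 1, end of row, mid-height", 71.2, datetime(2024, 5, 6, 9, 15)),
    IntakeReading("Row 1, center rack, top", 78.4, datetime(2024, 5, 6, 9, 18)),
]

hotspots = [r for r in baseline if r.temp_f > 80.6]  # above the recommended maximum
print(f"{len(hotspots)} hotspot(s) found")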

Now, measure the power draw of your cooling unit in kilowatts. Typically, you’ll find a dedicated panel for this measurement on most units. You could also use a separate monitoring system to take this measurement.

You’ll also need to measure the room’s sensible cooling load. You’ll need to measure the airflow volume for each cooling unit for this task. Also, you’ll need to record the supply and return temperatures for each active unit.

Getting to the Math

You can determine a reasonable sensible capacity, in kilowatts, for each operating unit using the following formula:

Q sensible (kW) = 0.316*CFM*(Return Temperature[°F] – Supply Temperature[°F])/1000
[Q sensible (kW) = 1.21*CMH*(Return Temperature[°C] – Supply Temperature[°C])/3600]
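As a quick sanity check, here is the same formula as a small Python helper. The airflow and temperature values in the example are placeholders, not measurements from a real site:

def sensible_capacity_kw(cfm: float, return_temp_f: float, supply_temp_f: float) -> float:
    """Sensible cooling output of one unit, per the imperial formula above."""
    return 0.316 * cfm * (return_temp_f - supply_temp_f) / 1000.0

def sensible_capacity_kw_si(cmh: float, return_temp_c: float, supply_temp_c: float) -> float:
    """Same calculation using metric airflow (m³/h) and °C, per the bracketed formula."""
    return 1.21 * cmh * (return_temp_c - supply_temp_c) / 3600.0

# Illustrative numbers: a unit moving 12,000 CFM with a 75°F return and 57°F supply.
print(round(sensible_capacity_kw(12_000, 75.0, 57.0), 1))  # about 68.3 kW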

Now, you can compare the cooling load to the IT load to create a point of reference.

Next, you’ll make use of the airflow and return air temperature measurements. You’ll need to contact your equipment vendor for the rated sensible capacity of each unit in kilowatts. Now, total the sensible capacity of the units currently in operation. This is about the simplest calculation you’ll find; if you prefer, you can find much more complex calculation methods online.

Next, take the room’s total operating sensible cooling capacity in kilowatts and the measured IT load in kilowatts. Divide the former by the latter to find the sensible operating cooling ratio. Now you have a benchmark to evaluate subsequent improvements against.
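Continuing the same sketch, the benchmark is just the total of the operating units’ sensible capacities divided by the measured IT load; all figures here are placeholders:

# Total the operating units (capacities from the vendor or from the formula above),
# then compare against the measured IT load. All figures are illustrative.
unit_capacities_kw = [68.3, 68.3, 61.5]   # sensible kW for each running cooling unit
it_load_kw = 140.0                        # measured IT load

cooling_to_it_ratio = sum(unit_capacities_kw) / it_load_kw
print(round(cooling_to_it_ratio, 2))      # about 1.42, the benchmark for later comparisons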

Still, it’s important to consult with IT engineers. They can help you determine the maximum allowable intake temperature that will not damage your IT equipment in the new environment. Using your collected data, you can create a work plan to establish your goals. You can also use the information to determine the metrics you’ll monitor to ensure that the cooling environment functions properly.

You’ll also want to develop a back-out plan just in case you have any problems along the way. Finally, you want to pinpoint the performance metrics that you’ll track. For instance, you might track inlet temperatures. Conversely, you may monitor power consumption or other metrics.

Data Center Cooling Best Practices

It can prove challenging to figure out where to start with upgrades for data center environmental control. A few data center cooling best practices can help in this regard. There are many variables that can affect the airflow in your data center, from the types of data racks to the cable openings. By following airflow management best practices, however, you can avoid equipment failures. The following strategies can help boost your data center airflow management for improved efficiency:

  • Manage the cooling infrastructure
  • Block open spaces to prevent air bypass
  • Manage data center raised floors

What follows are details for these strategies.

Best Practice 1: Manage the Cooling Infrastructure

Data centers use a lot of electricity. For this reason, they need an intense cooling infrastructure to keep everything working correctly. To put this in perspective, according to the US Department of Commerce, the power densities of these facilities, measured in kilowatts (kW) per square foot (ft²) of building space, can be nearly 40 times higher than the power densities of commercial office buildings.

If you need to improve the airflow in your data center, you may want to consider changing the cooling infrastructure. For example, you may reduce the number of operating cooling units to meet the needed capacity. Alternatively, you might raise the temperature without going over your server intake air temperature maximum.

Best Practice 2: Block Open Spaces

It’s vital to close all open spaces under your racks. It’s also important to close open spaces in the vertical planes of your IT equipment intakes.

You must also close any open spaces in your server racks and rows. Spaces here can cause your airflow balance to get skewed.

Also, you’ll want to seal any spaces underneath and on the sides of cabinets as well as between mounting rails. You’ll also want to install rack grommets and blanking panels. In this way, you’ll ensure that there aren’t any unwanted gaps between your cabinets.

Best Practice 3: Manage Data Center Raised Floors

Also, you’ll want to monitor the open area of the horizontal plane of your raised floor. Openings in your raised floor can allow airflow to bypass the IT equipment, which also skews the airflow balance in your data center.

You’ll want to manage the perforated tile placement on your raised floor to avoid this problem. You must also seal cable openings with brushes and grommets. Finally, you’ll need to inspect the perimeter walls underneath the raised floor for partition penetrations or gaps.

Choosing a Data Center Cooling Design

There are a few emerging data center cooling methods in the computer room air conditioning (CRAC) space, such as data center water cooling. For example, you might want to consider advanced climate controls to manage airflow.

State-of-the-art data centers incorporate new ways to optimize the cooling infrastructure for greater efficiency, and several technologies now make precision data center environmental control possible.

Usually, the data center cooling methods that you choose are driven by site conditions. An experienced consultant can help you to select the right data center cooling design.

Your Partner in Data Center Air Control

Now you know more about data center cooling best practices. What you need now is a well-qualified expert in data center cooling. Data Aire has more than 50 years of experience. We’ve helped firms find innovative answers for emerging demands.

At Data Aire, we’re a solutions-driven organization with a passion for creativity. Furthermore, we believe in working closely with our clients during the consultative process. We can give you access to extensive expertise and control logic. By partnering with us, you’ll enjoy world-class manufacturing capability recognized by leading international quality certifications.

Contact Data Aire today at (800) 347-2473 or connect with us online to learn more about our consultative approach to helping you choose the most appropriate environmental control system for your data center.

Data center numbers are growing – but is your efficiency falling?

The latest AFCOM State of the Data Center report, post-COVID, indicates strong growth in the data center space. This includes cloud, edge, and even colocation space. However, it also noted that many are looking even deeper into efficiency as demand for data, space, and power continues to grow. This special FastChat looks at the very latest data center trends and outlines some of the top data center efficiency designs.

Specifically, you’ll learn about:

  • The latest trends just released from the AFCOM State of the Data Center Report surrounding data center growth and efficiency
  • The challenges around compromises and cooling that come with speed and scale
  • Top 3 List: Know what to ask your vendor
  • Top 3 List: Understand which technologies are helping evolve the efficiency of our industry

Learn more about economization solutions from Data Aire. And discover how we can help you scale your data center at your desired pace.

Read our latest guide, which highlights proven ways to conserve energy in your data center.

Adaptive reuse began as a way to convert classic buildings, either for their charm or to support the economic preservation of historic buildings. Today, adaptive reuse has a much more pragmatic purpose – enhancing our physical and digital infrastructure. As data creation and demand explodes, owners and developers are finding clever ways to adapt existing structures into data centers.

Join data center leaders and industry innovators as they explore adaptive reuse, what it means for the industry today, and its potential for data centers in the future.

Executive Roundtable

Moderator:
Eric Jensen, VP and GM, Data Aire

Panelists:
Craig Deering, Director of Design and Construction, CyrusOne
Mitchell Fonseca, Senior VP and GM of Data Center Services, Cyxtera Technologies
Michael Silla, Senior VP, Design Construction, Flexential

Eric Jensen: Adaptive reuse is of course not a new topic. It’s prevalent in commercial real estate, as well as residential. It also happens in the data center sector. Some of the ideas around adaptive reuse as it relates specifically to data center spaces are the market drivers. Maybe that’s the geography relative to the end user or the scale of the facility. Maybe it’s the types of facilities that are being considered for adaptive reuse. What makes a good candidate site versus what doesn’t may dictate how we prioritize among the variety of drivers. And then of course, how we execute on adapting a site for reuse as a data center is a consideration.

Let’s start with you Mitch. You mentioned that you were a believer in the future scarcity of viable data center space. So I’m curious, what kinds of data center space do you expect to become scarce, and then what options do you foresee builders considering to resolve that?

What Buildings Are Good Candidates for Adaptive Reuse?

Mitchell Fonseca: As data centers have become larger and more prevalent within different markets, the best space becomes more scarce, particularly in urban areas where we see a lot of legacy telco hotels. To build additional data centers within that space becomes complicated. Finding the right facility that has the right power and the right connectivity is a challenge. So, adaptive reuse becomes more critical as you look at the potential data center footprint, or pieces of land, buildings, warehouses, and office buildings that could potentially be retrofitted. You need to become flexible with what your designs look like. It’s definitely something that’s top of mind whenever we go out to look at properties. Reuse is more prevalent in urban areas. And from my perspective, a lot of the warehouse types of spaces tend to be more flexible when it comes to data center retrofits. Office buildings are obviously a lot more challenging. When it comes down to it, one of the biggest challenges we run into is clear heights. So you need the right building that has the right clear heights to be able to support your data center environment.

Eric Jensen: That’s one side of serving the end user. But I think Craig, you and CyrusOne maybe come at it from a little bit of a different angle as far as what targeted end user you’re trying to serve. What is the availability looking like for the future of data center spaces and how we might need to adapt existing spaces into data centers?

Craig Deering: At CyrusOne, I have converted two warehouse spaces based on end user requirements, and both of those were mainly driven by speed to market. In that, I could avoid a site plan application and I was able to move very quickly to building permits. But going back in time it was actually quite common (particularly based on scale) to adapt office space. Going back to some of my early client base, like Above Net and some of the academic, medical and high performance computing environments that have been successively converted to data centers…it’s almost universally been office space. I see more of that happening in the marketplace as people start talking about workloads and workloads needing to move to the edge, primarily to manage the end user experience and also the communication costs. I see it as a growing market and people looking towards the past to see how some of those problems — like clear height, generator emissions and other things were solved before we started building these massive hyperscale centers as greenfields.

Why is Manufacturing Speed-to-Market an Important Driver?

Eric Jensen: So Mike, I’m wondering…coming at it from your experience, what are you seeing as the driver for specific targeted audiences who need data center space? Whether it be enterprise or co-location or wholesale, what are you seeing as sweet spots and how do those market dynamics drive the need for reuse of existing space that isn’t currently data center?

Michael Silla: I think speed to market is a big driver. The end user is challenged with capacity planning, right? Technology is growing at such a fast pace and that’s the reason why they’re coming to us and the third party markets to help solve their problems. I think Mitch brought up available properties, scarcity of available properties. Where there’s available power today, plus what the utility could scale to. Going back 10, 15 years, or a little longer, everybody left New York and came to New Jersey. PSE&G was right there to build at capacity for a lot of the data centers that entered the region.
And we watched that follow suit around the country. Power companies know that we’re not going to take all 36 or 72 megawatts on day one. We’re all in the same neighborhoods and they’re rationing out the power and they’re building as they go. It’s a business model, too and the reality is we’re not going to find greenfields. So we have to take a look at adaptive reuse of brownfields. And again, we’re looking to find a viable facility with floor load, ceiling heights — your height slab to slab; what’s the minimum requirement that you can make work. We tend to focus on what is our design of record. And how do we adapt that? Whether it’s greenfield, brownfield, multi-story, or campus environments, we need to build that flexibility into the process, having a standard that we can adapt as we go out.

We all struggle when the broker sits across from us and says we’re going to start on your property search, describe what you need. We have to try and give them some parameters to bring us viable solutions without filling the funnel so big. We look at warehouses and distribution centers to be adapted. We did a couple of chip fab facilities that were able to convert into a data center relatively easily. It’s going to be faster to pull permits on an existing facility that’s there today. Again, speed to market is important.

Greenfields vs Brownfields – Data Center Development Strategies

Craig Deering: When I’m building greenfield, I have to cover a spectrum of expectations and requirements. If someone can really focus their expectations and have a good understanding of what their IT kit is going to look like, they can take advantage of some well-located properties. To the point when we’re doing greenfields now, we’re looking at a range of densities to cover a range of customers. There’s still a good amount of workload in that 100 watt to 150 watt per square foot density. And that leads to floor loadings, where you can consider some office properties — with the right strategy. So having a strong cleaver about what your core technical requirements are opens up a lot of possibilities in the adaptive reuse market.

Michael Silla: The modularization of our MEP infrastructure makes a big difference as you look at a brownfield and the surrounding property – looking at where you can place that on the outer perimeter of the building. That opens up opportunity to us, when you start to bring that type of infrastructure. Building plants inside of the building complicates it a little more as well.

Craig Deering: If you have a flex office building – a one story with a 40 by 40 structural grid and 16 foot clear, that’s the kind of criteria where you need a strong cleaver, and then you can focus on the relationship between location and proximity to your end users, your fiber resources, and your power resources. Then you can make a pretty quick decision to go forward. At that point the property is fully entitled and you’re just working with building permits, which certainly gets you down into the desired six to seven month delivery time frame.

What Are the Challenges of Converting Office Spaces to Data Centers?

Eric Jensen: So there’s a couple of really important considerations there. There’s the modularization side of things as it relates to design and selection of site, but then there’s also the type of facility. You’re talking right now about availability of office as a prospect, and right now there’s speculation in the real estate world that there’s going to be an increase in the availability of office space as a result of the [COVID-era] environment that we’re living in. So I’m curious, maybe we can get everybody on the panel to weigh in on office space as a consideration. What are the pros and cons? I think we’ve touched on some of them, but if either connectivity or power availability makes an existing office space seem like the best available option for you right now, how do you then overcome some of those concerns you had when originally thinking, ‘well, I’d much rather have a warehouse or an existing manufacturing facility?’

Mitchell Fonseca: I think office spaces are definitely environments that are doable. We see some of the biggest data centers in the world that are converted office spaces. I think office spaces are a lot more challenging because they normally don’t have the clear heights required to cool environments to the levels that we need. To Craig’s point earlier, if you’re looking at more of a retail environment, that’s usually going to be a lower density than a lot of the hyperscalers’ environments, and retail tends to fit better within that office space environment. It’s probably the model for more of an enterprise type of solution. If you have an enterprise that’s going to build out their own data center, I think it’s more doable within that realm. Once you start getting into high performance computing, whether it’s high frequency trading or the newer environments, some of that stuff is not really going to work well within an office environment. That’s where you start getting into pretty heavy modifications to the structure of the facility, where it probably wouldn’t make sense. It’s really about the use case. The workloads you’re targeting for that environment are going to dictate whether an office building will work or not.

Eric Jensen: That makes a lot of sense, but it sounds to me like potentially the targeted audience makes a difference for whether office space is truly viable for you. With Flexential in particular, Mike, you’re probably sitting in a bit of a unique position in that you’re starting to look at both sides of the middle of the road, as far as size of facility and targeted audience. What would tip the scales to prevent you from pulling the trigger on an existing office space?

Michael Silla: The look, the feel, the densities that we’re seeing today make office buildings more difficult to convert. In a previous role I looked at an office building in a dense area, not a lot of great opportunities. It just wasn’t going to work for a data center that focused on the mix of clients at the time.
The other challenge is that we need the slab heights for the design of record. We need that minimum height, and the reality is, I’ve heard somebody mention 13 feet, but that’s tough. We’re looking for a lot more height than that. Looking to the future in a second tier market where folks are trying to get close to the Edge and you’re in urban areas, it’s going to get tougher to find a property. So, depending upon the business case and the future, you would try and make that work. But right now…where we’re at today, that would be a tough one, trying to make the design of record work, trying to keep it standard.

But as time goes on and the data center market continues to grow, and properties become scarce…the edge comes about. So I think that’s the way we need to look at it.

You can’t rule anything out. And this is what I tell my team, if something doesn’t work today, okay, we park it over in the box here, but you never know when you’re going to go back in and revisit that. So you may wind up back to the future.

Eric Jensen: So really what you’re doing is you’re prioritizing the site over the facility itself, of course. You’re basing decisions on the primary drivers, geography, connectivity, power availability — things of that nature.

Michael Silla: If it’s the right price, it’s worth tearing down.

Modularity as a Component of Data Center Design

Eric Jensen: And so I’m curious, you had mentioned earlier that modularity is a component of the design. Can you touch on how you incorporate modularity and its place in adaptive reuse?

Michael Silla: When you look at modularity, it’s a term that’s widely used, and we’ve seen everything from fully modularized data centers to servers in a box. We like to modularize our components as much as possible. Consider a data center kind of as a product, like a vehicle coming down the assembly line. You need wheels, a steering wheel, a radio. And they’re all made in factories elsewhere and then shipped and bolted on. So if you think about our electrical and mechanical infrastructure as we have designs of record, and we have standard blocks of infrastructure, you can prepackage that equipment in a factory. Then it’s shipped as needed to a location.

And for all intents and purposes, it’s bolted on or assembled to the box being the data center. So your facility, your building is your data center and your infrastructure sits on the outside, or is skidded to the interior of the data center, but sits on the outer perimeter. Longevity is about looking at the life cycles of data centers; we’ve been through multiple generations, where the rush was to get product to market. You build it a certain way, and we’re finding that as you go back to do upgrades on those facilities, it’s a little tougher, like invasive open heart surgery. Whereas with our cooling units, if one is 750 kW, you can remove it. If you need a 1,200 kW unit, then you replace it with that, and it’s the same thing with your infrastructure.

It’s easier to adapt the facility long-term when your infrastructure is sitting on the perimeter of the facility versus trying to do open heart surgery inside. And, you know, we think about that when we’re in design today. And when you’re approaching your concept, you hear everything from ‘Hey, my operator’s going to go in there every morning. And how does he park his car, walk into the building and clear security, go to work? Sales brings prospects in, walks them through the facility. Eventually those prospects become clients. How do they go in and function? How do our equipment vendors come and do maintenance on the equipment? How does the fuel truck deliver fuel to the site?’ We’ve put a lot more thought over the past couple of years into the future of these facilities, because some of the facilities that we’ve built are limited to maybe retail because of the characteristics of the envelope and the ceiling heights and the floor heights. But as we’ve moved to the more dynamic data centers at the higher densities that we operate today, we have people still operating at low to medium density and extreme high density within the same environment. And so we have to put a lot more thought into that as we design the modularized components.

Eric Jensen: So future-proofing, of course, is the panacea. You’ve got to be able to see the future in order to do that. It’s certainly no easy task. But I think modularity has a place, whether you’re thinking about a containerized solution, or a power or mechanical skid of some kind. Craig, I’m wondering, is there also potential to use modularity to go vertical?

Craig Deering: Of course there is; we’re doing it. We are doing it with our designs in Europe, but those are greenfields. I have looked at parking garages in urban locations and solved that problem. I’ve also looked at three to five story suburban office buildings and in order to go vertical, we’ve gotten away from talking about modules. We talk about provisioning and provisioning towards end user density. When we laid out my last project, which was a warehouse conversion, it was on a provisioning range depending on whether it went enterprise or hyperscale — because you’re talking about a hundred watt per square foot swing, or even more between the two users. So on that site, depending on how the building sells and gets provisioned, I can go up to about 24 megawatts and 250 watts per square foot.

Based on our topology, I think we’re somewhere around 16 megawatts in 150 watts per square foot. Plug and play flexibility is key. We look at a building like a glider kit, and if you know what a glider kit is, you know it’s a car you buy that comes with no power train, and you put in whatever power train you want, and that’s sort of the concept that you can use with adaptive reuse. And it’s also great on scaling-in a user. We have a lot of high volume users, but they do still ramp in. One of the advantages we’ve had in doing adaptive reuse is that we can get an end user in very quickly, at very low cost for that initial deployment. And this is the advantage of not having a site plan to file in that time frame. Through a series of incremental building permits in adaptive reuse, we stage in all the capacity.

A very effective prototype when we’re doing adaptive reuse, whether it’s retail, warehouse, office, or a single story building…is if I have the right setback so that I can develop the yard space to put all of the chillers (we use air cooled chillers) and all of the generators on the ground. It’s a very effective delivery method, because I have this space around the building, as Mike says, just to stack the capacity on an as-needed basis.

Is Designing Data Centers to Support 5G Latency Important?

Eric Jensen: Do any of you have smaller urban data centers in design right now to support evolving 5G latency issues? Or are you already starting to build them?

Craig Deering: I’m not aware of anything we’re doing in response to 5G latency. We do have urban data centers, but some of those are legacy facilities, and there’s nothing in my region that I can speak to.

Mitchell Fonseca: We have a number of urban data centers, but we’re not really building or currently planning for an Edge use case that’s more specific to 5G.

Eric Jensen: Typically, by what factor does power demand increase when you convert an asset such as an office building to a data center, 5X, 10X or more?

Mitchell Fonseca: It’s usually significantly more than that. When we’ve had to convert buildings, we’re normally stripping out the entire power infrastructure and transformers and everything, and kind of rebuilding those from scratch. I don’t know if anybody else has a different experience, but it’s normally significantly more than 10X.

Craig Deering: So typically, if you’re picking up an office block, you’re going to be three to five megawatts on a service and you’re going up 20, 30, 40, 50, 60, depending on scale. The smallest facility I have is probably a 12 megawatt facility and we up the service from 2MVA to about 19MVA on day one to build that.

I would say it’s easily 10X on a small facility, and if it’s a good property and you’re using the existing facility as a core, you could be looking at 20 or 30X — depending on your ultimate development plan.

What is the Impact of Existing Power Supply or Infrastructure on Data Center Conversion?

Eric Jensen: How much does existing power supply or the building structure impact the possibility of the conversion to a data center? For example, let’s take a life science building that has a larger power supply versus an office building; does that matter much? Or if you need to be in a specific geography, are you just going to build it out?

Michael Silla: The big key is working with the power company to see what they can actually get you. Because at the end of the day, we’re selling power, right? If we’re doing our jobs well, we’re going to have excess space, but we’re going to run out of power first and that’s the game that we’re in.

Craig Deering: Yes, as far as that goes, a question we even ask when we’re looking at an existing facility is: what’s at the street? So that’s a question to the power company, and then, what’s at the nearest substation, because we’re typically looking five to 10 years out, or ramping into an ultimate power load at five to seven years. We want to get at least 20 to 30 megawatts on day one in order to build out the first section, and then you’re looking at 60 to 100 by ultimate. And most end users are now comfortable with sourcing from one substation. It’s the rare customer that is asking us for diverse substation feeds. Data centers don’t need to be the fortresses they used to be 25, 30 years ago, because the resiliency is now in the network and the information. That’s how it’s managed; it’s not in the facility.

Michael Silla: It’s rare that we have an RFI or client looking for that and when you start asking, well, why are you looking for this? They always point to the uptime and you say, well, that’s even been relaxed if you actually read it. It’s a matter of having that conversation. It’s just probably something that’s been on an RFP that’s been floating around for two decades.

Eric Jensen: I think the question centered around life sciences as an example. I think you have experience in converting chip fab, Mike. Is that right?

Michael Silla: Yes. Life sciences or other industrial spaces are definitely viable options, and there is power at the street with those facilities. But then again, when you start looking at facilities of that size, we’re going to want a 36 megawatt or larger future capacity in there.

Craig Deering: Let me just add one thing though, because it’s important that everybody understands it. If you’re adapting a building, I don’t care how big the power service is, you’re not keeping the switchboard. You’re not keeping any of that source material, because it just doesn’t work for a data center use; it’s not set up correctly. There’s never going to be a situation where you’re adapting an existing incoming service. You’re going to originate it back out at the street, and you’re going to be looking for a property that has a substation with a double ended connection to the transmission system. Those are the key things to look for.

How Is Airflow Management Key to Operating Data Centers?

Eric Jensen: As an example of power challenges…Data Aire saw a lot around the One Wilshire project we did in Los Angeles. Power utilization was the primary driver for modernizing and centralizing the whole power and cooling infrastructure there. Also partly because it had just kind of slowly evolved over the decades which is part of what you are going to wind up finding in any kind of legacy type of office space, if it’s of any size and of any age. For anyone reading, adaptive reuse was described as open heart surgery. That’s not a mischaracterization, but there’s also a lot of people who are alive today because of open heart surgery. We’ve seen plenty of folks who need to go into office or warehouse space that is strategically located on the smaller scales. So medium to medium/small types of facilities that have substantial considerations for airflow management. Airflow management becomes the number one thing that you have to be thinking about, which is really what the gentlemen here have been talking about with the importance of those clear-heights. Some additional thoughts…if you’re not going to get the clear heights, then it’s critical that you really pay attention to how you are managing your airflow. Are you going to get the delivery of cold air where you need it to go? And how are you routing everything, whether it’s piping or layout of the infrastructure relative to the ITE, etc.?

Michael Silla: And to add on…number one, make sure your operations team is in the room during design and CFD analysis — and not the perfect environment, but the type of environment that you’ll actually be operating in. Instead of focusing on a perfect environment, we all operate in imperfect environments. Doing that during the conceptual stage will help you because airflow management is the key to operating the data center. We will always be managing that airflow and that’s the key to success; very important facet for our operators.

Eric Jensen: I think it’s a mindset — thinking about it as part of a pre-commissioning activity. Run through a couple of what-if scenarios, what-if I didn’t have that? Or what-if that occurred?

Craig Deering: There’s a user called Power Loft and they had an interesting concept, which I’ve looked at adapting for office use, where all of the air handling equipment was on one floor and the data center was on the other floor, and they supplied everything from below. So if you’re dealing with low heights and you’re going to stay with an air based system, you can get very creative, particularly with the airflow designs a lot of people use now, like the fan wall design, if you start looking at using space creatively to move large bodies of air. Of course, if you are going with more of a liquid approach — there was just a post about the barge data centers — you can certainly use that cooling concept in an office where you really just make the decision that you’re going to do rear door heat exchangers universally, or in-row cooling.

And that opens up a lot of adaptive reuse opportunities. You can actually mix those with space cooling, which is something I used to do a lot 35 years ago to address a comprehensive cooling scenario. So find yourself a creative engineer, be laser focused on what your operating parameters are going to be, and I think you can go out there and find a lot of buildings in good locations at good prices that can work for your need.

Mitchell Fonseca: I would add that the biggest challenge you mentioned there is at what point does it not become economically feasible? We have amazing engineers in the data center world and some of the stuff that we can do is pretty mind blowing. The challenge is how much are you willing to spend to make that specific building usable? It’s always doable, right? It’s just, how much did it cost you? So there’s always a fine line between, is it economical, or is it doable? And I would say that in a lot of cases, when you start talking about clear heights is where you start getting into really unique cooling to make it work. It’s just, is it really worth it? There might be a specific reason why you have to stay in that building and that structure. And again, it is doable.

What’s driving Data Aire to provide more efficient and flexible precision cooling systems to the Data Center market?

Exploring the Growing Relationship – Prefab/Modular Data Centers and Precision Cooling

They say marriage is a partnership – one built on trust, flexibility and shared goals. If that’s the case, precision cooling manufacturers and prefab data center or power module designers are a perfect match.

Together, these companies look for ways to seamlessly marry their solutions to meet the goals of the end user. They look to current technology advancements to help guide their strategy recommendations, as well as rely on the tried and true solutions that have supported the data center industry for years.

Let’s start with why some companies choose prefab modular systems. Sometimes geography dictates, or owners’ varied data center strategy lends itself to prefabrication. End-users sometimes find it more convenient to drop in a modular data center because shifting highly complex mechanical projects from the construction site to a controlled production environment may be more cost efficient, safer and faster. So, in these instances, building owners seek manufacturers of modular systems who focus on using all available white space for rack capacity and then collaborate with precision cooling experts like Data Aire.

What’s Driving the Increased Trend Toward Prefab Modular Solutions?

One trend we see is the increase in 4G penetration and an upcoming 5G wave that is further motivating telecom vendors to invest in the modular data center market. And Hyperscalers are deploying large, multi-megawatt modular solutions while others are deploying a single cabinet (or smaller) at the cell tower for 5G. With the growing number of connected devices, the distribution of high-speed data must get closer and closer to those devices at The Edge. That challenge is well-suited for modular solutions. One such solution provider is Baselayer/IE.

Keep in mind, the build-to-suit trend for modular solutions shifts significant portions of the project scope to the factory environment while allowing the site construction scope to run in parallel. And the build plan dictates that precision cooling design and installation needs to be at the front end of the build cycle, so it’s important to source a manufacturer that can develop build-to-suit solutions with speed to market. As an example: “When Baselayer/IE  cut its 180-day build cycle down to a mere 90 days, it needed a partner who could make that transition with it. Data Aire has been able to build systems for our custom data centers faster than installation required,” according to Mark Walters, Director of Supply Chain and Logistics for IE.

More Industry Drivers

Speed to market is not the only driver for implementing prefab modular data centers. Other primary drivers are flexibility in design/capacity, scalability, standardization, IT equipment lifecycles, and trying to stay ahead of the exponential growth in technology. Prefabricated and modular solutions can be scaled up in size “as and when” necessary, which allows operators to stage Capex investment over time. This also avoids the risk of construction projects taking place in “live” data centers. In order to streamline and improve operational efficiency, many operators are looking to standardize their data center portfolios. Prefabricated and modular solutions offer operators a common platform across their portfolios.

It is difficult to build mission critical facilities for technology that has not yet been invented. Because tech life cycles can be as rapid as 18 months, an adaptable solution is a custom-designed, prefabricated, modular design. The growing use of internet services and connected devices due to AI, IoT, cloud services, etc. is accelerating the demand for smaller data centers at the edge, which is ideal for prefabricated and modular solutions.

Prefab/Modular Data Center Options

While there is a spectrum of prefab/modular solutions, one size or shape doesn’t fit all. ISO shipping containers can provide a readily available, cost effective shell; however, they have fixed designs and are space-constrained — potentially limiting the number of racks that can be deployed. Depending on the end use, ISO containers may also require significant modifications for proper environmental management.

Where ISO containers don’t meet the needs, purpose-built modules like those from Baselayer/IE, afford operators adequate space to maintain or swap equipment within the racks and manage the environment. These module-based solutions are more flexible and can be combined to deliver infinitely configurable open white space.

Scaling to Data Center Customers’ Needs

No matter the space in question, it’s important to sit down with customers and make them part of the planning equation — discussing their current and future density requirements as well as cooling strategies (whether chilled water, economizers or multiple CRAC units). It’s about partnering and understanding short and long-term goals — making sure to provide maintainable solutions for the end user.

And today, we’re living in an interesting time, when data centers (in the US and other parts of the world) are now considered essential businesses by governments. Being able to adapt technology to the ever-growing needs of data center owners is driving manufacturers to be more agile and develop scalable, built-to-suit solutions. The flexibility of design is imperative to provide the customer exactly what they need, whether for the space, the critical infrastructure, or the IT architecture. And when it comes to modular/prefab designs, prioritizing cooling strategies has become a crucial piece of the puzzle.


About Baselayer/IE

With more than 200 MWs of modular capacity currently deployed, paired with four engineering, manufacturing and testing facilities across the United States, Baselayer/IE Corporation is emerging as an industry leader. We design, engineer and manufacture turnkey modular solutions entirely in house. By controlling the entire process, we quickly adapt to the daily challenges inherent to large-scale construction projects and achieve our customers’ aggressive deadlines.

DX Cooling HVAC System

When you’re evaluating precision cooling equipment for the environmental control infrastructure of your facility, you will quickly find that it comes in two main categories: direct expansion (DX) systems and chilled water (CHW) air conditioning systems.

Typically, the decision regarding which cooling source is better for a data center is driven by the job site conditions. However, selecting the right HVAC system for your mission critical facility can be a challenging process driven by many factors.

While a DX system is the HVAC air conditioning unit most commonly used for residential buildings or small commercial buildings, it is also selected to control a data center’s environment. Read on to learn the basics of choosing DX units for your environmental control.

What’s the Difference Between DX Units and Chilled Water Units?

The immediate and most noteworthy difference between these two systems is that the DX units cool air using refrigerant and CHW units cool the air utilizing chilled water.

A DX unit uses refrigerant-based cooling and cools indoor air using a condensed refrigerant liquid. Direct expansion means that the refrigerant expands to produce the cooling effect in a coil that is in direct contact with the conditioned air that will be delivered to the space. The DX unit uses a refrigerant vapor compression and expansion cycle to cool air drawn in through the return and deliver it to the area that needs cooling through the supply plenum.

This central air conditioning system comes in either a split-system or a packaged unit. In a split system the components are separated with the evaporator located in an indoor cabinet and the compressor and condenser located in an outdoor cabinet. A packaged unit has the entire cooling system self-contained in one unit, with the evaporator coil, condenser, and compressor all located in one cabinet. This allows for flexibility in the installation since the unit can be either outside or indoors (depending on system specifications) without too large of a footprint.

The Benefits of a DX System

Flexible Application

DX systems offer a high degree of flexibility, providing precision cooling at varying load conditions. The system can be located inside or outside the building, and the system itself can be expanded to adapt to new building requirements or size. Individual sections can be operated without running the entire system in the building. The DX valve can reduce or stop the movement of refrigerant to each indoor unit, which allows each room to be controlled independently. DX systems may occupy less space than other cooling systems. If there are large air conditioning loads, multiple units can be installed. In cases where there is a lower heat load, one of the units can be shut down and the other can run at full load to accommodate varying load conditions.

Installation Costs

DX units (with their condensers) are complete systems that are not reliant on other equipment like cooling towers and condenser water pumps. Because they do not require additional equipment or systems, they come with lower installation costs. Chillers utilize external cooling towers to transfer heat to the atmosphere; these structures can cost more to build, and they take up valuable real estate, which adds to the cost. The extra parts and equipment in water-cooled chillers also make installation more complicated, which can mean higher upfront costs and higher labor costs for installation. CHW units also require a separate mechanical room to house the system to ensure the chiller functions properly with its cooling tower and extra components.

Good Relative Humidity Control

Dehumidification is very manageable with DX units; with low refrigerant temperatures, you can pull moisture out of the air easily. An increase in wet-bulb temperature corresponds with increased operating costs as well as lower comfort levels due to the higher relative humidity. In climates with high prevailing humidity, air cooled systems are good at extracting moisture from the air.

As energy savings is increasingly becoming a major issue in data centers, it’s important to make an informed decision in selecting your environmental control system. At Data Aire, we manufacture the widest variety of computer room air conditioners and air handlers to meet the demanding challenges of today’s most mission critical environments. Whether you need a comprehensive cooling system or need to upgrade your current equipment, we carry an extensive catalog of solutions to meet your thermal control needs.

Chilled Water Cooling System

In existing buildings, you may not think you have a choice regarding chilled water (CHW) vs. direct expansion (DX) systems, but read on; what you learn may surprise you.

Typically, the decision regarding which cooling source is better for a data center is driven by the job site conditions. If a chiller plant is available, that may be the right option. If not, many use DX cooling units. Although DX systems are the type of HVAC equipment most commonly used in residential and small commercial facilities, CHW systems offer benefits you may not have considered.

How Do Chilled Water Units Work for Environmental Control?

Chilled water cooling systems, or chillers, work by pumping cool water throughout the building. Chillers work much the same way as direct expansion systems, except that they use water in the coil in place of the refrigerant. Basically, the water is cooled down to roughly 40°F (about 4°C), then channeled through a network of pipes installed inside the building.

Chillers also use a vapor expansion/compression cycle for liquid refrigerant, much like the DX units. The refrigerant is continuously transformed from a liquid, to a vapor, and back again. This process cools down the refrigerant which is passed through an evaporator. Fundamentally, the function of the chilled water system is to transport the cooling fluid from the chillers, to the load terminals and back to the chillers to maintain the thermal envelope. Cool air is then transferred to the occupied spaces by terminal devices located within the building or by using coils located in air handling units. Automatic valves at these terminal devices or cooling coils provide the air temperature control. In a large commercial building, the heat absorbed by the water may be transferred to the outside air through a cooling tower.
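For a rough sense of the numbers involved, the heat a chilled water loop carries follows the familiar Q = m·cp·ΔT relationship. The Python sketch below uses the common imperial rule of thumb (tons ≈ GPM × ΔT / 24); the flow rate and water temperatures are illustrative, not recommendations for any particular system:

def chilled_water_tons(gpm: float, return_temp_f: float, supply_temp_f: float) -> float:
    """Approximate cooling carried by a chilled water loop, in tons.

    Uses the common rule of thumb Q[tons] = GPM x deltaT[degF] / 24, which assumes
    plain water near standard conditions (about 500 BTU/hr per GPM per degF).
    """
    return gpm * (return_temp_f - supply_temp_f) / 24.0

def tons_to_kw(tons: float) -> float:
    """Convert tons of refrigeration to kilowatts (1 ton = 3.517 kW)."""
    return tons * 3.517

# Illustrative loop: 300 GPM with 44°F supply water and 54°F return water.
tons = chilled_water_tons(300, 54.0, 44.0)
print(round(tons, 1), round(tons_to_kw(tons), 1))  # 125.0 tons, about 439.6 kW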

For any commercial company, regardless of size, overhead costs, performance, and safety are top concerns that must be managed effectively. Whatever the conditions and factors at play, an effective air conditioning system is a must for productivity and safety. We have outlined the benefits of CHW systems so you can make an informed decision when selecting your environmental control system.

The Benefits of a Chilled Water System

I know what you’re thinking — using a chilled water air conditioning system may be a more expensive installation. That may be true initially, but when you consider the total cost of ownership along with other advantages, CHW systems warrant discussion.

Greater Efficiency in Large and Vertical Applications

With a DX system, the air used to cool the space is chilled directly by the refrigerant in the cooling coil of the air handling unit. Because the air is cooled directly by the refrigerant, the cooling efficiency of DX units is higher. However, the air handling units and refrigerant piping cannot be located very far from the rest of the system, since long refrigerant runs cause pressure drops and cooling losses along the way. Keep in mind that a CHW system is more convenient when vertical distances are involved: water can be pumped vertically without a problem, but vertical risers on refrigerant lines make oil return difficult and can harm the compressor.

Many CHW systems are found in large commercial and industrial applications that require a substantial amount of cooling, because they are more cost effective at that scale and they avoid the hazard of having refrigerant piped all over the building.

Cost Effective: Energy Reduction and Lower Utility Bills

The higher specific heat and density of water represent design advantages in CHW systems. Water is far denser than air, with a density of about 1,000 kg/m3 compared to roughly 1.225 kg/m3 for air. Water is also better at absorbing heat: it carries more heat per kilogram and occupies less space than air for a given mass. If equal masses of air and water experience the same temperature rise, the water absorbs over four times as much heat, so water will always hold an advantage in this regard. When chilled water is used, indoor heat can be removed with a smaller fluid mass, and hydronic piping is more compact than air ducts. Water is also plentiful and cheap; eliminating the need for costly refrigerants can contribute greatly to the overall cost savings.
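
A quick back-of-the-envelope check of that "over four times as much heat" claim, using typical room-temperature specific heat values for water and air (a rough sketch; property values vary slightly with temperature):

```python
# Heat absorbed by equal masses of water and air for the same temperature rise:
# Q = m * c_p * dT. Property values are typical figures and vary slightly in practice.

CP_WATER = 4186.0  # J/(kg*K)
CP_AIR = 1005.0    # J/(kg*K), dry air at constant pressure

mass_kg = 1.0
delta_t = 10.0  # K

q_water = mass_kg * CP_WATER * delta_t  # ~41.9 kJ
q_air = mass_kg * CP_AIR * delta_t      # ~10.1 kJ

print(round(q_water / q_air, 2))  # ~4.17x more heat absorbed by the water
```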

An exception is regions with water shortages or drought conditions. Because CHW systems use a significant amount of water to fill initially, they may not be recommended in these areas. Some drought-stricken regions also restrict the use of water-cooled chillers, so check your local ordinances.

For a specified cooling load, a CHW system normally provides a higher efficiency than an air conditioning system with only air ducts. This can be a significant advantage in commercial buildings, where the extra efficiency can yield thousands of dollars in monthly savings.

Safer Solution

A CHW system may be a safer option: piping refrigerant all over a building carries inherent hazards, and CHW systems remove that hazard. Water is chemically stable, non-corrosive, non-toxic, inexpensive, and has a higher thermal conductivity than many alternatives. This makes it a healthier and more environmentally friendly choice compared to other heat transfer fluids such as sodium chloride brines, propylene glycol, ethylene glycol, methanol, or glycerin.

Longer Lifespan and Higher Return on Investment

Another advantage of using a CHW system such as Data Aire's gForce chilled water system is that it typically lasts longer than air cooled systems. The operational machinery for CHW systems, except for the cooling towers, is typically installed in a mechanical room, basement, or other interior space. This means complex components such as evaporators and condensers are less exposed to the outside elements than equipment mounted on rooftops or in exterior locations. Less exposure to rain, snow, ice, and heat can extend the lives of these components by several years. Additionally, if it is well insulated, there is no practical limit to the length of a chilled water pipe.

Quiet Operation and Noise Reduction

Few things are as irritating as a noisy air conditioner. Another advantage offered by chillers is that they operate at much quieter levels than conventional DX systems. CHW units run quietly because there are fewer moving parts and no noise-generating mechanisms inside the walls; water flowing through piping does not produce the expansion and contraction noise that forced air can cause in mechanical components such as ducts and vents. This level of quiet can be important for building occupants, particularly in sensitive environments such as hospitals where noise reduction is crucial.

In summary, as energy savings become an increasingly important issue in modern enterprise data centers, it's important to make an informed decision when selecting your environmental control system. At Data Aire, we manufacture the widest variety of computer room air conditioners and air handlers for today's most mission critical environments. Whether you need a comprehensive cooling system or need to upgrade your current equipment, we carry an extensive catalog of solutions to meet your thermal management needs.

Learn more about Data Aire’s systems. Check out our case study and video highlighting the success of a site-optimized economization solution which can provide 260 days of free cooling to tenants of the One Wilshire building in downtown Los Angeles.

Digital Revolution Cloud Data Center

Author: Eric Jensen, VP/GM, Data Aire

Sometimes we hear people comment that we should be thankful to live in such interesting times. However, can there be too much of a good thing? For example, in the tech world things are moving so quickly they’re keeping many of us on our toes and wondering what’s next! Just look at the Cloud Services Industry that is expected to grow 17.5% this year, to $214.3 billion (source). This industry barely existed 10 years ago! Similarly, the Data Center industry is poised for a lot of interesting times in the next 5 years. Hang on, because we have a crazy ride ahead of us. I thought it might be interesting to share a few insights into what is going on behind the scenes to help you plan accordingly.

Gartner & AFCOM Data Center Forecasts

Let’s first take a closer look at what the analysts are saying. According to one extreme forecast from Gartner analyst Dave Cappuccio, 80% of all enterprises will have shut down their traditional data centers by 2025, compared to just 10% in 2018 (source). If correct, there will be a radical transformation of how and where data will be stored, how it will be managed, and what equipment is needed to keep it secure and available as it flows both within and outside of your enterprise. Regardless of how true this forecast proves to be, what’s for sure is that changes are happening with both IT and facilities.

Conversely, respondents to the latest AFCOM State of the Data Center report expect meaningful increases in ALL data center measures over the next three years. Data center growth looks to be on the upswing across the board. As the report indicates:

  • The average number of data centers per organization (including remote sites, computer rooms, clean rooms, and edge) is about 12. This will increase to 13 over the next 12 months and jump to nearly 17 over the next three years.
  • Respondents further indicated that, on average, more than four data centers will be built over the course of the next 12 months per organization — and nearly five more over the course of three years. And with this growth comes new requirements and demands around operational and environmental optimization.

Here are three factors that help explain why this transformation is occurring, so you can plan accordingly:

1. Cloud Computing’s Impact on Data Center Management

If we take a close look at this Gartner forecast, it soon becomes clear that Cloud computing is a big driver of this transformation. As this technology grows and is adopted, it is changing how and where data is stored, and how it must be managed. Further, organizations are digitally transforming their operations to become more agile so they can respond faster to change, which also affects how data centers must be managed. With more transactions now occurring outside the firewall, the concept of a “closed” computer room or data center is going away. Better collaboration will be required.

2. Growing Clout of the Big 5 Service Providers

While it might have been unclear a few years ago who the market leaders would be in the field of managed cloud services, today the top 5 providers – AWS, Microsoft, IBM, Google and Alibaba – own about half the market (source). Collectively, they’ll earn about $112 billion of revenue from this segment in 2019.

There are a couple of trends that help explain why this transition has occurred:

  • Workload placement in a digital infrastructure is primarily based on business need, so is far less constrained by physical location
  • Significant cost and software maintenance advantages exist with the cloud, which is accelerating deployment
  • As organizations increasingly execute upon their digital transformation strategies, the need to enable scalable, agile organizations has increased to remain competitive; a cloud strategy really helps to enable this transition

3. The Data-driven Organization

Data now plays an increasingly critical role in how enterprises and other organizations are run and operated. Worldwide Big Data market revenue for software and services is projected to increase from $42B in 2018 to $103B in 2027, a Compound Annual Growth Rate (CAGR) of over 10% per year (source). Yet another very interesting number!
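
For readers who want to sanity-check that growth rate, the CAGR implied by a rise from $42B in 2018 to $103B in 2027 (treated here as nine compounding years, an assumption about how the forecast counts periods) works out to roughly 10.5% per year:

```python
# CAGR sketch: growth from $42B (2018) to $103B (2027), assuming nine compounding years.
start, end, years = 42.0, 103.0, 2027 - 2018
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~10.5% per year, consistent with "over 10%"
```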

Part of what is driving this change is the need for higher performance computing capabilities running complex applications against very large volumes of data. A new term has emerged, “data gravity,” an analogy to the way that, under the physical laws of gravity, objects with more mass attract those with less. As the data that organizations amass gets very large, they can't practically move it, so they start hosting the applications that process it in the same location. Virtual gravity is now at work, often across several locations, each running mission critical applications with expectations of zero downtime. Thanks to the Internet of Things and corporate digital transformation strategies, this wealth of knowledge and intelligence has become a central part of decision support, starting at the strategic, corporate level. The knowledge is very valuable, as it can be used to gain competitive advantage and deliver a better customer experience.

New Pressure on Data Center Operators

Each of these three market pressures has placed a new burden on infrastructure and data center operators, who are now increasingly placed in the spotlight should connectivity or uptime issues occur. These operators must place greater focus on ensuring that service partner ecosystems are in place to support the new requirements of the Cloud computing revolution.

Higher performance applications, computations and queries demand more equipment to support greater data throughput – all generating more heat. As the big 5 cloud service operators continue to grow, new pressure on cost savings will encourage higher density of this equipment, creating an acute need for greater precision with managing temperature and climate conditions. Look for significant new burdens on data center controls (sequence of operation), thermal management, and facilities management overall.

Fortunately, improved engineering strategies and technologies for precision temperature-controlled computing environments now exist. Just as increasing sophistication has enabled the extraction of more data for smarter decision support, so too has it enabled the engineering of micro-settings within data center temperature ecosystems to ensure that temperature and climate conditions are rigidly adhered to. Maintaining the right operating conditions will be critical across the entire data storage and processing ecosystem; the weakest link can bring down increasingly important business processes, with highly visible consequences should a failure occur. Uptime will increasingly be required to approach perfection, given the number of business-critical applications now reliant on the high-value data collected as part of every business operation.

Plan Now for Interesting Times

With budget planning either just started or set to begin in the coming weeks, now might be a great time to take a broader look at your overall data storage and processing ecosystem performance. A prudent move would be to explore not only what future capabilities and higher standards can be attained internally, but also how you can expand these capabilities in partnership with your service providers to ensure no weak links exist.

With hybrid data computing solutions emerging that can simultaneously take advantage of the extreme scale of the big providers while delivering localized enterprise performance with high reliability for mission critical applications, now is the time to put together an operating plan — and plan for higher performance. However the industry rolls out, the investment in precision environmental monitoring and control has been elevated as a prerequisite to sustain overall enterprise profitability.

*******************************************************************************************************************************


Cooling Techniques and New Approaches to Efficiency

There are new and emerging options around CRAC designs, condensers, fluid coolers, advanced system controls, containment solutions, various cooling systems like chilled water or direct expansion, and even process cooling equipment engineered for the precision cooling of non-server or non-data-related spaces. In short, designs and architecture around airflow management (AFM) and cooling have come a long way.

In the past, AFM solutions may have been a ‘nice-to-have’ or even a luxury of sorts for some data centers to implement. Now, for many market reasons (data center energy usage, growing business needs, focus on green solutions), it is becoming much more of a necessity. When retrofitting or designing a data center, you no longer think twice about adding AFM solutions to improve data center efficiency. However, some organizations are still catching up to AFM requirements and best practices.

For some, the challenge is realizing the massive efficiency gains that a good airflow solution can bring.

How to Improve the Efficiency of your Data Center or Computer Room

To better understand the concept – it’s important to see how far data center environmental control systems have come. Focusing specifically on cooling and airflow, you’ll quickly see that – whether you’re deploying an edge data center or a primary colocation – there are some great options to improve management and efficiency.

  • Identifying your data center type. If you’re deploying a cooling or HVAC system into your data center – it’s important to know your exact requirements. For example, custom cleanroom HVAC systems (think labs and medical environments) can be very different than a traditional data center room deployment. Furthermore, optimizations around filtration, exhaust systems, ceiling grid architectures, and very tight temperature controls are key definitions around your data center type and requirements.
  • Calculate and understand your space. It is important to identify the type of space you have: computer room or data center. While both are mission critical spaces, the ASHRAE 90.4 Energy Standard for Data Centers[1] defines a computer room as a room, or portions of a building, serving an ITE load of 10 kW or less, or 20 W/sq. ft. or less of conditioned floor area. A data center is a room or building, or portions thereof, including the computer rooms served by the data center systems, serving a total ITE load greater than 10 kW and more than 20 W/sq. ft. of conditioned floor area. (A simple illustration of these thresholds appears in the sketch after this list.)
  • Advanced climate controls. It’s not just about controlling airflow and temperature. The next-generation data center introduces new ways to optimize the infrastructure for even greater levels of efficiency. Now, we have precision air control, DX and chilled-water air handlers, data center cooling, process cooling, humidification, and even new types of fluid cooling technologies. All of these systems can directly impact how a data center operates and supports your business.
    • If you’re wondering whether to use DX or chilled water, for example, know that it really comes down to the use case. Both have pros and cons, and the right choice largely depends on your needs. DX units, for instance, serve everything from supplemental or emergency building AC to primary AC at tented events or relief structures, while chiller units cool water for use in other AC systems such as chilled water air handlers.
  • Understanding heavy-duty optimizations. Sometimes your data center hosts intense workloads. Whether it’s big data or a massive cloud infrastructure, there will be cases where powerful environmental controls are required. In some cases, it’s critical to work with direct expansion air conditioning systems and industrial air handlers to meet complex demands. Your airflow requirements may range from 500 to over 400,000 cfm, and in these situations you should have an HVAC system capable of supporting those needs.
  • Getting the packaged solution. The next-generation data center is a very diverse and distributed model. Think edge computing, cloud, and beyond. For those types of data centers that require a unique way to control their environmental variables – a packaged HVAC system could make a lot of sense. For example, you can utilize a complete floor or ceiling HVAC system with indoor or outdoor condensers — all as one package. Consolidating much of the HVAC equipment into one, integrated packaged equipment center provides a variety of benefits. These systems are all integrated, easier to service, and can be a lot quieter than traditional HVAC solutions.
  • Taking HVAC outdoors. The amazing progression around air and cooling processing has allowed the modern data center to recover new types of resources. New types of outdoor air solution products recycle waste energy from the exhaust air stream and use it to precondition the outdoor air to significantly reduce the heating, cooling and humidification loads required to maintain proper levels of temperature and humidity within occupied spaces. It’s in these cases that you begin to recycle precious resources and optimize your data center.
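
As a quick illustration of the ASHRAE 90.4 thresholds referenced in the “Calculate and understand your space” item above, here is a minimal Python sketch that classifies a space from its total ITE load and load density. It is a simplification for illustration only, not a substitute for the standard’s full definitions.

```python
def classify_space(ite_load_kw: float, load_density_w_per_sqft: float) -> str:
    """Rough classification per the ASHRAE 90.4 thresholds quoted above.

    Simplified illustration only: a data center serves a total ITE load
    greater than 10 kW AND more than 20 W/sq. ft. of conditioned floor area;
    otherwise the space falls under the computer room definition.
    """
    if ite_load_kw > 10.0 and load_density_w_per_sqft > 20.0:
        return "data center"
    return "computer room"

# Example: a 50 kW room at 35 W/sq. ft. classifies as a data center,
# while an 8 kW closet at 15 W/sq. ft. is a computer room.
print(classify_space(50.0, 35.0), "/", classify_space(8.0, 15.0))
```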

This blog is an excerpt from the reference guide entitled: Using Environmental Management Solutions to Build Sustainable Data-Centric Spaces. Download the reference guide to read the document in its entirety.

*************************************************************************************************************

[1] https://tc0909.ashraetcs.org/documents/presentations/90.4%20Marketing%20Slides%2020180626.ppsx