Data Center Cooling Hotspots

Intelligent Data Center Cooling CRAC & CRAH Controls

In an ever-changing environment like the data center, it's most beneficial to have as many intelligent systems working together as possible. It's amazing to think how far technology has come, from the old supercomputers the size of four filing cabinets to present-day data centers pushing 1,000,000 square feet.

Managing a Data Center’s Needs

Historically, managing a data center was fairly straightforward. Amid all this growth, we find ourselves digging into the nuances of every little thing: data center cooling, power and rack space, among hundreds of other minute aspects. This is far too much for a data center manager to manage and comprehend alone, so implementing systems that can talk to each other has become a must.

When evaluating the cooling side of the infrastructure, there are real challenges that may make you want to consider hiring a team of engineers to monitor your space constantly.

  • In most rooms, sensible capacities vary constantly during the first year or two of build-out.
  • This creates a moving target for CRAC/CRAH systems to hit within precise setpoints, which raises concern among data center managers about hot spots, temperature consistency and CRAC usage.
  • Just when you think the build-out is done, someone on the team decides to start changing hardware, and you're headed down the path of continuous server swap-outs and capacity changes.

It really can turn into a game of chasing your own tail, but it doesn’t have to.

Reduce Your Stress Level

Reduce Stress with Data Aire Environmental Control

Data Aire has created the Zone Control controller to address the undue stress imposed on data center managers. Zone Control allows CRAC and CRAH units to communicate with each other and determine the most efficient way to cool the room to precise setpoints.

No longer will you or your colleagues need to continually adjust setpoints. And as previously mentioned, it's incredibly beneficial to have as many intelligent systems working together as possible. Zone Control creates open communication and dialogue between all units on the team.

CRAC & CRAH Units Should Work Together Like an Olympic Bobsledding Team

I like using a sports analogy to illustrate this idea. Just like in sports, each player on the team must know their own role, and the team is most efficient when everyone does their part. As I watched the 2018 Winter Olympics, I started thinking about the similarities between a four-man bobsled team and how CRAC/CRAH units communicate through Zone Control.

Stay with me here: the bobsled team starts off very strong to get as much of a jump out of the box as possible. Then each member hops into the bobsled in a specific order. Once they reach maximum speed, all members are in the bobsled and most are on cruise control while the leader of the team steers them to the finish line. That's Zone Control: the leader of the team.

Personal Precision Cooling Consultant

Let's get back to data center cooling. When the units first start up, they ramp to full capacity to ensure enough cooling immediately. Then, as the controls read the room conditions, units drop offline into standby mode to vary down to the capacity the room actually needs. They talk to each other to sense where the hotter parts of the room are and ensure the units closest to the load are running. Once this process of checks and balances has established the right cooling capacities, the units go into cruise control as Zone Control continues to steer.

This creates the most efficient and reliable cooling setup possible for each individual data center. Data center managers don't need to worry about trying to find hot spots or about varying loads in the room. Zone Control is an intelligent communication control that works with CRAC/CRAH data room systems to identify the needs of the space and relay that message to the team. Think of it as your personal precision cooling consultant that always has the right system setup based on real-time capacities.

Add V3 Technology to the Zone Control Team

You can go even a step further in your quest to have the most efficient environmental control system safeguarding your data center: pair Zone Control with gForce Ultra. The Ultra was designed with V3 Technology and is the only system on the market to include a technology trifecta of Danfoss variable speed compressors, an electronic expansion valve (EEV) and variable speed EC fans. gForce Ultra can vary down to the precise capacity the data room requires. Combine the Ultra with Zone Control and you have the smartest and most efficient CRAC system in the industry. Zone Control even has the logic to run all Ultra units in a room at 40% capacity as a team in cruise control, rather than running half the units at 80%, because the efficiency gains are written into the logic.
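To see why running more units at lower speed can use less energy, here is a minimal, hypothetical sketch. It assumes fan power follows the familiar cube-law affinity relationship; actual unit power also depends on compressor and coil behavior, so treat the numbers as directional only.

```python
def relative_fan_power(speed_fraction: float) -> float:
    """Fan affinity law: fan power scales roughly with the cube of speed."""
    return speed_fraction ** 3

# Hypothetical comparison: four units at 40% airflow vs. two units at 80%,
# both delivering the same total airflow (4 * 0.4 == 2 * 0.8).
four_at_40 = 4 * relative_fan_power(0.40)
two_at_80 = 2 * relative_fan_power(0.80)

print(f"Four units at 40%: {four_at_40:.3f} (relative fan power)")
print(f"Two units at 80%:  {two_at_80:.3f}")
print(f"Reduction: {1 - four_at_40 / two_at_80:.0%}")  # roughly 75% less fan energy
```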

If you are worried about your data center's hot spots and CRAC usage, give us a call and we can get you set up with the most knowledgeable cooling brains around: Zone Control.

Do You Know Which Solution is Right for You?

One thing is certain: optimal data center design is a complex puzzle to solve. With all the options available, no one environmental control system can fit all situations. You must consider all the solutions and technology available to best manage assets and adapt to your evolving data center.

There is a precision cooling system for whatever scenario best fits your current strategy or future goals. The only question that remains is whether you have considered each of the options with your design engineer and your environmental control manufacturer. The two need to be in sync to help you maximize your return on investment.

In most instances, if you want an environmental control system that scales with your needs, provides the lowest energy costs, and provides the most reliable airflow throughout your data center, a variable-speed system is your best solution. Nevertheless, you may be curious about what other options may suit your current application.

Precise Modulated Cooling | Greatest ROI and Highest Part-Load Efficiency

Companies need to decide on their strategy and design for it. When you know you have swings in your load, whether seasonal, day to day, or even from one corner of the data center or electrical room to the other, you should consider variable speed technology. A system with variable speed technology and accurate control design modulates to precisely match the current cooling load. This precision gives variable speed systems the highest part-load efficiency, which equates to a greater return on investment. In other words, when your data center is not running at maximum cooling load, a variable speed system will use less energy and save money.

If we think of the cooling output of your environmental control system as the accelerator of a car, you can press the pedal to almost any position to exactly match the speed you want to travel. You are not wasting energy overshooting your desired speed. With a well-designed control system, you also ensure a smooth response to a change in load. Further efficiency is gained by accelerating at a rate that is efficient for the system.

Advanced Staged Cooling | Reduced Initial Cost and Great Part-Load Efficiency

If you are looking for something that offers a portion of the benefits of a variable speed system but at a reduced first cost, a multi-stage cooling system can be a good compromise. A multi-stage system will manage some applications well and can reduce overcooling of your space as built today. If you need greater turndown than a fixed-speed system offers, then this is a good choice for you.

If you find this to be the right-now solution for you, you're in good hands. The system is more advanced than a fixed-speed unit; it is developed with a level of design optimization that lets it transition in small steps. Unlike digital scroll, this solution, with two-stage compressors, has high part-load efficiency.

Think about the car accelerator example again; there are many positions to move the accelerator to with a multi-speed system. With two-stage compressors the positions are precise and repeatable, meaning you can smartly change positions to prevent overshoot, and you are more likely to have a position that matches the desired speed.

Although the return on investment is better with a multi-stage system than with a fixed-speed system, the benefits are less than with a variable speed system.

Fixed-Speed Systems | Lowest Initial Cost and Lower Part-Load Efficiency

Some consider the entry point for precision cooling based on their current budget constraints. So, if you are on a tight budget and need a lower first cost, then a fixed-speed, single-stage precision cooling system may get the job done. However, this can be short-sighted, as energy consumption and costs are higher when the data center is operating at less than the maximum designed cooling load. In our experience, this happens quite frequently, given the gap between what the mechanical engineer has been asked to design for and the actual heat load of the space.

If a fixed system is applied to the car accelerator example, you see how only applying 100% throttle or 0% throttle would prevent you from getting close to a precise speed. This is clearly not as efficient as the other examples unless you want to go at the car’s maximum speed all the time.
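To make the accelerator analogy concrete, here is a minimal sketch comparing the three approaches over a hypothetical daily load profile. The part-load power curve and the cycling behavior are illustrative assumptions, not published equipment data; real curves vary by compressor, fan and coil design.

```python
# Hypothetical hourly cooling load, as a fraction of the design load.
HOURLY_LOAD = [0.45, 0.40, 0.40, 0.45, 0.55, 0.70, 0.85, 0.90,
               0.95, 0.90, 0.85, 0.80, 0.75, 0.70, 0.65, 0.60,
               0.60, 0.65, 0.70, 0.65, 0.60, 0.55, 0.50, 0.45]

def unit_power(capacity_fraction: float) -> float:
    """Assumed part-load power curve: a blend of roughly linear compressor
    power and cube-law fan power, normalized so full capacity = 1.0.
    Illustrative only."""
    return 0.7 * capacity_fraction + 0.3 * capacity_fraction ** 3

def variable_speed(load: float) -> float:
    # Modulates to match the load exactly.
    return unit_power(load)

def two_stage(load: float) -> float:
    # Runs the smallest stage (50% or 100%) that covers the load and
    # duty-cycles it; cycling losses are ignored here.
    stage = 0.5 if load <= 0.5 else 1.0
    return (load / stage) * unit_power(stage)

def fixed_speed(load: float) -> float:
    # Cycles on/off at 100% capacity; cycling losses are ignored here.
    return load * unit_power(1.0)

for name, model in [("variable", variable_speed),
                    ("two-stage", two_stage),
                    ("fixed", fixed_speed)]:
    daily = sum(model(hour) for hour in HOURLY_LOAD)
    print(f"{name:>9}: {daily:.1f} unit-hours of relative energy")
```

Under these assumptions the variable speed system uses the least energy, the two-stage system sits in between, and the fixed-speed system uses the most, which mirrors the ranking described in the three sections above.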

Ramping Up Your Data Center

The needs and goals of a data center can change over time. While the initial objective may only require getting the space in running order, customers may reassess based on changing scenarios. If your data center needs to scale, you may be challenged if you haven't planned ahead with your design engineer for phased build-outs, or for IT loads that vary seasonally, day to day, or even hour to hour. Likewise, you may need to consider the difference between design and actual usage, whether it be too little or too much. Perhaps your IT team says they need two megawatts, or that you are going to be running at 16 kW per rack. The cooling system designed may underserve your needs or may be overkill for the current state of usage. In addition, pushing your system to do more than it is engineered for can accelerate the aging of your infrastructure.

Again, depending on your application, goals and business strategy, one of these three systems is right for you. The best course of action is to evaluate where you are today and then future-proof your data center with technology that can grow with you if necessary.


This article was originally published in Data Center Frontier.

 

 

What is the current state of data center rack density, and what lies ahead for cooling as more users put artificial intelligence to work in their applications?

For years, the threat of high rack densities loomed, yet each passing year saw the same 2-4 kW per rack average. That's now nudging up. While specific sectors like federal agencies, higher education, and enterprise R&D are certainly into high performance computing at 20, 80, or even 100 kW per rack, the reality today remains one of high[er] density in the realm of 8-12 kW per rack (see the Uptime Institute's 2020 global survey). Cooling higher densities doesn't mean overbuilding at the risk of stranded capacity for parts of the year. The answer is load matching via software that can respond accordingly, plus the infrastructure hardware to support it.

Many industries are experiencing difficulty finding enough skilled workers. What’s the outlook for data center staffing, and what are the key strategies for finding talented staff? 

Data center staffing is as challenged as, if not more so than, many industries. As the world becomes increasingly complex, or perhaps more accurately, specialized, specific skill sets become more precious. This challenge hits datacom at all levels, from design and construction to operations and maintenance. Amazon can pop up a distribution center in rural locales and train an unskilled workforce to perform its warehousing activities. A cloud data center going up in a remote locale needs far fewer workers, but the pool of available skills relative to those needed is much more scarce. The good news is there are organizations working hard to fix this. Cleveland Community College in North Carolina, for example, developed a first-of-its-kind curriculum for Mission Critical Operations in conjunction with 7×24 Exchange. 7×24 Exchange, with its Women in Mission Critical initiative, is also leading the way in bringing diversity to the datacom sector to enrich as well as increase the pool of candidates. Ten years ago, the average high school or college grad didn't know what a data center was. Through the efforts of industry, and now educators, that's beginning to shift.

How have enterprise data center needs evolved during the pandemic? What do you expect for 2021?

The pandemic was an immediate stress test on IT: on the hardware and the software, both distributed (i.e., users) and in the data center. Many enterprises were, understandably, caught off guard. One of the most basic impacts was trying to make up for users' connectivity challenges as much as possible at the applications and at the data center. Anything that could be done at the architecture level to improve operational efficiency was needed to improve the user experience. One interesting thing to watch over the next one to two years might be how enterprise architecture changes in response to a more distributed workforce long-term, as many larger organizations are choosing not to return to the office. That's leading many people to relocate because an office commute is no longer a consideration. Does the large enterprise's need start to look more like the average consumer consuming cloud content? More immediately, enterprises have quickly sought to refresh their infrastructure or just shore it up with a bit more failsafe: the old 'we can't control the universe, but we can control our response to it.'

Edge computing continues to be a hot topic. How is this sector evolving, and what use cases and applications are gaining the most traction with customers? 

The edge moves and changes shape, and maybe it always will. High tech manufacturing and healthcare are two places where the edge is evolving. High tech manufacturing and warehousing are adopting more autonomous robotic operations that need to be updated and to learn in situ. As healthcare becomes more digitally oriented, whether because of the connected devices in a modern healthcare setting or the adoption of telehealth, firmware and applications need to be reliably robust and secure in the healthcare provider's hands.

How are density, efficiency and economy of scale entering the conversation?

 

Few data centers live in a world of ‘high’ density, a number that is a moving target, but many are moving to high[er] density environments. Owners of higher density data centers often aren’t aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity when they would be better served by taking a rifle shot approach. This means understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency.

So, how do you invest in today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low pressure areas, reduce costs, save on floor space and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density is Growing: Low to Medium to High[er] and Highest

Data centers are growing increasingly dense, creating unprecedented cooling challenges. That trend will undoubtedly continue. The Uptime Institute's 2020 Data Center Survey found that average server density per rack has more than tripled from 2.4 kW to 8.4 kW over the last nine years. While still within the safe zone of most conventional cooling equipment, the trend is clearly toward equipment running hotter, a trend accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands 40-50 kW per rack.

High[er] Density Requires Dedicated Cooling Strategies

For the sake of discussion, let's focus on the data centers that are, or may be, in the 8.4-16 kW range in the near future. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside. In fact, "Over-provisioning of power/cooling is probably [a] more common issue than under provisioning due to rising rack densities," the Uptime survey asserted.

No two data centers are alike and there is no one-size-fits-all cooling solution. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date. Equipment in the higher density range of 8-16 kW can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hotspots to emerge.

Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack that is outfitted with multiple graphic processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakages that prevent conditioned air from reaching the top of the rack. Unused vertical space can cause hot exhaust to feed back into the equipment’s intake ducts, causing heat to build up and threatening equipment integrity.

For all these reasons, higher-density equipment is not well-served by a standard computer room air conditioning (CRAC) unit. Variable speed direct expansion CRAC equipment, like gForce Ultra, scales up and down gracefully to meet demand. This not only saves money but minimizes power surges that can cause downtime. Continuous monitoring should be put in place using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting. Cooling should also be integrated into the building-wide environmental monitoring systems.
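As a simple illustration of alarm logic that flags real heat buildup without constant nuisance alerts, the hypothetical sketch below applies a high-temperature threshold with a deadband (hysteresis). The setpoints are placeholders, not recommendations.

```python
# Hypothetical high-temperature alarm with a deadband to avoid nuisance trips.
ALARM_ON_C = 32.0    # placeholder: raise an alarm above this inlet temperature
ALARM_OFF_C = 29.0   # placeholder: clear it only after dropping below this

def update_alarm(temp_c: float, alarm_active: bool) -> bool:
    """Return the new alarm state given the latest sensor reading."""
    if not alarm_active and temp_c >= ALARM_ON_C:
        return True          # heat buildup detected
    if alarm_active and temp_c <= ALARM_OFF_C:
        return False         # condition cleared with margin, so no flapping
    return alarm_active      # otherwise hold the current state

# Example: a spike to 32.5 C trips the alarm; it clears only once readings
# fall below the lower threshold.
readings = [27.0, 30.5, 32.5, 31.0, 29.5, 28.9]
state = False
for r in readings:
    state = update_alarm(r, state)
    print(f"{r:5.1f} C -> alarm {'ON' if state else 'off'}")
```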

Working Together: Density, Efficiency and Scalability

 

A Better Approach to Specifying Data Center Equipment

The best approach to specifying data center equipment is to build cooling plans into the design early. Alternating "hot" and "cold" aisles can be created with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front-discharge, upflow or downflow ventilation can prevent heat from being inadvertently circulated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned to avoid loss of cooling.

Thinking through cooling needs early in the design stage for higher density data centers avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today's needs but those five and ten years from now. Modular and variable capacity systems can scale and grow as needed.

The earlier data center owners involve their cooling providers in their design decisions the more they’ll save from engineered-to-order solutions and the less risk they’ll have of unpleasant surprises down the road.

Read our whitepaper to learn about the Department of Energy's (DOE) current standards for the efficiency ratings of a CRAC.

If you’ve ever done anything even remotely related to HVAC, you’ve probably encountered ASHRAE at some point. The American Society of Heating, Refrigerating and Air-Conditioning Engineers is a widely influential organization that sets all sorts of industry guidelines. Though you don’t technically have to follow ASHRAE standards, doing so can make your systems a lot more effective and energy efficient. This guide will cover all the basics so that you can make sure your data centers get appropriate cooling.

What Are the ASHRAE Equipment Classes?

One of the key parts of ASHRAE Data Center Cooling Standards is the equipment classes. All basic IT equipment is divided into various classes based on what the equipment is and how it should run. If you’ve encountered ASHRAE standards before, you may already know a little about these classes. However, they have been updated recently, so it’s a good idea to go over them again, just in case. These classes are defined in ASHRAE TC 9.9.

  • A1: This class contains enterprise servers and other storage products. A1 equipment requires the strictest level of environmental control.
  • A2: A2 equipment is general volume servers, storage products, personal computers, and workstations.
  • A3: A3 is fairly similar to the A2 class, containing a lot of personal computers, private workstations, and volume servers. However, A3 equipment can withstand a far broader range of temperatures.
  • A4: This has the broadest range of allowable temperatures. It applies to certain types of IT equipment like personal computers, storage products, workstations, and volume servers.[1]

Recommended Temperature and Humidity for ASHRAE Classes

The primary purpose of ASHRAE classes is to figure out what operating conditions equipment needs. Once you use ASHRAE resources to find the right class for a specific product, you just need to ensure the server room climate is meeting these needs.

First of all, the server room's overall temperature needs to meet ASHRAE standards for its class. ASHRAE recommends that equipment be kept between 18 and 27 degrees Celsius when possible. However, each class has a much broader allowable operating range.[1] These guidelines are:

  • A1: Operating temperatures should be between 15°C (59°F) and 32°C (89.6°F).
  • A2: Operating temperatures should be between 10°C (50°F) and 35°C (95°F).
  • A3: Operating temperatures should be between 5°C (41°F) and 40°C (104°F).
  • A4: Operating temperatures should be between 5°C (41°F) and 45°C (113°F).[1]

You also need to pay close attention to humidity. Humidity is a little more complex to measure than temperature. Technicians need to look at both the dew point, which is the temperature at which the air becomes saturated, and the relative humidity, which is the percentage of saturation at a given temperature.[2] A quick way to check both limits together is sketched after the class list below. Humidity standards for ASHRAE classes are as follows:

  • A1: Maximum dew point should be no more than 17°C (62.6°F). Relative humidity should be between 20% and 80%.
  • A2: Maximum dew point should be no more than 21°C (69.8°F). Relative humidity should be between 20% and 80%.
  • A3: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 85%.
  • A4: Maximum dew point should be no more than 24°C (75.2°F). Relative humidity should be between 8% and 90%.[1]
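Here is a minimal sketch of how these class limits could be checked in practice. The thresholds simply restate the ranges quoted above; consult the current ASHRAE thermal guidelines before relying on them.

```python
# Allowable ranges per class, as quoted above:
# (min temp C, max temp C, max dew point C, min RH %, max RH %)
ASHRAE_CLASSES = {
    "A1": (15.0, 32.0, 17.0, 20.0, 80.0),
    "A2": (10.0, 35.0, 21.0, 20.0, 80.0),
    "A3": (5.0, 40.0, 24.0, 8.0, 85.0),
    "A4": (5.0, 45.0, 24.0, 8.0, 90.0),
}

def within_class(equip_class: str, temp_c: float, dew_point_c: float,
                 relative_humidity: float) -> bool:
    """Check a single reading against the allowable envelope quoted above."""
    t_min, t_max, dp_max, rh_min, rh_max = ASHRAE_CLASSES[equip_class]
    return (t_min <= temp_c <= t_max
            and dew_point_c <= dp_max
            and rh_min <= relative_humidity <= rh_max)

# Example: a 24 C server inlet at 45% RH with a 12 C dew point passes A1.
print(within_class("A1", temp_c=24.0, dew_point_c=12.0, relative_humidity=45.0))
```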

Tips for Designing Rooms to Meet ASHRAE Data Center Cooling Standards

As you can see, ASHRAE guidelines are fairly broad. Just about any quality precision cooling system can easily achieve ASHRAE standards in a data center. However, a good design should do more than just consistently hit a temperature range. Planning the right design carefully can help reduce energy usage and make it easier to work in the data center. There are all sorts of factors you will need to consider.

Since most companies also want to save energy, it can be tempting to design a cooling system that operates toward the maximum allowable ASHRAE guidelines. However, higher operating temperatures can end up shortening equipment’s life span and causing inefficiently operated technology to use more power.[3] Carefully analyzing these costs can help companies find the right temperature range for their system.

Once you have a desired temperature set, it's time to start looking at some cooling products. CRAC and CRAH units are always a reliable and effective option for data centers of all sizes. Another increasingly popular approach is a fluid cooler system, which uses circulating fluid to carry heat away from high-temperature equipment. Many companies in cooler climates are also switching to environmental economizer cooling systems that pull in cold air from the outdoors.[3]

Much of data center design focuses on arranging HVAC products in a way that provides extra efficiency. Setting up hot and cold aisles can be a simple and beneficial technique. This involves placing server aisles back-to-back so the hot air that vents out the back flows in a single stream to the exit vent. You may also want to consider a raised floor configuration, where cold air enters through a floor cooling unit. This employs heat’s tendency to rise, so cooling air is pulled throughout the room.[4] By carefully designing airflow and product placement, you can achieve ASHRAE standards while improving efficiency.

Data Aire Is Here to Help

If you have any questions about following ASHRAE Data Center Cooling Standards, turn to the experts! At Data Aire, all of our technicians are fully trained in the latest ASHRAE standards. We are happy to explain the standards to you in depth and help you meet these standards for your data room. Our precision cooling solutions provide both advanced environmental control and efficient energy usage.

 

 

References:

[1] https://www.chiltrix.com/documents/HP-ASHRAE.pdf
[2] https://www.chicagotribune.com/weather/ct-wea-0907-asktom-20160906-column.html
[3] https://www.ibm.com/downloads/cas/1Q94RPGE
[4] https://www.simscale.com/blog/2018/02/data-center-cooling-ashrae-90-4/


It’s vital to keep your data center environment optimal to promote peak performance.

Data center cooling is a $20 billion industry. Cooling is the highest operational cost aside from the ITE load itself. It’s also the most important maintenance feature.

There are a few data center cooling best practices that can keep your data center humming along smoothly. These practices can help you to improve the efficiency of your data center cooling system. They can also help you to reduce costs.

It’s important to execute any changes to your data center cooling system carefully. For this reason, it’s vital to work with an experienced engineer before making any changes in a live environment.

To learn more about data center cooling best practices, continue reading.

The State of Data Center Environmental Control

Today, data center environmental control is one of the most widely discussed topics in the IT space. There is also a growing discrepancy between older data centers and new hyperscale facilities. Regardless of age or scale, however, power utilization and efficiency are critical in any data center.

It's well known that data centers are among the largest consumers of electricity around the world. Today, data centers use an estimated 1% to 1.5% of all the world's energy. What's more, energy usage will only increase as more innovations emerge. These innovations include:

  • Artificial intelligence
  • Cloud services
  • Edge computing
  • IoT

Furthermore, these items represent only a handful of emerging tech.

Over time, the efficiency of technology improves. However, those gains are offset by the never-ending demand for increased computing and storage space. Firms need data centers to store information that enables them to satisfy consumer and business demands.

Accordingly, data center power density needs will increase every year. Currently, the average rack power density is about 7 kW. Some racks have densities as high as 15 kW to 16 kW. However, high-performance computing typically demands 40-50 kW per rack.

These numbers are driving data centers to source the most energy efficient cooling systems available.

What Is the Recommended Temperature for a Data Center?

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) offers an answer to this question. ASHRAE suggests server inlet temperatures between 64.4°F and 80.6°F. Furthermore, the society recommends a relative humidity between 20% and 80%.

The Uptime Institute, however, has a different opinion.

The Institute recommends an upper temp limit of 77°F.

However, many data centers run much cooler, especially older ones. IT workers prefer to err on the side of caution to avoid overheating equipment.

Data Center Cooling Calculations

It's important to understand current conditions before making your data center cooling calculations. For example, you'll need to assess the current IT load in kilowatts. You'll also need to measure the intake temperature across your data center. This measurement should include any hotspots.

At a minimum, you want to record the temperature at mid-height. You’ll also want to record the temperatures at the end of each row of racks. Also, you should take the temperature at the top of the rack in the center of each row.

As you take measurements, record the location, temp, date, and time. You’ll need this information later for comparison.

Now, measure the power draw of your cooling unit in kilowatts. Typically, you’ll find a dedicated panel for this measurement on most units. You could also use a separate monitoring system to take this measurement.

You’ll also need to measure the room’s sensible cooling load. You’ll need to measure the airflow volume for each cooling unit for this task. Also, you’ll need to record the supply and return temperatures for each active unit.

Getting to the Math

You can determine the sensible capacity delivered by each operating unit, in kilowatts, using the following formula:

Q sensible (kW) = 0.316*CFM*(Return Temperature[°F] – Supply Temperature[°F])/1000
[Q sensible (kW) = 1.21*CMH*(Return Temperature[°C] – Supply Temperature[°C])/3600]
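Here is a minimal sketch of that calculation in code, using the Imperial form of the formula above. The airflow, temperature and IT load figures in the example are placeholders.

```python
def sensible_cooling_kw(cfm: float, return_temp_f: float, supply_temp_f: float) -> float:
    """Sensible cooling delivered by one unit, per the formula above:
    Q (kW) = 0.316 * CFM * (Return - Supply, in deg F) / 1000."""
    return 0.316 * cfm * (return_temp_f - supply_temp_f) / 1000

# Placeholder measurements for three operating units: (CFM, return F, supply F)
units = [(12000, 75.0, 57.0), (12000, 74.0, 58.0), (10000, 72.0, 58.0)]

room_cooling_kw = sum(sensible_cooling_kw(*u) for u in units)
it_load_kw = 150.0   # placeholder IT load measured at the panels

print(f"Sensible cooling load: {room_cooling_kw:.1f} kW")
print(f"Cooling-to-IT ratio:   {room_cooling_kw / it_load_kw:.2f}")
```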

Now, you can compare the cooling load to the IT load to create a point of reference.

Next, you’ll make use of the airflow and return air temperature measurements. You’ll need to contact your equipment vendor for the sensible capacity of each unit in kilowatts. Now, total the sensible capacity for the units that are now in operation. This is about the most simplistic calculation that you’ll find. If you prefer, however, you can find much more complex methods for calculation online.

Next, take the total rated sensible capacity of the operating units and the sensible cooling load, in kilowatts, calculated from your measurements. Divide the former by the latter to see how much operating cooling capacity you have relative to the actual load. Now you have a ratio to use as a benchmark to evaluate subsequent improvements.

Still, it's important to consult with IT engineers. They can help you determine the maximum allowable intake temperature that will not damage your IT equipment in a new environment. Using your collected data, you can create a work plan to establish your goals. You can also use the information to determine metrics that you'll monitor to ensure that the cooling environment functions properly.

You’ll also want to develop a back-out plan just in case you have any problems along the way. Finally, you want to pinpoint the performance metrics that you’ll track. For instance, you might track inlet temperatures. Conversely, you may monitor power consumption or other metrics.

Data Center Cooling Best Practices

It can prove challenging to figure out where to start with upgrades for data center environmental control. A few data center cooling best practices can help in this regard. Many variables can affect the airflow in your data center, from the types of data racks to the cable openings. By following airflow management best practices, however, you can avoid equipment failures. The following strategies can help improve your data center airflow management for greater efficiency:

  • Manage the cooling infrastructure
  • Block open spaces to prevent air bypass
  • Manage data center raised floors

What follows are details for these strategies.

Best Practice 1: Manage the Cooling Infrastructure

Data centers use a lot of electricity. For this reason, they need an intense cooling infrastructure to keep everything working correctly. To put this in perspective, according to the US Department of Commerce, the power densities of these facilities, measured in kilowatts (kW) per square foot (ft²) of building space, can be nearly 40 times higher than the power densities of commercial office buildings.

If you need to improve the airflow in your data center, you may want to consider changing the cooling infrastructure. For example, you may reduce the number of operating cooling units to meet the needed capacity. Alternatively, you might raise the temperature without going over your server intake air temperature maximum.

Best Practice 2: Block Open Spaces

It’s vital to close all open spaces under your racks. It’s also important to close open spaces in the vertical planes of your IT equipment intakes.

You must also close any open spaces in your server racks and rows. Spaces here can cause your airflow balance to get skewed.

Also, you’ll want to seal any spaces underneath and on the sides of cabinets as well as between mounting rails. You’ll also want to install rack grommets and blanking panels. In this way, you’ll ensure that there aren’t any unwanted gaps between your cabinets.

Best Practice 3: Manage Data Center Raised Floors

Also, you'll want to monitor the open area of the horizontal plane of your raised floor. Openings in your raised floor can let air bypass the IT equipment. This circumstance can also skew the airflow balance in your data center.

You’ll want to manage the perforated tile placement on your raised floor to avoid this problem. You must also seal cable openings with brushes and grommets. Finally, you’ll need to inspect the perimeter walls underneath the raised floor for partition penetrations or gaps.

Choosing a Data Center Cooling Design

There are a few emerging data center cooling methods in the computer room air conditioning (CRAC) space, such as data center water cooling. For example, you might want to consider advanced climate controls to manage airflow.

State-of-the-art data centers incorporate new ways to optimize the cooling infrastructure for greater efficiency, and you can now achieve precision data center environmental control with several technologies.

Usually, the data center cooling methods that you choose are driven by site conditions. An experienced consultant can help you to select the right data center cooling design.

Your Partner in Data Center Air Control

Now you know more about data center cooling best practices. What you need now is a well-qualified expert in data center cooling. Data Aire has more than 50 years of experience. We've helped firms find innovative answers for emerging demands.

At Data Aire, we’re a solutions-driven organization with a passion for creativity. Furthermore, we believe in working closely with our clients during the consultative process. We can give you access to extensive expertise and control logic. By partnering with us, you’ll enjoy world-class manufacturing capability recognized by leading international quality certifications.

Contact Data Aire today at (800) 347-2473 or connect with us online to learn more about our consultative approach to helping you choose the most appropriate environmental control system for your data center.

Data center numbers are growing – but is your efficiency falling?

The latest AFCOM State of the Data Center report, post-COVID, indicates strong growth in the data center space. This includes cloud, edge, and even colocation space. However, it also noted that many are looking even deeper into efficiency as demand for data, space, and power continues to grow. This special FastChat looks at the very latest data center trends and outlines some of the top data center efficiency designs.

Specifically, you’ll learn about:

  • The latest trends just released from the AFCOM State of the Data Center Report surrounding data center growth and efficiency
  • The challenges around compromises and cooling that come with speed and scale
  • Top 3 List: Know what to ask your vendor
  • Top 3 List: Understand which technologies are helping evolve the efficiency of our industry

Learn more about economization solutions from Data Aire. And discover how we can help you scale your data center at your desired pace.

Read our latest guide, which highlights proven ways to conserve energy in your data center.

Adaptive reuse began as a way to convert classic buildings, either for their charm or to support the economic preservation of historical structures. Today, adaptive reuse has a much more pragmatic purpose: enhancing our physical and digital infrastructure. As data creation and demand explode, owners and developers are finding clever ways to adapt existing structures into data centers.

Join data center leaders and industry innovators as they explore adaptive reuse, what it means for the industry today and its potential for data centers in the future.

Executive Roundtable

Moderator:
Eric Jensen, VP and GM, Data Aire

Panelists:
Craig Deering, Director of Design and Construction, CyrusOne
Mitchell Fonseca, Senior VP, and GM of Data Center Services, Cyxtera Technologies
Michael Silla, Senior VP, Design Construction, Flexential

Eric Jensen: Adaptive reuse is of course not a new topic. It’s prevalent in commercial real estate, as well as residential. It also happens in the data center sector. Some of the ideas around adaptive reuse as it relates specifically to data center spaces are the market drivers. Maybe that’s the geography relative to the end user or the scale of the facility. Maybe it’s the types of facilities that are being considered for adaptive reuse. What makes a good candidate site versus what doesn’t may dictate how we prioritize among the variety of drivers. And then of course, how we execute on adapting a site for reuse as a data center is a consideration.

Let’s start with you Mitch. You mentioned that you were a believer in the future scarcity of viable data center space. So I’m curious, what kinds of data center space do you expect to become scarce, and then what options do you foresee builders considering to resolve that?

What Buildings Are Good Candidates for Adaptive Reuse?

Mitchell Fonseca: As data centers have become larger and more prevalent within different markets, the best space becomes more scarce, particularly in urban areas where we see a lot of legacy telco hotels. To build additional data centers within that space becomes complicated. Finding the right facility that has the right power and the right connectivity is a challenge. So, adaptive reuse becomes more critical as you look at the potential data center footprint, or pieces of land, buildings, warehouses, and office buildings that could potentially be retrofitted. You need to become flexible with what your designs look like. It's definitely something that's top of mind whenever we go out to look at properties. Reuse is more prevalent in urban areas. And from my perspective, a lot of the warehouse types of spaces tend to be more flexible when it comes to data center retrofits. Office buildings are obviously a lot more challenging. When it comes down to it, one of the biggest challenges we run into is clear heights. So you need the right building that has the right clear heights to be able to support your data center environment.

Eric Jensen: That’s one side of serving the end user. But I think Craig, you and CyrusOne maybe come at it from a little bit of a different angle as far as what targeted end user you’re trying to serve. What is the availability looking like for the future of data center spaces and how we might need to adapt existing spaces into data centers?

Craig Deering: At CyrusOne, I have converted two warehouse spaces based on end user requirements, and both of those were mainly driven by speed to market. In those cases, I could avoid a site plan application and was able to move very quickly to building permits. But going back in time, it was actually quite common (particularly based on scale) to adapt office space. Going back to some of my early client base, like Above Net and some of the academic, medical and high performance computing environments that have been successfully converted to data centers, it's almost universally been office space. I see more of that happening in the marketplace as people start talking about workloads needing to move to the edge, primarily to manage the end user experience and also the communication costs. I see it as a growing market, and people looking towards the past to see how some of those problems, like clear height, generator emissions and other things, were solved before we started building these massive hyperscale centers as greenfields.

Why is Manufacturing Speed-to-Market an Important Driver?

Eric Jensen: So Mike, I’m wondering…coming at it from your experience, what are you seeing as the driver for specific targeted audiences who need data center space? Whether it be enterprise or co-location or wholesale, what are you seeing as sweet spots and how do those market dynamics drive the need for reuse of existing space that isn’t currently data center?

Michael Silla: I think speed to market is a big driver. The end user is challenged with capacity planning, right? Technology is growing at such a fast pace and that’s the reason why they’re coming to us and the third party markets to help solve their problems. I think Mitch brought up available properties, scarcity of available properties. Where there’s available power today, plus what the utility could scale to. Going back 10, 15 years, or a little longer, everybody left New York and came to New Jersey. PSE&G was right there to build at capacity for a lot of the data centers that entered the region.
And we watched that follow suit around the country. Power companies know that we’re not going to take all 36 or 72 megawatts on day one. We’re all in the same neighborhoods and they’re rationing out the power and they’re building as they go. It’s a business model, too and the reality is we’re not going to find greenfields. So we have to take a look at adaptive reuse of brownfields. And again, we’re looking to find a viable facility with floor load, ceiling heights — your height slab to slab; what’s the minimum requirement that you can make work. We tend to focus on what is our design of record. And how do we adapt that? Whether it’s greenfield, brownfield, multi-story, or campus environments, we need to build that flexibility into the process, having a standard that we can adapt as we go out.

We all struggle when the broker sits across from us and says we’re going to start on your property search, describe what you need. We have to try and give them some parameters to bring us viable solutions without filling the funnel so big. We look at warehouses and distribution centers to be adapted. We did a couple of chip fab facilities that were able to convert into a data center relatively easily. It’s going to be faster to pull permits on an existing facility that’s there today. Again, speed to market is important.

Greenfields vs Brownfields – Data Center Development Strategies

Craig Deering: When I'm building greenfield, I have to cover a spectrum of expectations and requirements. If someone can really focus their expectations and have a good understanding of what their IT kit is going to look like, they can take advantage of some well-located properties. To the point that when we're doing greenfields now, we're looking at a range of densities to cover a range of customers. There's still a good amount of workload in that 100 to 150 watt per square foot density. And that leads to floor loadings where you can consider some office properties, with the right strategy. So having a strong cleaver about what your core technical requirements are opens up a lot of possibilities in the adaptive reuse market.

Michael Silla: The modularization of our MEP infrastructure makes a big difference as you look at a brownfield and the surrounding property – looking at where you can place that on the outer perimeter of the building. That opens up opportunity to us, when you start to bring that type of infrastructure. Building plants inside of the building complicates it a little more as well.

Craig Deering: If you have a flex office building – a one story with a 40 by 40 structural grid and 16 foot clear, that’s the kind of criteria where you need a strong cleaver, and then you can focus on the relationship between location and proximity to your end users, your fiber resources, and your power resources. Then you can make a pretty quick decision to go forward. At that point the property is fully entitled and you’re just working with building permits, which certainly gets you down into the desired six to seven month delivery time frame.

What Are the Challenges of Converting Office Spaces to Data Centers?

Eric Jensen: So there are a couple of really important considerations there. There's the modularization side of things as it relates to design and selection of site, but then there's also the type of facility. You're talking right now about availability of office as a prospect, and right now there's speculation in the real estate world that there's going to be an increase in the availability of office space as a result of the [COVID-era] environment that we're living in. And so I'm curious for the panel, maybe we can get everybody to weigh in on office space as a consideration. What are the pros and cons? I think we've touched on some of them, but if either connectivity or power availability makes an existing office space seem to be the best available option for you right now, how do you overcome some of those concerns you had when originally thinking, 'well, I'd much rather have a warehouse or an existing manufacturing facility?'

Mitchell Fonseca: I think office spaces are definitely environments that are doable. We see some of the biggest data centers in the world that are converted office spaces. I think office spaces are a lot more challenging because they normally don't have the clear heights required to cool environments to the levels that we need. To Craig's point earlier, if you're looking at more of a retail environment, that's usually going to be a lower density than the hyperscale environments, and retail tends to fit better within that office space environment. It's probably the model for more of an enterprise type of solution. If you have an enterprise that's going to build out their own data center, I think it's more doable within that realm. Once you start getting into high performance computing, whether it's high frequency trading or the newer environments, some of that stuff is not really going to work well within an office environment. That's where you start getting into pretty heavy modifications to the structure of the facility, where it probably wouldn't make sense. It's really about the use case. The workloads you are targeting for that environment are going to dictate whether an office building will work or not.

Eric Jensen: That makes a lot of sense, but it sounds to me like potentially the targeted audience makes a difference for whether office space is truly viable for you. With Flexential in particular, Mike, you're probably sitting in a bit of a unique position in that you're starting to look at both sides of the middle of the road, as far as size of facility and targeted audience. What would tip the scales to prevent you from pulling the trigger on an existing office space?

Michael Silla: The look, the feel, the densities that we’re seeing today make office buildings more difficult to convert. In a previous role I looked at an office building in a dense area, not a lot of great opportunities. It just wasn’t going to work for a data center that focused on the mix of clients at the time.
The other challenge is that we need the slab heights for the design of record. We need that minimum height, and the reality is, I've heard somebody mention 13 feet, but that's tough. We're looking for a lot more height than that. Looking to the future, in a second tier market where folks are trying to get close to the edge and you're in urban areas, it's going to get tougher to find a property. So, depending upon the business case and the future, you would try and make that work. But right now, where we're at today, that would be a tough one, trying to make the design of record work and trying to keep it standard.

But as time goes on and the data center market continues to grow, and properties become scarce…the edge comes about. So I think that’s the way we need to look at it.

You can’t rule anything out. And this is what I tell my team, if something doesn’t work today, okay, we park it over in the box here, but you never know when you’re going to go back in and revisit that. So you may wind up back to the future.

Eric Jensen: So really what you’re doing is you’re prioritizing the site over the facility itself, of course. You’re basing decisions on the primary drivers, geography, connectivity, power availability — things of that nature.

Michael Silla: If it’s the right price, it’s worth tearing down.

Modularity as a Component of Data Center Design

Eric Jensen: And so I’m curious, you had mentioned earlier that modularity is a component of the design. Can you touch on how you incorporate modularity and its place in adaptive reuse?

Michael Silla: When you look at modularity, it’s a term that’s widely used, and we’ve seen everything from fully modularized data centers to servers in a box. We like to modularize our components as much as possible. Consider a data center kind of as a product, like a vehicle coming down the assembly line. You need wheels, a steering wheel, a radio. And they’re all made in factories elsewhere and then shipped and bolted on. So if you think about our electrical and mechanical infrastructure as we have designs of record, and we have standard blocks of infrastructure, you can prepackage that equipment in a factory. Then it’s shipped as needed to a location.

And for all intents and purposes, it's bolted on or assembled to the box, the box being the data center. So your building is your data center, and your infrastructure sits on the outside, or is skidded to the interior of the data center but sits on the outer perimeter. Longevity means looking at the life cycles of data centers; we've been through multiple generations where the rush was to get product to market. You build it a certain way, and we're finding that as you go back to do upgrades on those facilities, it's a little tougher, an invasive open heart surgery. Whereas with our cooling units, if you have a 750 kW unit you can remove it, and if you need a 1,200 kW unit you replace it with that; the same goes for the rest of your infrastructure.

It's easier to adapt the facility long-term when your infrastructure is sitting on the perimeter of the facility versus trying to do open heart surgery inside. And, you know, we think about that when we're in design today. When you're approaching your concept, you hear everything from 'Hey, my operator's going to go in there every morning. How does he park his car, walk into the building, clear security and go to work? Sales brings prospects in and walks them through the facility. Eventually those prospects become clients. How do they go in and function? How do our equipment vendors come and do maintenance on the equipment? How does the fuel truck deliver fuel to the site?' We've put a lot more thought over the past couple of years into the future of these facilities, because some of the facilities that we've built are limited to maybe retail because of the characteristics of the envelope and the ceiling heights and the floor heights. But as we've moved to the more dynamic data centers at the higher densities that we operate today, we have people operating at low to medium density and extreme high density within the same environment. And so we have to put a lot more thought into that as we design the modularized components.

Eric Jensen: So future-proofing, of course, is the panacea. You’ve got to be able to see the future in order to do that. It’s certainly no easy task. But I think modularity has a place, whether you’re thinking about a containerized solution, or a power or mechanical skid of some kind. Craig, I’m wondering, is there also potential to use modularity to go vertical?

Craig Deering: Of course there is; we’re doing it. We are doing it with our designs in Europe, but those are greenfields. I have looked at parking garages in urban locations and solved that problem. I’ve also looked at three to five story suburban office buildings and in order to go vertical, we’ve gotten away from talking about modules. We talk about provisioning and provisioning towards end user density. When we laid out my last project, which was a warehouse conversion, it was on a provisioning range depending on whether it went enterprise or hyperscale — because you’re talking about a hundred watt per square foot swing, or even more between the two users. So on that site, depending on how the building sells and gets provisioned, I can go up to about 24 megawatts and 250 watts per square foot.

Based on our topology, I think we're somewhere around 16 megawatts at 150 watts per square foot. Plug and play flexibility is key. We look at a building like a glider kit, and if you know what a glider kit is, you know it's a car you buy that comes with no power train, and you put in whatever power train you want; that's sort of the concept that you can use with adaptive reuse. And it's also great for scaling in a user. We have a lot of high volume users, but they do still ramp in. One of the advantages we've had in doing adaptive reuse is that we can get an end user in very quickly, at very low cost for that initial deployment. And this is the advantage of not having a site plan to file in that time frame. Through a series of incremental building permits in adaptive reuse, we stage in all the capacity.

A very effective prototype when we're doing adaptive reuse, whether it's retail, warehouse, office or a single-story building, is one where I have the right setbacks to develop the yard space for all of the chillers (we use air-cooled chillers) and all of the generators on grade. It's a very effective delivery method. Because I have this space around the building, as Mike says, I can just stack the capacity on an as-needed basis.

Is Designing Data Centers to Support 5G Latency Important?

Eric Jensen: Do any of you have smaller urban data centers in design right now to support evolving 5G latency issues? Or are you already starting to build them?

Craig Deering: I'm not aware of anything we're doing in response to 5G latency. We do have urban data centers, but some of those are legacy facilities, and there's nothing in my region that I can speak to.

Mitchell Fonseca: We have a number of urban data centers, but we’re not really building or currently planning for an Edge use case that’s more specific to 5G.

Eric Jensen: Typically, by what factor does power demand increase when you convert an asset such as an office building to a data center, 5X, 10X or more?

Mitchell Fonseca: It’s usually significantly more than that. When we’ve had to convert buildings, we’re normally stripping out the entire power infrastructure and transformers and everything, and kind of rebuilding those from scratch. I don’t know if anybody else has a different experience, but it’s normally significantly more than 10X.

Craig Deering: So typically, if you’re picking up an office block, you’re going to be three to five megawatts on a service and you’re going up 20, 30, 40, 50, 60, depending on scale. The smallest facility I have is probably a 12 megawatt facility and we up the service from 2MVA to about 19MVA on day one to build that.

I would say it’s easily 10X on a small facility, and if it’s a good property and you’re using the existing facility as a core, you could be looking at 20 to 30X — depending on your ultimate development plan.
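To put rough numbers on those multipliers, here is a quick back-of-the-envelope sketch in Python. The figures are only the illustrative ranges mentioned above, not project data, and the helper function is hypothetical:

```python
# Rough illustration of the service-upgrade multipliers implied by the ranges
# discussed above (not actual project data).

def upgrade_multiplier(existing: float, required: float) -> float:
    """Return how many times larger the new utility service must be."""
    return required / existing

# Example from the discussion: a 2 MVA office service upgraded to ~19 MVA on day one.
print(f"Small facility: {upgrade_multiplier(2, 19):.1f}x")  # ~9.5x

# A typical office block (3-5 MW service) taken to a 20-60 MW ultimate build-out.
for existing_mw, ultimate_mw in [(5, 20), (5, 60), (3, 60)]:
    print(f"{existing_mw} MW -> {ultimate_mw} MW: "
          f"{upgrade_multiplier(existing_mw, ultimate_mw):.0f}x")
```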

What is the Impact of Existing Power Supply or Infrastructure on Data Center Conversion?

Eric Jensen: How much does existing power supply or the building structure impact the possibility of the conversion to a data center? For example, let’s take a life science building that has a larger power supply versus an office building; does that matter much? Or if you need to be in a specific geography, are you just going to build it out?

Michael Silla: The big key is working with the power company to see what they can actually get you. Because at the end of the day, we’re selling power, right? If we’re doing our jobs well, we’re going to have excess space, but we’re going to run out of power first and that’s the game that we’re in.

Craig Deering: Yes, as far as that goes, a question we ask when we’re looking at an existing facility is: what’s at the street? That’s a question for the power company, and then, what’s at the nearest substation? Because we’re typically looking five to 10 years out, or ramping into an ultimate power load at five to seven years. We want to get at least 20 to 30 megawatts on day one in order to build out the first section, and then you’re looking at 60 to 100 by ultimate. And most end users are now comfortable with sourcing from one substation. It’s the rare customer that is asking us for diverse substation feeds. Data centers don’t need to be the fortresses they used to be 25, 30 years ago, because the resiliency is now in the network and the information. That’s how it’s managed; it’s not in the facility.

Michael Silla: It’s rare that we have an RFI or client looking for that, and when you start asking, well, why are you looking for this? They always point to the uptime requirements, and you say, well, even that has been relaxed if you actually read it. It’s a matter of having that conversation. It’s probably just something that’s been on an RFP that’s been floating around for two decades.

Eric Jensen: I think the question centered around life sciences as an example. I think you have experience in converting chip fab, Mike. Is that right?

Michael Silla: Yes. Life sciences or other industrial spaces are definitely viable options, and there is power at the street with those facilities. But then again, when you start looking at facilities of that size, we’re going to want a 36-megawatt or larger future capacity in there.

Craig Deering: Let me just add one thing, though, because it’s important that everybody understands: if you’re adapting a building, I don’t care how big the power service is, you’re not keeping the switchboard. You’re not keeping any of that source material, because it just doesn’t work for a data center use; it’s not set up correctly. There’s never going to be a situation where you’re adapting an existing incoming service. You’re going to originate it back out at the street, and you’re going to be looking for a property that has a substation with a double-ended connection to the transmission system. Those are the key things to look for.

How Is Airflow Management Key to Operating Data Centers?

Eric Jensen: As an example of power challenges, Data Aire saw a lot around the One Wilshire project we did in Los Angeles. Power utilization was the primary driver for modernizing and centralizing the whole power and cooling infrastructure there, partly because it had slowly evolved over the decades, which is what you’re going to find in any legacy office space of any size and age. For anyone reading, adaptive reuse was described earlier as open heart surgery. That’s not a mischaracterization, but there are also a lot of people alive today because of open heart surgery. We’ve seen plenty of folks who need to go into office or warehouse space that is strategically located at the smaller scales, meaning medium to medium/small facilities where airflow management requires substantial consideration. Airflow management becomes the number one thing you have to think about, which is really what the gentlemen here have been talking about with the importance of those clear heights. Some additional thoughts: if you’re not going to get the clear heights, then it’s critical that you really pay attention to how you are managing your airflow. Are you going to get the delivery of cold air where you need it to go? And how are you routing everything, whether it’s piping or the layout of the infrastructure relative to the ITE?

Michael Silla: And to add on, number one, make sure your operations team is in the room during design and CFD analysis — and model not the perfect environment, but the type of environment you’ll actually be operating in; we all operate in imperfect environments. Doing that during the conceptual stage will help you, because airflow management is the key to operating the data center. We will always be managing that airflow, and that’s the key to success; it's a very important facet for our operators.

Eric Jensen: I think it’s a mindset — thinking about it as part of a pre-commissioning activity. Run through a couple of what-if scenarios: what if I didn’t have that? Or what if that occurred?

Craig Deering: There’s a user called Power Loft that had an interesting concept, which I’ve looked at adapting for office use: all of the air handling equipment was on one floor, the data center was on the floor above, and everything was supplied from below. So if you’re dealing with low clear heights and you’re going to stay with an air-based system, you can get very creative, particularly with the fan wall designs a lot of people use now, and with using space creatively to move large bodies of air. And of course, if you’re going with more of a liquid approach (the barge data centers that were just posted about come to mind), you can certainly use that cooling concept in an office where you simply make the decision that you’re going to do rear-door heat exchangers universally, or in-row cooling.

And that opens up a lot of adaptive reuse opportunities. You can actually mix those with space cooling, which is something I used to do a lot 35 years ago to address a comprehensive cooling scenario. So find yourself a creative engineer, be laser focused on what your operating parameters are going to be, and I think you can go out there and find a lot of buildings in good locations at good prices that can work for your needs.

Mitchell Fonseca: I would add that the biggest challenge you mentioned there is: at what point does it stop being economically feasible? We have amazing engineers in the data center world, and some of the stuff we can do is pretty mind blowing. The challenge is how much you are willing to spend to make that specific building usable. It’s always doable, right? It’s just, how much does it cost you? So there’s always a fine line between, is it economical and is it doable? In a lot of cases, when you start talking about limited clear heights is where you start getting into really unique cooling to make it work. It’s just, is it really worth it? There might be a specific reason why you have to stay in that building and that structure. And again, it is doable.

What’s Driving Data Aire to Provide More Efficient and Flexible Precision Cooling Systems to the Market?

Applications Engineering Manager, Dan McInnis, answers this and other important questions.

What is an HVAC economizer?

An HVAC economizer is a device used to reduce energy consumption. It typically works in concert with an air conditioner, and together they minimize power usage. During the cooler months of the year, in many locations, the outdoor ambient air is cooler than the air in the building. Economization is accomplished by taking advantage of that temperature difference between indoor and outdoor ambient conditions, rather than running compressors to provide the cooling.
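As a rough illustration of that decision, here is a minimal Python sketch of dry-bulb economizer logic. The temperatures, high limit and function name are illustrative assumptions, not Data Aire control logic; real controls also consider humidity or enthalpy, lockouts and damper modulation:

```python
# Minimal sketch of dry-bulb economizer logic (hypothetical thresholds).

def cooling_mode(outdoor_temp_f: float, return_air_temp_f: float,
                 high_limit_f: float = 65.0) -> str:
    """Pick a cooling mode from a simple dry-bulb comparison."""
    if outdoor_temp_f < return_air_temp_f and outdoor_temp_f <= high_limit_f:
        # Outdoor air is cool enough: open dampers and rest the compressors.
        return "economizer"
    # Otherwise fall back to mechanical cooling.
    return "compressor"

print(cooling_mode(outdoor_temp_f=48, return_air_temp_f=75))  # economizer
print(cooling_mode(outdoor_temp_f=82, return_air_temp_f=75))  # compressor
```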

What is the difference between an airside and a waterside economizer?

An Airside Economizer brings cool air from outdoors into a building and distributes it to the servers. Instead of being re-circulated and cooled, the exhaust air from the servers is simply directed outside. For data centers with water- or air-cooled chilled water plants, a Waterside Economizer uses the evaporative cooling capacity of a cooling tower to produce chilled water and can be used instead of the chiller during the winter months.

Why are HVAC economizer solutions more important than ever?

Considering rising energy costs, HVAC economizer solutions have become a primary concern for the mechanical engineers and data center managers I speak with. They must consider energy availability, especially from urban utility providers. Likewise, they need to weigh how much money can be saved against how much energy is being consumed.

In addition, I’m frequently asked about changing state codes and requirements, which force engineers to examine their application designs to ensure they meet current standards. It’s become apparent, from the interactions Data Aire has, that our customers are seeking an efficient precision cooling system that can greatly reduce their total cost of ownership.

When are HVAC economizers effective and what should specifying engineers take note of when choosing between an airside or waterside economizer or a pumped refrigerant option?

During the cooler months of the year, in many locations, the outdoor ambient air is cooler than the air in the building. You can accomplish economization by taking advantage of that temperature difference between indoor and outdoor ambient conditions, rather than running compressors to provide the cooling. Airside economization can be accomplished directly by pulling that cool or dry air straight into the building, which is the simplest and most efficient option in many cases.

Waterside economization is an indirect method: it pulls cool water from a cooling tower or dry cooler that is cooled by the outdoor air and runs that water through coils inside the HVAC units in the building. Pumped refrigerant also takes advantage of the temperature difference by running a low-pressure refrigerant pump rather than a compressor, since the pump consumes less energy; this solution is less efficient than waterside economization because it relies on refrigerant-based rather than water-based heat transfer.
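To put the three approaches side by side, here is a very coarse selection sketch in Python. The thresholds and function name are illustrative assumptions only, not a Data Aire or vendor algorithm; real designs depend on climate data, humidity and the approach temperatures of towers, dry coolers and coils:

```python
# Simplified comparison of the economization approaches described above.
# Thresholds are illustrative only.

def economizer_choice(outdoor_db_f: float, supply_setpoint_f: float) -> str:
    """Coarse mode selection among airside, indirect and mechanical cooling."""
    if outdoor_db_f <= supply_setpoint_f - 2:
        return "airside: bring filtered outdoor air straight to the servers"
    if outdoor_db_f <= supply_setpoint_f + 10:
        return "waterside / pumped refrigerant: indirect free cooling via coils"
    return "mechanical: run compressors or chillers"

for t in (45, 62, 85):
    print(t, "->", economizer_choice(t, supply_setpoint_f=65))
```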

What examples can you provide that show waterside economization to be efficient across different climate zones?

An example of the efficiency gained from a waterside economizer can be seen with Data Aire’s gForce Ultra, which provides full or partial economization for 68% of the year in a dry climate such as Phoenix, for 75% of the year in a humid climate like Ashburn, Va., and for 98% of the year in a dry, subtropical climate such as Los Angeles.
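For a quick sense of what those percentages mean in hours, here is a short calculation that simply converts the figures quoted above into approximate annual economizer hours:

```python
# Convert the quoted economization percentages into approximate hours per year.
HOURS_PER_YEAR = 8760

for city, fraction in {"Phoenix": 0.68, "Ashburn": 0.75, "Los Angeles": 0.98}.items():
    print(f"{city}: ~{fraction * HOURS_PER_YEAR:,.0f} economizer hours per year")
# Phoenix: ~5,957 h, Ashburn: ~6,570 h, Los Angeles: ~8,585 h
```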

What new or existing requirements are affecting economization considerations?

Industry standards are under continuous maintenance, with numerous energy-saving measures being introduced regularly. ASHRAE Standard 90.1 outlines economizer requirements for new buildings, additions to existing buildings and alterations to HVAC systems in existing buildings. For each cooling system, an airside economizer or fluid economizer is required; exceptions exist and are outlined in Standard 90.1. When airside economizers are in place, they must be able to provide up to 100% of the supply air as outdoor air for cooling. Fluid economizers must be able to provide 100% of the cooling load when outdoor conditions are below a specific range.
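As a reading aid for those two capability requirements, here is a minimal Python sketch with hypothetical field names. It is not a compliance tool; the actual exceptions and outdoor-condition limits are defined in Standard 90.1 itself:

```python
# Illustrative check of the two 90.1 economizer capabilities summarized above.
from dataclasses import dataclass

@dataclass
class CoolingSystemDesign:
    economizer_type: str             # "airside", "fluid" or "none"
    max_outdoor_air_fraction: float  # share of supply air deliverable as outdoor air
    fluid_econ_load_fraction: float  # share of cooling load the fluid economizer can carry

def meets_basic_capability(d: CoolingSystemDesign) -> bool:
    if d.economizer_type == "airside":
        return d.max_outdoor_air_fraction >= 1.0   # up to 100% outdoor air for cooling
    if d.economizer_type == "fluid":
        return d.fluid_econ_load_fraction >= 1.0   # 100% of load below the trigger conditions
    return False  # no economizer: allowed only under the standard's listed exceptions

print(meets_basic_capability(
    CoolingSystemDesign("airside", max_outdoor_air_fraction=1.0,
                        fluid_econ_load_fraction=0.0)))  # True
```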

Other notable changes include updated climate zone classifications from ASHRAE Standard 169, mandatory requirements for equipment replacements or alterations that include economization, and integrated economizer control and fault detection in direct expansion equipment.

Another important standard is California’s Title 24 energy standard, which has additional requirements for code compliance on both airside and waterside economizers. In addition to standards, numerous technical committees provide recommendations that are beneficial to performance. ASHRAE TC 9.9 is a technical committee that provides guidelines with updated envelopes for temperature and humidity class ratings; these updates are based on improved equipment ratings.

As Applications Engineering Manager at Data Aire, Dan oversees a team that reviews and modifies HVAC designs submitted to the company by specifying engineers. In addition, he approves, releases and manages CAD drawings for mechanical, refrigeration and piping projects.

Exploring the Growing Relationship – Prefab/Modular Data Centers and Precision Cooling

They say marriage is a partnership – one built on trust, flexibility and shared goals. If that’s the case, precision cooling manufacturers and prefab data center or power module designers are a perfect match.

Together, these companies look for ways to seamlessly marry their solutions to meet the goals of the end user. They look to current technology advancements to help guide their strategy recommendations, as well as rely on the tried and true solutions that have supported the data center industry for years.

Let’s start with why some companies choose prefab modular systems. Sometimes geography dictates it, or an owner’s data center strategy lends itself to prefabrication. End users sometimes find it more convenient to drop in a modular data center, because shifting highly complex mechanical projects from the construction site to a controlled production environment can be more cost efficient, safer and faster. So, in these instances, building owners seek manufacturers of modular systems who focus on using all available white space for rack capacity and then collaborate with precision cooling experts like Data Aire.

What’s Driving the Increased Trend Toward Prefab Modular Solutions?

One trend we see is increasing 4G penetration and an upcoming 5G wave, which is further motivating telecom vendors to invest in the modular data center market. Hyperscalers are deploying large, multi-megawatt modular solutions, while others are deploying a single cabinet (or smaller) at the cell tower for 5G. With the growing number of connected devices, the distribution of high-speed data must get closer and closer to those devices at the Edge. That challenge is well suited for modular solutions. One such solution provider is Baselayer/IE.

Keep in mind, the build-to-suit trend for modular solutions shifts significant portions of the project scope to the factory environment while allowing the site construction scope to run in parallel. The build plan dictates that precision cooling design and installation sit at the front end of the build cycle, so it’s important to source a manufacturer that can develop build-to-suit solutions with speed to market. As an example: “When Baselayer/IE cut its 180-day build cycle down to a mere 90 days, it needed a partner who could make that transition with it. Data Aire has been able to build systems for our custom data centers faster than installation required,” according to Mark Walters, Director of Supply Chain and Logistics for IE.

More Industry Drivers

Speed to market is not the only driver for implementing prefab modular data centers. Other primary drivers are flexibility in design/capacity, scalability, standardization, IT equipment lifecycles, and trying to stay ahead of the exponential growth in technology. Prefabricated and modular solutions can be scaled up in size “as and when” necessary, which allows operators to stage Capex investment over time. This also avoids the risk of construction projects taking place in “live” data centers. In order to streamline and improve operational efficiency, many operators are looking to standardize their data center portfolios. Prefabricated and modular solutions offer operators a common platform across their portfolios.

It is difficult to build mission critical facilities for technology that has not yet been invented. Because tech life cycles can be as rapid as 18 months, an adaptable answer is a custom-designed, prefabricated and modular design. The growing use of internet services and connected devices driven by AI, IoT, cloud services and the like is accelerating demand for smaller data centers at the edge, which is ideal for prefabricated and modular solutions.

Prefab/Modular Data Center Options

While there is a spectrum of prefab/modular solutions, one size or shape doesn’t fit all. ISO shipping containers can provide a readily available, cost effective shell; however, they have fixed designs and are space-constrained — potentially limiting the number of racks that can be deployed. Depending on the end use, ISO containers may also require significant modifications for proper environmental management.

Where ISO containers don’t meet the needs, purpose-built modules like those from Baselayer/IE, afford operators adequate space to maintain or swap equipment within the racks and manage the environment. These module-based solutions are more flexible and can be combined to deliver infinitely configurable open white space.

Scaling to Data Center Customers’ Needs

No matter the space in question, it’s important to sit down with customers and make them part of the planning equation — discussing their current and future density requirements as well as cooling strategies (whether chilled water, economizers or multiple CRAC units). It’s about partnering and understanding short- and long-term goals, and making sure to provide maintainable solutions for the end user.

And today, we’re living in an interesting time, when data centers (in the US and other parts of the world) are now considered essential businesses by governments. Being able to adapt technology to the ever-growing needs of data center owners is driving manufacturers to be more agile and develop scalable, built-to-suit solutions. The flexibility of design is imperative to provide the customer exactly what they need, whether for the space, the critical infrastructure, or the IT architecture. And when it comes to modular/prefab designs, prioritizing cooling strategies has become a crucial piece of the puzzle.

 

About Baselayer/IE

With more than 200 MW of modular capacity currently deployed, paired with four engineering, manufacturing and testing facilities across the United States, Baselayer/IE Corporation is emerging as an industry leader. We design, engineer and manufacture turnkey modular solutions entirely in house. By controlling the entire process, we quickly adapt to the daily challenges inherent to large-scale construction projects and achieve our customers’ aggressive deadlines.