
Latest Data Aire Inc. news

Data Aire Shares How To Boost Data Center Cooling ROI With Precise Load Matching

One thing is certain: optimal data center design is a complex puzzle to solve. With all the options available, no single environmental control system fits every situation. You need to consider all of the solutions and technology available to best manage your assets and adapt to the evolving data center. There is a precision cooling system for whatever scenario best fits your current strategy or future goals. The only question that remains is whether you have considered each of the options with your design engineer and the environmental control manufacturer; the two need to be in sync to help maximize your return on investment. In most instances, if you want an environmental control system that scales with your needs, delivers the lowest energy costs, and provides the most reliable airflow throughout the data center, a variable-speed system is the best solution. Nevertheless, you may be curious about which other options might suit your current application.

Precise Modulated Cooling

Companies need to decide on their strategy and design for it. When you know you have swings in the load (seasonal, day to day, or even from one corner of the data center or electrical room to the other), you should consider variable speed technology. A system with variable speed technology and accurate control design modulates to precisely match the current cooling load. This precision gives variable speed the highest efficiency at part load, which equates to a greater return on investment. In other words, when the data center is not running at maximum cooling load, a variable speed system will use less energy and save money. Think of the cooling output of the environmental control system as the accelerator of a car: you can press the pedal to almost infinite positions to exactly match the speed you want to travel, so no energy is wasted overshooting the desired speed. A well-designed control system also ensures a smooth response to a change in load, and further efficiency is gained by accelerating at a rate that is efficient for the system.

Advanced Staged Cooling

If you are looking for something that offers a portion of the benefits of a variable speed system at a reduced first cost, a multi-stage cooling system can be a good compromise. A multi-stage system will manage some applications well and can reduce overcooling of the space as built today. If you need greater turndown than a fixed speed system offers, this is a good choice. If you find this to be the right-now solution, you are in good hands: the system is more advanced than a fixed speed unit, and it is developed with a level of design optimization to transition through its small steps. Unlike digital scroll, this solution, with two-stage compressors, has high part-load efficiency. Think about the car accelerator example again: a multi-speed system gives you many accelerator positions to choose from. With two-stage compressors the positions are precise and repeatable, meaning the controls can change positions intelligently to prevent overshoot, and you are more likely to have a position that matches the speed you want. Although the return on investment is better with a multi-stage system than with a fixed-speed system, the benefits are less than with a variable speed system.
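To make the part-load comparison concrete, here is a minimal, purely illustrative Python sketch of how fixed-speed, two-stage, and variable-speed systems might translate a part-load cooling demand into delivered capacity. The stage sizes and minimum turndown used here are assumptions chosen only to show the overcooling effect, not specifications of any particular product.

```python
# Illustrative comparison of how three control approaches meet a part-load demand.
# Stage sizes and the minimum variable-speed turndown are assumptions for
# illustration only, not specifications of any particular product.

def fixed_speed_output(demand_pct: float) -> float:
    """A single-stage unit is either off or at 100% capacity."""
    return 100.0 if demand_pct > 0 else 0.0

def two_stage_output(demand_pct: float) -> float:
    """A two-stage compressor runs at an assumed 50% or 100% step."""
    if demand_pct <= 0:
        return 0.0
    return 50.0 if demand_pct <= 50 else 100.0

def variable_speed_output(demand_pct: float, min_turndown_pct: float = 20.0) -> float:
    """A variable-speed unit modulates to match demand, down to an assumed minimum."""
    if demand_pct <= 0:
        return 0.0
    return max(min_turndown_pct, min(demand_pct, 100.0))

if __name__ == "__main__":
    demand = 40.0  # data center running at 40% of design cooling load
    for name, fn in [("fixed", fixed_speed_output),
                     ("two-stage", two_stage_output),
                     ("variable", variable_speed_output)]:
        out = fn(demand)
        print(f"{name:10s} delivers {out:5.1f}% capacity "
              f"({out - demand:+.1f} points of overcooling)")
```

At a 40% load, the fixed-speed unit in this sketch overshoots by 60 points, the two-stage unit by 10, and the variable-speed unit matches the load, which is where the part-load energy savings come from.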
Fixed-Speed Systems

Some consider fixed-speed systems the entry point for precision cooling based on their current budget constraints. So, if you are on a tight budget and need a lower first cost, a fixed-speed, single-stage precision cooling system may get the job done. However, this can be short-sighted, because energy consumption and costs are higher whenever the data center operates at less than its maximum designed cooling load. In practice, this happens quite frequently, given the gap between what the mechanical engineer was asked to design and the actual heat load of the space. Applying the car accelerator example to a fixed-speed system, having only 100% throttle or 0% throttle prevents you from holding a precise speed. This is clearly not as efficient as the other approaches unless you want to travel at the car's maximum speed all the time.

Ramping Up the Data Center

The needs and goals of a data center change over time. While the initial objective may only require getting the space in running order, customers may reassess based on changing scenarios. If the data center needs to scale, you may be challenged if you haven't planned ahead with your design engineer for phased buildouts, or for IT loads that vary seasonally, from day to day, or even hour to hour. Likewise, you may need to consider the difference between design and actual usage, whether it is too little or too much. Perhaps the IT team says they need two megawatts, or that they are going to be running at 16 kW per rack. The cooling system as designed may underserve those needs, or it may be overkill for the current state of usage. In addition, pushing a system to do more than it is engineered for can accelerate the aging of the infrastructure. Again, depending on your application, goals, and business strategy, one of these three systems is right for you. The best course of action is to evaluate where you are today and then future-proof the data center with technology that can grow with you.
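As a worked example of the design-versus-actual gap described above, here is a short Python sketch using the figures mentioned in this article (2 MW of IT load at 16 kW per rack); the 50% phase-one buildout is an assumed scenario for illustration.

```python
# Worked example of design load vs. actual load, using figures from the article.
# The 50% phase-one buildout is an assumed scenario for illustration.

design_it_load_kw = 2000.0   # "the IT team says they need two megawatts"
rack_density_kw = 16.0       # "running at 16 kW per rack"

racks_at_full_buildout = design_it_load_kw / rack_density_kw   # 125 racks
phase_one_fraction = 0.5                                        # assumed
actual_load_kw = design_it_load_kw * phase_one_fraction

part_load_ratio = actual_load_kw / design_it_load_kw
print(f"Full buildout: {racks_at_full_buildout:.0f} racks, {design_it_load_kw:.0f} kW")
print(f"Phase one:     {actual_load_kw:.0f} kW "
      f"({part_load_ratio:.0%} of design cooling load)")
```

If cooling is sized for the full 2 MW but phase one deploys only half the racks, the plant spends its early life at roughly 50% load, exactly the region where turndown and part-load efficiency determine the return on investment.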

Data Aire Cooling For High Density Data Centers

Few data centers live in a world of "high" density, a number that is a moving target, but many are moving to higher density environments. Owners of higher density data centers often aren't aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity, when they would be better served by a rifle-shot approach: understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency. So how do you invest in today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low-pressure areas, reduce costs, save on floor space, and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density Is Growing

Data centers are growing increasingly dense, creating unprecedented cooling challenges, and that trend will undoubtedly continue. The Uptime Institute's 2020 Data Center Survey found that average server density per rack has more than tripled over the last nine years, from 2.4 kW to 8.4 kW. While still within the safe zone of most conventional cooling equipment, the trend is clearly toward equipment running hotter, accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands 40 to 50 kW per rack.

Dedicated Cooling Strategies

No two data centers are alike, and there is no one-size-fits-all cooling solution. For the sake of discussion, let's focus on the data centers that are, or may soon be, in the 8.4 to 16 kW per rack range. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside it. In fact, "Overprovisioning of power/cooling is probably more common issue than under provisioning due to rising rack densities," the Uptime survey asserted. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date. Equipment in the 8 to 16 kW range can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hotspots to emerge. Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack outfitted with multiple graphics processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakage that prevents conditioned air from reaching the top of the rack. Unused vertical space can allow hot exhaust to feed back into the equipment's intake, causing heat to build up and threatening equipment integrity.
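To illustrate why rising rack densities strain room-level cooling, here is a rough Python sketch that converts rack heat load into the airflow needed to carry it away, using the standard sensible-heat relation for air (Q in BTU/hr ≈ 1.08 × CFM × ΔT in °F). The 20°F supply-to-return temperature rise is an assumed value for illustration, not a design recommendation.

```python
# Rough airflow required per rack to remove a given heat load.
# Uses the standard sensible-heat relation Q[BTU/hr] = 1.08 * CFM * dT[F].
# The 20 F air temperature rise across the rack is an assumption for illustration.

BTU_PER_HR_PER_KW = 3412.0

def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry rack_kw of heat at delta_t_f rise."""
    return rack_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

for rack_kw in (2.4, 8.4, 16.0, 50.0):   # densities cited in the article
    print(f"{rack_kw:5.1f} kW rack -> ~{required_cfm(rack_kw):,.0f} CFM")
```

Moving from 2.4 kW to 8.4 kW per rack roughly triples the airflow each rack needs, which is why leaky floor openings, blocked exhaust paths, and uncontained aisles become much more costly as density rises.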
For all these reasons, higher-density equipment is not well served by a standard computer room air conditioning (CRAC) unit. Variable speed direct expansion CRAC equipment, like gForce Ultra, scales up and down gracefully to meet demand. This not only saves money but minimizes power surges that can cause downtime. Continuous monitoring should be put in place, using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting, and cooling should be integrated into the building-wide environmental monitoring systems.

There's a better approach to specifying data center equipment: build cooling plans into the design earlier. Alternating "hot" and "cold" aisles can be created, with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front discharge, upflow, or downflow ventilation can prevent heat from being inadvertently recirculated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned, to avoid loss of cooling. Thinking through cooling needs early in the design stage for higher-density data centers avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today's needs but those five and ten years from now. Modular, variable-capacity systems can scale and grow as needed. The earlier data center owners involve their cooling providers in their design decisions, the more they'll save from engineered-to-order solutions and the less risk they'll have of unpleasant surprises down the road.
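One common way to flag critical events without triggering unnecessary firefighting is to alarm with a deadband (hysteresis), so a sensor hovering near its threshold does not generate a flood of alerts. The sketch below is a minimal, generic example; the trip and clear temperatures are illustrative assumptions, not recommended setpoints.

```python
# A minimal hysteresis (deadband) alarm sketch for rack inlet temperature sensors.
# Trip and clear thresholds are illustrative assumptions, not recommended setpoints.

class InletTempAlarm:
    def __init__(self, trip_c: float = 32.0, clear_c: float = 29.0):
        self.trip_c = trip_c      # raise the alarm above this temperature
        self.clear_c = clear_c    # clear it only once temperature falls below this
        self.active = False

    def update(self, temp_c: float) -> bool:
        """Return True while the alarm is active for the latest reading."""
        if not self.active and temp_c >= self.trip_c:
            self.active = True
        elif self.active and temp_c <= self.clear_c:
            self.active = False
        return self.active

alarm = InletTempAlarm()
for reading in (27.5, 31.9, 32.4, 31.5, 30.2, 28.8):
    print(reading, "alarm" if alarm.update(reading) else "ok")
```

Because the alarm clears only after the temperature drops well below the trip point, a sensor oscillating around the threshold produces one sustained event instead of a stream of on/off notifications.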

How to Use ASHRAE Data Center Cooling Standards

If you have ever done anything even remotely related to HVAC, you have probably encountered ASHRAE at some point. The American Society of Heating, Refrigerating and Air-Conditioning Engineers is a widely influential organization that sets all sorts of industry guidelines. Though you don't technically have to follow ASHRAE standards, doing so can make your systems a lot more effective and energy efficient. This guide covers the basics so you can make sure your data centers get appropriate cooling.

ASHRAE Equipment Classes

One of the key parts of the ASHRAE data center cooling standards is the equipment classes. All basic IT equipment is divided into classes based on what the equipment is and how it should run. If you have encountered ASHRAE standards before, you may already know a little about these classes. However, they have been updated recently, so it's a good idea to go over them again. These classes are defined in ASHRAE TC 9.9.

A1: Enterprise servers and other storage products. A1 equipment requires the strictest level of environmental control.
A2: General volume servers, storage products, personal computers, and workstations.
A3: Fairly similar to A2, covering many personal computers, private workstations, and volume servers; however, A3 equipment can withstand a far broader range of temperatures.
A4: The broadest range of allowable temperatures. It applies to certain types of IT equipment such as personal computers, storage products, workstations, and volume servers.

Recommended Temperature and Humidity

The primary purpose of the ASHRAE classes is to determine what operating conditions equipment needs. Once you use ASHRAE resources to find the right class for a specific product, you just need to ensure the server room climate meets those needs. First, the server room's overall temperature needs to meet the ASHRAE standard for its class. ASHRAE recommends that equipment be kept between 18°C and 27°C when possible, but each class has a much broader allowable operating range:[1]

A1: Operating temperatures of 15°C (59°F) to 32°C (89.6°F).
A2: Operating temperatures of 10°C (50°F) to 35°C (95°F).
A3: Operating temperatures of 5°C (41°F) to 40°C (104°F).
A4: Operating temperatures of 5°C (41°F) to 45°C (113°F).[1]

You also need to pay close attention to humidity, which is a little more complex to measure than temperature. Technicians will need to look at both the dew point, the temperature at which the air is saturated, and the relative humidity, the percentage of saturation at a given temperature. Humidity standards for the ASHRAE classes are as follows:

A1: Maximum dew point of 17°C (62.6°F); relative humidity between 20% and 80%.
A2: Maximum dew point of 21°C (69.8°F); relative humidity between 20% and 80%.
A3: Maximum dew point of 24°C (75.2°F); relative humidity between 8% and 85%.
A4: Maximum dew point of 24°C (75.2°F); relative humidity between 8% and 90%.

A quick way to sanity-check a room against these ranges is sketched below.
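The following minimal Python sketch checks whether a measured room condition falls inside a class's allowable temperature, dew point, and relative humidity limits as quoted above. The function and data-structure names are hypothetical, and the table simply transcribes this article's figures; refer to the ASHRAE TC 9.9 documents themselves for the authoritative values and any updates.

```python
# Minimal check of a measured room condition against the ASHRAE class limits
# quoted in this article. Names and structure are illustrative only; refer to
# ASHRAE TC 9.9 for the authoritative values and any updates.

ALLOWABLE = {
    # class: (temp_min_c, temp_max_c, max_dew_point_c, rh_min_pct, rh_max_pct)
    "A1": (15.0, 32.0, 17.0, 20.0, 80.0),
    "A2": (10.0, 35.0, 21.0, 20.0, 80.0),
    "A3": (5.0, 40.0, 24.0, 8.0, 85.0),
    "A4": (5.0, 45.0, 24.0, 8.0, 90.0),
}

RECOMMENDED_TEMP_C = (18.0, 27.0)  # recommended envelope cited above

def within_allowable(ashrae_class: str, temp_c: float,
                     dew_point_c: float, rh_pct: float) -> bool:
    """Return True if the reading is within the class's allowable envelope."""
    t_min, t_max, dp_max, rh_min, rh_max = ALLOWABLE[ashrae_class]
    return (t_min <= temp_c <= t_max
            and dew_point_c <= dp_max
            and rh_min <= rh_pct <= rh_max)

# Example: an A2 room at 24 C, 12 C dew point, 45% relative humidity.
print(within_allowable("A2", temp_c=24.0, dew_point_c=12.0, rh_pct=45.0))  # True
```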
Tips for Designing Rooms

As you can see, the ASHRAE guidelines are fairly broad, and just about any quality precision cooling system can achieve them in a data center. However, a good design should do more than consistently hit a temperature range. Planning the design carefully can help reduce energy usage and make it easier to work in the data center, and there are all sorts of factors to consider. Since most companies also want to save energy, it can be tempting to design a cooling system that operates toward the maximum allowable ASHRAE limits. However, higher operating temperatures can shorten equipment's life span and cause inefficiently operated technology to use more power. Carefully analyzing these costs can help companies find the right temperature range for their system.

Once you have set a target temperature, it's time to start looking at cooling products. CRAC and CRAH units are a reliable and effective option for data centers of all sizes. Another increasingly popular approach is a fluid cooler system that uses fluid to carry heat away from high-temperature equipment. Many companies in cooler climates are also switching to economizer cooling systems that pull in cold air from outdoors. Much of data center design focuses on arranging HVAC products in a way that provides extra efficiency. Setting up hot and cold aisles can be a simple and beneficial technique: server rows are placed back-to-back so the hot air that vents out the back flows in a single stream to the return. You may also want to consider a raised-floor configuration, where cold air enters through floor-level cooling units and, because heat rises, is drawn up through the room. By carefully designing airflow and product placement, you can meet ASHRAE standards while improving efficiency.

Data Aire

If you have any questions about following the ASHRAE data center cooling standards, turn to the experts. At Data Aire, the technicians are fully trained in the latest ASHRAE standards. They are happy to explain the standards in depth and help you meet them in your data room. Data Aire's precision cooling solutions provide both advanced environmental control and efficient energy usage.
