
Latest Data Aire Inc. news

Data Aire Shares Insight On Whether Data Center Cooling Units Talk About Hot Spots

In an ever-changing environment like the data center, it's most beneficial to have as many intelligent systems working together as possible. It's amazing to think how far technology has come, from the old supercomputers the size of four filing cabinets to present-day data centers pushing 1,000,000 sq. ft.

Managing a Data Center's Needs

Historically, managing a data center was fairly straightforward. With all this growth, one now finds oneself digging into the nuances of every little thing: data center cooling, power, and rack space, among hundreds of other minute aspects. This is far too much for a data center manager to track and comprehend alone, so implementing systems that can talk to each other has become a must.

When evaluating the cooling side of the infrastructure, there are real challenges that may make one want to consider hiring a team of engineers to monitor the space constantly. Most server room capacities vary constantly during the first year or two of build-out. This creates a moving target for CRAC/CRAH systems to hit within precise setpoints, and it can create a lot of concern among data center managers about hot spots, consistent temperatures, and CRAC usage. Just when one thinks the build-out is done, someone on the team decides to start changing hardware, and one is headed down the path of continuous server swap-outs and capacity changes. It can turn into a game of chasing one's own tail, but it doesn't have to.

Reducing Stress Levels

Data Aire has created the Zone Control controller to address the undue stress imposed on data center managers. Zone Control allows CRAC and CRAH units to communicate with each other and deduce the most efficient way possible to cool the room to precise setpoints. No longer will one need to continually adjust setpoints. And, as previously mentioned, it's incredibly beneficial to have as many intelligent systems working together as possible. Zone Control is the creation of open communication and dialogue between all units on the team.

CRAC & CRAH Units Should Work Together

Just like in sports, every player on a team must know their own role, and all players doing their part creates the most efficient team. The 2018 Winter Olympics brings to mind the similarities between a four-man bobsled team and how CRAC/CRAH units communicate through Zone Control. The bobsled team starts off very strong to get as much of a jump out of the box as possible. Then each member hops into the bobsled in a specific order. Once they reach maximum speed, all members are in the bobsled and most are on cruise control, while the leader of the team steers them to the finish line. That's Zone Control: the leader of the team.

Personal Precision Cooling Consultant

Getting back to data center cooling: when the units first start up, they ramp to full capacity to ensure enough cooling immediately. Then, as the controls receive temperature readings from the room, units begin dropping offline into standby mode, varying down to the capacity the room actually needs. The units talk to each other to sense where the hotter parts of the room are, ensuring the units closest to the load are running. Once they have gone through this process of checks and balances to prove out the right cooling capacities, they settle into cruise control as Zone Control continues to steer. A simplified sketch of this staging behavior follows.
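Zone Control's actual logic is proprietary, so the following is only a minimal sketch of the staging idea described above: run just enough units to cover the load, preferring those nearest the heat. The unit names, capacities, distances, and selection rule are all hypothetical.

```python
# Hypothetical sketch of coordinated CRAC/CRAH staging. The selection rule
# (closest units first, just enough capacity) is an illustrative assumption,
# not Data Aire's actual Zone Control algorithm.
from dataclasses import dataclass

@dataclass
class CracUnit:
    name: str
    capacity_kw: float   # rated cooling capacity
    distance_m: float    # distance from the current hot spot

def stage_units(units, room_load_kw):
    """Run just enough units to cover the load, preferring those
    closest to the hot spot; the rest drop into standby."""
    running, covered = [], 0.0
    for unit in sorted(units, key=lambda u: u.distance_m):
        if covered >= room_load_kw:
            break
        running.append(unit)
        covered += unit.capacity_kw
    standby = [u for u in units if u not in running]
    return running, standby

units = [
    CracUnit("CRAC-1", 30, distance_m=4.0),
    CracUnit("CRAC-2", 30, distance_m=12.0),
    CracUnit("CRAC-3", 30, distance_m=7.5),
]
running, standby = stage_units(units, room_load_kw=55)
print("running:", [u.name for u in running])   # CRAC-1, CRAC-3
print("standby:", [u.name for u in standby])   # CRAC-2
```

In this toy example the two units nearest the hot spot carry the 55 kW load while the farthest unit rests in standby, ready to rejoin if the load grows.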
Working this way creates the most efficient and most reliable cooling setup possible for each individual data center. Data center managers no longer need to hunt for hot spots or worry about varying loads in the room. Zone Control is an intelligent communication control that works with CRAC/CRAH data room systems to identify the needs of the space and relay that message to the team. Think of it as a personal precision cooling consultant that always has the right system setup based on real-time capacities.

Add V3 Technology to the Zone Control Team

One can go a step further in the quest for the most efficient environmental control system safeguarding the data center: pair Zone Control with gForce Ultra. The Ultra was designed with V3 Technology and is the only system on the market to include a technology trifecta of Danfoss variable-speed compressors accompanied by an electronic expansion valve (EEV) and variable-speed EC fans. gForce Ultra can vary down to the precise capacity the data room requires. Combine the Ultra with Zone Control and one has the smartest and most efficient CRAC system in the industry. Zone Control even has the logic to drop all the Ultra units in a room down to 40% capacity and run them as a team in cruise control, rather than running half the units at 80%, because of the efficiency written into the logic. The back-of-envelope comparison below shows why that trade is worth making.
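Why is every unit at 40% cheaper than half the units at 80%, when both deliver the same total airflow? The fan affinity laws (flow scales with speed, fan power with roughly the cube of speed) make slow, shared operation far cheaper. A rough sketch, assuming a hypothetical rated fan power and ignoring compressor effects:

```python
# Back-of-envelope comparison of two ways to meet the same total airflow,
# using the fan affinity laws (flow ~ speed, power ~ speed^3). Figures are
# illustrative; real CRAC power draw also includes compressor effects.
UNITS = 4
RATED_FAN_KW = 5.0  # hypothetical rated fan power per unit

def fan_power(speed_fraction: float) -> float:
    return RATED_FAN_KW * speed_fraction ** 3

all_at_40 = UNITS * fan_power(0.40)          # every unit at 40% speed
half_at_80 = (UNITS // 2) * fan_power(0.80)  # half the units at 80% speed

# Both strategies move the same total air: 4 * 0.4 == 2 * 0.8 unit-flows.
print(f"all units at 40%:  {all_at_40:.2f} kW")   # 1.28 kW
print(f"half units at 80%: {half_at_80:.2f} kW")  # 5.12 kW
```

Both strategies move the same 1.6 unit-flows of air, but under the cube law the shared 40% strategy uses roughly a quarter of the fan power.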

Data Aire Shares How To Boost Data Center Cooling ROI With Precise Load Matching

One thing is certain: optimal data center design is a complex puzzle to solve. With all the options available, no one environmental control system can fit every situation. One must consider all the solutions and technology available to best manage assets and adapt to the evolving data center. There is a precision cooling system for whatever scenario best fits one's current strategy or future goals. The only question that remains: has one considered each of the options with the design engineer and the environmental control manufacturer? The two need to be in sync to help maximize the return on investment. In most instances, if one wants an environmental control system that scales with one's needs, provides the lowest energy costs, and provides the most reliable airflow throughout the data center, a variable-speed system is the best solution. Nevertheless, one may be curious about what other options may suit the current application.

Precise Modulated Cooling

Companies need to decide on their strategy and design for it. When one knows there are swings in the load – seasonal, day to day, or even from one corner of the data center or electrical room to the other – one should consider variable-speed technology. A system with variable-speed technology and accurate control design modulates to precisely match the current cooling load. This precision gives variable speed the highest efficiency at part load, which equates to a greater return on investment. In other words, when the data center is not running at maximum cooling load, a variable-speed system will use less energy and save money.

Think of the cooling output of the environmental control system as the accelerator of a car: one can press the pedal to almost infinite positions to exactly match the speed one wants to travel, wasting no energy overshooting the desired speed. A well-designed control system also ensures a smooth response to a change in load. Further efficiency is gained by accelerating at a rate that is efficient for the system.

Advanced Staged Cooling

For those looking for a portion of the benefits of a variable-speed system at a reduced first cost, a multi-stage cooling system can be a good compromise. A multi-stage system will manage some applications well and can reduce overcooling of the space as built today. If one needs greater turndown than a fixed-speed system offers, this is a good choice. If one finds this to be the right-now solution, one is in good hands: the system is more advanced than a fixed-speed unit, developed with a level of design optimization to transition through its small steps. Unlike digital scroll, this accurate solution, with two-stage compressors, has high part-load efficiency.

Example

Think about the car accelerator example again: with a multi-stage system there are several positions to move the accelerator to. With two-stage compressors the positions are precise and repeatable, meaning one can smartly change positions to prevent overshoot, and one is more likely to have a position that matches the desired speed. Although the return on investment is better with a multi-stage system than with a fixed-speed system, the benefits are less than with a variable-speed system. The sketch below contrasts how the two approaches track the same part loads.
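As a concrete illustration of the turndown difference, here is a minimal sketch comparing a continuously modulating system with a two-stage system meeting the same part loads. The rated capacity, minimum turndown, and stage sizes are illustrative assumptions, not figures for any particular product.

```python
# Sketch: how a variable-speed system and a two-stage system each respond
# to the same part load. All capacities and limits are made-up examples.
RATED_KW = 100.0

def variable_speed_output(load_kw: float, min_turndown: float = 0.25) -> float:
    """Modulates continuously to match the load, down to a minimum turndown."""
    return min(RATED_KW, max(load_kw, RATED_KW * min_turndown))

def two_stage_output(load_kw: float) -> float:
    """Steps to the smallest stage (50% or 100%) that covers the load."""
    if load_kw <= 0:
        return 0.0
    return RATED_KW * 0.5 if load_kw <= RATED_KW * 0.5 else RATED_KW

for load in (30.0, 55.0, 90.0):
    vs, ts = variable_speed_output(load), two_stage_output(load)
    print(f"load {load:>5} kW -> variable {vs:>5} kW (overshoot {vs - load:>4}),"
          f" two-stage {ts:>5} kW (overshoot {ts - load:>4})")
```

The two-stage system must overshoot whenever the load falls between stages; the modulating system overshoots only below its minimum turndown.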
Fixed-Speed Systems

Some consider the entry point for precision cooling based on their current budget constraints. So, if one is on a tight budget and needs a lower first cost, then a fixed-speed, single-stage precision cooling system may get the job done. However, this can be short-sighted, as energy consumption and costs are higher whenever the data center operates at less than the maximum designed cooling load. In practice, this happens quite frequently, given what the mechanical engineer has been asked to design versus the actual heat load of the space. Applying a fixed-speed system to the car accelerator example, one can see how having only 100% throttle or 0% throttle would prevent one from getting close to a precise speed. This is clearly not as efficient as the other examples, unless one wants to travel at the car's maximum speed all the time. A toy simulation at the end of this article illustrates the resulting temperature sawtooth.

Ramping Up the Data Center

The needs and goals of a data center can change over time. While the initial objective may only require getting the space in running order, customers may reassess based on changing scenarios. If the data center needs to scale, one may be challenged without having planned ahead with the design engineer for phased build-outs, or for varying IT load considerations that are seasonal, or that shift from day to day or even hour to hour. Likewise, one may need to consider the difference between design and actual usage – whether it be too little or too much. Perhaps the IT team says they need two megawatts, or that they are going to run 16 kW per rack. The cooling system as designed may underserve those needs, or it may be overkill for the current state of usage. In addition, pushing a system to do more than it is engineered for can accelerate the aging of the infrastructure. Again, depending on the application, goals, and business strategy, one of these three systems is right. The best course of action is to evaluate where one is today and then future-proof the data center with technology that can grow.
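To make the 0%/100% limitation concrete, here is the toy simulation referenced above: a fixed-speed system cycling against a steady 40 kW load, versus a variable-speed system that simply matches it. All constants (thermal mass, deadband, step size) are made-up illustrative values.

```python
# Toy simulation of the accelerator analogy: a fixed-speed system can only
# switch between 0% and 100%, so room temperature sawtooths around the
# setpoint; a variable-speed system matches the load and holds steady.
SETPOINT, DEADBAND = 22.0, 0.5   # degrees C
LOAD_KW, RATED_KW = 40.0, 100.0
MASS_KJ_PER_C = 2000.0           # hypothetical room thermal mass
DT_S = 10.0                      # simulation step, seconds

temp, cooling_on, swing = SETPOINT, False, []
for _ in range(1000):
    if temp > SETPOINT + DEADBAND:       # too warm: full cooling
        cooling_on = True
    elif temp < SETPOINT - DEADBAND:     # too cool: cooling off
        cooling_on = False
    cooling_kw = RATED_KW if cooling_on else 0.0
    temp += (LOAD_KW - cooling_kw) * DT_S / MASS_KJ_PER_C
    swing.append(temp)

print(f"fixed-speed swing: {min(swing):.2f} to {max(swing):.2f} C")
print("variable-speed: holds ~22.00 C by matching the 40 kW load")
```

The fixed-speed unit also runs at full power whenever it is on, so it pays full-load energy rates to deliver what is, on average, a part load.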

Data Aire Cooling For High Density Data Centers

Few data centers live in a world of 'high' density, a number that is a moving target, but many are moving to high[er]-density environments. Owners of higher-density data centers often aren't aware of how many variables factor into cooling their equipment. The result is that they spend too much on shotgun solutions that waste capacity, when they would be better served by taking a rifle-shot approach. This means understanding the heat dispersion characteristics of each piece of equipment and optimizing floor plans and the placement of cooling solutions for maximum efficiency.

So, how does one invest in today and plan for tomorrow? By engaging early in the data center design process with a cooling provider that has a broad line of cooling solutions, owners can maximize server space, minimize low-pressure areas, reduce costs, save on floor space, and boost overall efficiency. And by choosing a provider that can scale with their data center, they can ensure that their needs will be met long into the future.

Density is Growing

Data centers are growing increasingly dense, creating unprecedented cooling challenges, and that trend will undoubtedly continue. The Uptime Institute's 2020 Data Center Survey found that the average server density per rack has more than tripled over the last nine years, from 2.4 kW to 8.4 kW. While still within the safe zone of most conventional cooling equipment, the trend is clearly toward equipment running hotter, accelerated by the growing use of GPUs and multi-core processors. Some higher-density racks now draw as much as 16 kW, and the highest-performance computing typically demands up to 40-50 kW per rack.

Dedicated Cooling Strategies

For the sake of discussion, let's focus on the data centers that are, or may soon be, in the 8.4-16 kW range. This higher density demands a specialized cooling strategy, yet many data center operators waste money by provisioning equipment to cool the entire room rather than the equipment inside it. In fact, "Overprovisioning of power/cooling is probably more common issue than under provisioning due to rising rack densities," the Uptime survey asserted. No two data centers are alike and there is no one-size-fits-all cooling solution. Thermal controls should be customized to the server configuration and installed in concert with the rest of the facility, or at least six months before the go-live date. Equipment in the higher-density range of 8-16 kW can present unique challenges to precision cooling configurations. The performance of the servers themselves can vary from rack to rack, within a rack, and even with the time of day or year, causing hot spots to emerge.

CRAC Equipment

Higher-density equipment creates variable hot and cool spots that need to be managed differently. A rack outfitted with multiple graphics processing units for machine learning tasks generates considerably more heat than one that processes database transactions. Excessive cabling can restrict the flow of exhaust air. Unsealed floor openings can cause leakages that prevent conditioned air from reaching the top of the rack. Unused vertical space can cause hot exhaust to feed back into the equipment's intake ducts, causing heat to build up and threatening equipment integrity. To get a feel for the airflow these densities imply, consider the rough sizing sketch below.
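For a rough sense of scale, the sketch below estimates the airflow each of the rack densities cited above requires, using the common sensible-heat rule of thumb q(BTU/hr) ≈ 1.08 × CFM × ΔT(°F). The 20 °F supply-to-return delta-T is an illustrative assumption.

```python
# Rough airflow sizing for the rack densities mentioned above, using the
# common sensible-heat rule of thumb: q(BTU/hr) = 1.08 * CFM * deltaT(degF).
BTU_PER_WATT = 3.412

def required_cfm(rack_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow needed to remove a rack's heat at a given air delta-T."""
    btu_hr = rack_kw * 1000 * BTU_PER_WATT
    return btu_hr / (1.08 * delta_t_f)

for kw in (2.4, 8.4, 16.0, 50.0):
    print(f"{kw:>5.1f} kW rack -> ~{required_cfm(kw):,.0f} CFM")
```

At roughly 160 CFM per kW, a 16 kW rack needs on the order of 2,500 CFM of conditioned air, which is why cabling restrictions, unsealed floor openings, and recirculation matter so much at these densities.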
For all these reasons, higher-density equipment is not well served by a standard computer room air conditioning (CRAC) unit. Variable-speed direct expansion CRAC equipment, like gForce Ultra, scales up and down gracefully to meet demand. This not only saves money but minimizes the power surges that can cause downtime.

Continuous monitoring should be put in place, using sensors to detect heat buildup in one spot that may threaten nearby equipment. Alarms should be set to flag critical events without triggering unnecessary firefighting, and cooling should be integrated into the building-wide environmental monitoring systems. A minimal sketch of such alarm logic appears at the end of this article.

There's a better approach to specifying data center equipment: build cooling plans into the design earlier. Alternating "hot" and "cold" aisles can be created, with vented floor tiles in the cold aisles and servers arranged to exhaust all hot air into an unvented hot aisle. The choice of front discharge, up flow, and down flow ventilation can prevent heat from being inadvertently circulated back into the rack. Power distribution also needs to be planned carefully, and backup power provisioned, to avoid loss of cooling.

Thinking through cooling needs early in the design stage of a higher-density data center avoids costly and disruptive retrofits down the road. The trajectory of power density is clear, so cooling design should consider not only today's needs but those five and ten years from now. Modular, variable-capacity systems can scale and grow as needed. The earlier data center owners involve their cooling providers in their design decisions, the more they'll save from engineered-to-order solutions and the less risk they'll have of unpleasant surprises down the road.
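As referenced above, here is a minimal sketch of alarm logic that flags sustained heat buildup without firing on momentary blips. The threshold, reading count, and sensor values are illustrative assumptions.

```python
# Sketch: raise a hot-spot alert only when a sensor stays above threshold
# for several consecutive readings, avoiding nuisance trips. All values
# are hypothetical examples.
from collections import deque

class HotSpotAlarm:
    def __init__(self, threshold_c: float = 32.0, readings_required: int = 3):
        self.threshold = threshold_c
        self.required = readings_required
        self.recent = deque(maxlen=readings_required)

    def update(self, temp_c: float) -> bool:
        """Record a reading; return True when the alarm should fire."""
        self.recent.append(temp_c)
        return (len(self.recent) == self.required
                and all(t > self.threshold for t in self.recent))

alarm = HotSpotAlarm()
for reading in (30.1, 33.0, 31.2, 32.5, 33.4, 34.1):
    if alarm.update(reading):
        print(f"ALERT: sustained hot spot, last reading {reading} C")
# Only the final reading trips the alarm: 32.5, 33.4, and 34.1 all exceed 32.0.
```

Requiring consecutive out-of-range readings is one simple way to flag critical events without the unnecessary firefighting mentioned above; real building management systems typically add escalation levels and time-based persistence as well.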
