Infrastructure | Power, Cooling and Racks for HPC Data Centers

Harnessing the computing power of a High Performance Computing system requires more than the compute nodes themselves; it also requires the infrastructure to house the equipment, including power, cooling and rack space. You can now pack multiple nodes, each with multiple high-core-count CPUs and several accelerators such as GPUs and Xeon Phis, into one or two rack units. While this saves rack space, more power and cooling, delivered more efficiently, are essential to the proper operation of your equipment.

As you improve your data center's performance and reliability through convergence, our rack, power and cooling solutions give you a tightly integrated foundation, helping ensure that your converged infrastructure can handle new and data-intensive workloads in any environment. Our rack, power and cooling solutions are designed and tested for reliability and compatibility with Aspen Systems servers and data center management solutions.



Racks

For racks, Aspen Systems primarily uses racks from APC. The NetShelter line (shown below) is among the most widely used racks in the HPC space, and we use specially designed pallets to ship the racks fully racked and cabled. Should our customers prefer other racks, our Engineers have experience with other brands as well, including Raritan, Tripp Lite, ServerRack and others. NetShelter enclosures are available in 24U, 42U, 45U, and 48U heights; 24 in. and 30 in. widths; and 32.5 in., 42 in., and 47.5 in. depths.


When configuring your solution, Aspen Systems Engineers will draw a full rack diagram showing the location of each server, switch, and other piece of equipment, and will make sure the proper cable lengths are supplied. The end result is a properly built-up rack with neat cabling and with power and cooling requirements met. Read more about APC NetShelter Rack Enclosures

NetShelter SX Enclosure Features

More standard features for faster installation

Cable Access Roof

  • Eight cable entry slots
  • Toolless mounting of overhead cable trough system
  • Snap-in mounting to allow easy roof removal and installation with cables in place

Integrated Baying Brackets

  • Preinstalled on frame, front and back
  • Spacing options at 24 in.
  • Bays with other Schneider Electric power and cooling products

Leveling Feet & Castors

  • Easily adjustable leveling from top-down
  • Castors standard on all enclosures


Zero-U Accessory Channel

  • Toolless mounting of rack PDUs
  • Toolless mounting of vertical cable organizers

Vertical Mounting Rails

  • Simple screw and cam engagement
  • Captive screws — no loose hardware
  • Easy visual alignment for quick adjustment
  • Adjust in 0.25 in. increments through enclosure
  • Hole cutouts for 0U installation (AR8469) of Data Distribution Cable (DDC) accessories

Half-Height Side Panels

  • Easy and safe handling
  • Quick release latch
  • Lockable — same key as doors
  • Enclosure width remains the same with or without sides attached

The NetShelter SX is a multi-functional rack enclosure influenced by customer feedback from around the world. These enclosures are designed to meet current IT market trends and applications ranging from high density computing and networking to broadcast and audio-video. With a strong focus on cooling, power distribution, cable management and environmental monitoring, the NetShelter SX rack enclosure provides a reliable rack-mounting environment for mission-critical equipment.



Cooling

Proper cooling is essential to operating your equipment. While some servers can run hotter than others, research shows that heat can shorten the life of computing equipment. At Aspen Systems, we include the cooling requirements of our solutions in our proposals, so our customers can be sure they have enough cooling in their data centers before the equipment is purchased.

There are multiple ways to cool your data center. One of the more traditional is the Computer Room Air Conditioning (CRAC) or Computer Room Air Handler (CRAH) unit: a large air conditioning unit that cools the entire data center, usually with front-to-back airflow through the racks. While this method is commonly used, it is not always the most efficient, as the hot aisles are not always well contained. You will also most likely need a raised floor for best efficiency.

One way APC contains the hot aisle is with a hot aisle containment system combined with in-row coolers. These in-row coolers sit between the racks, take hot air from the hot aisle (contained by a roof and doors), cool it, and blow it into the cold aisle. A raised floor is not necessary for this model.

InRow Infrastructure Cooling for Data Centers
Example of InRow Cooling with Enclosed Hot Aisle
InRow RA, 300mm, Pumped Refrigerant, 100-120V, 50/60Hz
CRAC Example Where Cold Air Enters from a Raised Floor

In today’s data centers, traditional cooling approaches involve complex air distribution systems that tend to be unpredictable. With InRow cooling, placing the unit in the row of racks moves the source of cooling closer to the heat load, minimizing air mixing and providing a predictable cooling architecture.

Data centers around the globe are being mandated to simultaneously increase energy efficiency, consolidate operations and reduce costs. Data centers account for approximately two percent of global power consumption each year.

Proper cooling in the data center is becoming more problematic as rack power density continues to increase. Server consolidation, virtualization, and HPC all contribute to rack power density in an attempt to improve energy and floor-space efficiencies. However, as power density per rack increases, so does the heat output. Therefore, the traditional raised-floor cooling approach (designed for roughly 3kW per rack) is no longer suitable or efficient for racks with power densities approaching 30kW. Furthermore, improper cooling of your facility and equipment can result in unpredictable software/data anomalies and increased hardware failures.
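To put these densities in perspective, here is a minimal sketch using the standard sensible-heat rule of thumb for air at sea level (BTU/hr ≈ 1.08 × CFM × ΔT°F). The 20°F aisle-to-aisle temperature rise is an assumed, illustrative figure:

```python
# Rough airflow estimate for removing rack heat with air cooling.
# Rule of thumb for sensible heat in air at sea level:
#   BTU/hr = 1.08 x CFM x delta_T(F), and 1 W = 3.412 BTU/hr.

def required_cfm(rack_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (CFM) needed to carry away rack_watts with a
    delta_t_f rise between cold-aisle intake and hot-aisle exhaust."""
    btu_per_hr = rack_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

for kw in (3, 10, 30):
    print(f"{kw:>2} kW rack -> ~{required_cfm(kw * 1000):,.0f} CFM")

# 3 kW rack  -> ~474 CFM   (manageable through a raised floor)
# 30 kW rack -> ~4,739 CFM (far beyond what perforated tiles deliver)
```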

As critical as data center cooling is, designing and optimizing a cooling system is a complicated undertaking and a much-researched topic. Even so, adhering to a few simple principles can help ensure that proper cooling is achieved. Simple improvements to your cooling system can then be realized without oversizing it, and thereby without driving up your capital and operational costs with unnecessary expenditures.

High Efficiency Liquid Cooling for CPUs & GPUs

Asetek has leveraged its expertise as the world leading provider of efficient liquid cooling systems to create solutions for data centers that address these mandates by providing energy savings, cost savings, density increases, and noise reduction. Because liquid is 4,000 times better at storing and transferring heat than air, Asetek’s solutions provide immediate and measurable benefits to large and small data centers alike.
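The "4,000 times" figure refers to volumetric heat capacity. A quick back-of-the-envelope check with textbook values shows the claim is of the right order of magnitude:

```python
# Back-of-the-envelope check of the volumetric heat capacity claim.
# Textbook values at roughly room temperature:
water_cp = 4186.0   # J/(kg*K), specific heat of water
water_rho = 1000.0  # kg/m^3
air_cp = 1005.0     # J/(kg*K), specific heat of air
air_rho = 1.20      # kg/m^3 at ~20 C, sea level

water_vol_cap = water_cp * water_rho  # J/(m^3*K), ~4.19e6
air_vol_cap = air_cp * air_rho        # J/(m^3*K), ~1.21e3

print(f"water/air ratio: ~{water_vol_cap / air_vol_cap:,.0f}x")
# -> ~3,471x, the same order as the ~4,000x marketing figure
```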

RackCDU D2C (Direct-to-Chip) is a “free cooling” solution that captures between 60% and 80% of server heat, reducing data center cooling cost by over 50% and allowing 2.5x-5x increases in data center server density. D2C removes heat from CPUs, GPUs and memory modules within servers using water as hot as 40°C (104°F), eliminating the need for chilling to cool these components.
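As a minimal sketch of what that capture range means in practice, the following computes the heat left for air cooling; the 30kW rack size is an assumption for illustration:

```python
# Split of rack heat between the liquid loop and the room air,
# using the 60-80% capture range quoted for RackCDU D2C.

def residual_air_load(rack_kw: float, capture_fraction: float) -> float:
    """Heat (kW) still rejected to room air after direct-to-chip capture."""
    return rack_kw * (1.0 - capture_fraction)

rack_kw = 30.0  # hypothetical high-density rack
for frac in (0.60, 0.80):
    print(f"{frac:.0%} capture: {residual_air_load(rack_kw, frac):.1f} kW "
          f"left for CRAC/CRAH units")
# 60% capture: 12.0 kW left for CRAC/CRAH units
# 80% capture:  6.0 kW left for CRAC/CRAH units
```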

Chilling is the largest portion of data center cooling OpEx and CapEx. With RackCDU D2C, less air needs to be cooled and moved by Computer Room Air Handler (CRAH) or Computer Room Air Conditioning (CRAC) units. Further, liquid-cooled servers need less airflow, making the servers themselves more energy efficient.

SYS-6018R-WTR with Direct to Chip Cooling

Motivair ChilledDoor

A Whole New Cooling Experience

Motivair has a long history of cooling critical process facilities and production equipment. From its earliest days of industrial compressed air treatment to today’s cutting edge chiller and cooling system design and implementation, the Motivair brand is synonymous with process cooling.

The ChilledDoor Rack Cooling System isn’t just a way to remove 100% of the heat from your server rack; it’s a way to change the dynamic of how your data center is cooled. Using advanced “active” rear door heat exchanger technology, your cooling system becomes a dynamic entity, reacting minute by minute to changes in compute loads of up to 75kW. Whether you’re cooling advanced High Performance Computers, high-end storage, or simple switchgear, the ChilledDoor works to keep your computing environment “heat neutral”.

The ChilledDoor’s smooth metal surface connects seamlessly to your server rack. It moves cooling directly to the rack level where infrastructure meets hardware in perfect unison creating next generation cooling for mission critical IT.

The ChilledDoor can completely transform the way your data center is cooled. Its unique ability to utilize warmer water sources, such as cooling towers, river water and high-set-point free cooling chillers, creates efficiencies that extend well beyond the white space. Imagine improving your data center efficiency by up to 70%. Tomorrow’s PUE targets are available today.

A Fully Integrated Close-Coupled Cooling Solution

OptiCool Technologies is a leading provider of innovative data center cooling solutions. OptiCool specializes in refrigerant-based, close-coupled cooling solutions designed to support a wide variety of data center applications, from low density to high density.

The Cool Door System (CDS) is mounted at the rear of the IT equipment (typically server) cabinet and removes the heat from the exhaust air. The CDS is modular and can include up to 3 AHX cooling units per door. The CDS opens like a standard rear enclosure door, allowing full access to the back of the rack and easy access to the IT equipment. The CDS is made of lightweight materials and is readily available in standard cabinet sizes. In addition, many cabinet manufacturers offer “OptiCool Ready” products, where the CDS attaches directly to the cabinet using existing mounting points. For retrofits, door transition kits (DTK) provide seamless integration with all major manufacturers’ cabinets.

OptiCool Technologies’ award-winning OptiCool data center solution is the latest in technology innovations that maximize efficiencies in cooling, power utilization and space within the data center. The solution adapts to any rack or cabinet and is flexible and scalable enough to meet the needs of the data center today and in the future.

OptiCool Cool Door System

CoolIT Rack DCLC CHx40

Effective, Reliable & Easy-to-Integrate Rack DCLC Solutions

CoolIT Systems is the world leader in energy efficient liquid cooling solutions for the HPC, Cloud and Enterprise markets. As an experienced innovator with 52 patents and more than 2 million liquid cooling units deployed in desktop computers, servers and data centers around the world, CoolIT Systems Direct Contact Liquid Cooling (DCLC) technology is the top choice for OEMs and system integrators.

Rack DCLC modules are designed for a flexible fit to benefit various compute environments. While Server Modules and Manifold Modules are installed with each system and are local to the rack, the appropriate heat rejection method may vary. Rack DCLC offers a variety of heat exchanging modules depending on load requirements and availability of facility water.

Energy and Space Efficient Data Center Cooling Solutions

Coolcentric delivers the world’s most energy- and space-efficient cooling solutions and data center cooling equipment for reducing data center costs. Coolcentric’s patented rack-level cooling products allow customers to optimize data centers for maximum performance and return on investment.

The Coolcentric family of heat exchangers comprises passive, liquid-cooled heat exchangers: the Rear Door Heat Exchangers (standard RDHx and low-density RDHx-LD), which replace the standard rear doors on IT rack enclosures, and the Sidecar, an in-row heat exchanger. Close-coupled to the IT enclosure, the heat exchangers bring cooling as close to the heat source as possible, providing the ultimate containment solution. Taking up a minimum of floor space, the Coolcentric heat exchangers are flexible, efficient and space-saving turnkey cooling solutions.

The Coolcentric RDHx-LD is a new member of the Coolcentric family of rear door heat exchangers designed to extend available energy and cooling efficiencies, and resulting cost savings, to installations with low to medium rack densities of 5-12 kW. The RDHx-LD can be intermixed with other RDHx family products in a data center installation to accommodate low, medium and high density requirements.

Coolcentric Rear Door Heat Exchanger - Low Density (RDHx-LD)

Remember that some of the solutions described are less costly than others. Among the most cost-effective steps are ensuring that your current cooling system is fully optimized and serviced, properly sealing the floor, and adopting row-based/hot-aisle arrangements. Whether you require modular systems for close-to-the-source hot-zone targeting, or a fully contained thermally-neutral system, Aspen Systems can help you decide on the appropriate cooling architecture for your HPC system.

Improve Your Data Center’s Power Efficiency

Improving a data center’s power efficiency will greatly reduce the necessary cost to cool the facility. Utilizing higher voltages and the latest UPS/PDUs will improve overall energy efficiencies and thereby decrease wasted heat output. Remember that 1 Watt of power consumed requires 1 Watt of cooling. So, understanding your heat/wattage output will also help you to configure the optimal and most efficient cooling solution. See the section on Power (below) for a more detailed explanation about the benefits of using higher voltages and latest generation power components.
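The watt-for-watt rule makes cooling overhead easy to estimate with the standard PUE (Power Usage Effectiveness) metric, defined as total facility power divided by IT power. The sketch below assumes a hypothetical 200kW IT load for illustration:

```python
# Watt-for-watt rule: every watt of IT load must be removed as heat.
# PUE (Power Usage Effectiveness) = total facility power / IT power.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by an IT load and a PUE figure."""
    return it_load_kw * pue

it_kw = 200.0  # hypothetical cluster IT load
for pue in (2.0, 1.5, 1.1):
    total = facility_power_kw(it_kw, pue)
    print(f"PUE {pue}: {total:.0f} kW total, "
          f"{total - it_kw:.0f} kW of cooling/distribution overhead")
# PUE 2.0: 400 kW total, 200 kW of cooling/distribution overhead
# PUE 1.5: 300 kW total, 100 kW of cooling/distribution overhead
# PUE 1.1: 220 kW total,  20 kW of cooling/distribution overhead
```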



Power

With more compute power being packed into each system, power requirements are growing. Gone are the days when a few single-phase 120V PDUs would do the trick in every rack; we now use 208V three-phase PDUs in most racks. At Aspen Systems, we can determine how much power your solution needs and include it in our proposals. Alternatively, given the available power (and cooling) per rack, our Engineers can customize a rack and power layout for you.
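As a rough illustration of why three-phase circuits matter, the usable power of a PDU circuit can be estimated as volts × amps for single phase, or √3 × line-to-line volts × amps for three phase, derated to 80% of the breaker rating per common NEC practice for continuous loads:

```python
# Usable capacity of a PDU circuit. NEC practice derates continuous
# loads to 80% of the breaker rating.
import math

def single_phase_kw(volts: float, amps: float, derate: float = 0.8) -> float:
    return volts * amps * derate / 1000.0

def three_phase_kw(volts_ll: float, amps: float, derate: float = 0.8) -> float:
    """volts_ll is the line-to-line voltage, e.g. 208 V."""
    return math.sqrt(3) * volts_ll * amps * derate / 1000.0

print(f"120 V / 20 A single-phase: {single_phase_kw(120, 20):.1f} kW")
print(f"208 V / 30 A three-phase:  {three_phase_kw(208, 30):.1f} kW")
# 120 V / 20 A single-phase: 1.9 kW
# 208 V / 30 A three-phase:  8.6 kW  -- why dense racks need three-phase
```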

Power Distribution Units (PDUs)

Various APC Metered Vertical Server Rack PDUs

Modern clusters can require significant power. Each rack in your Aspen Systems cluster is normally equipped with rack-mounted PDUs which provide power to one or more nodes, and each unit plays an important part in distributing the phases of power to your servers. Normally, one or more PDUs are installed in the rear of the rack behind the node mounting infrastructure, so they do not impact the rack space available for mounting your other hardware. Aspen Systems can provide your cluster with switched PDUs, which cluster administrators can use to remotely power off any system in the cluster. Metered PDUs are available as well, which administrators can poll for circuit status and load (various APC metered vertical server rack PDUs pictured above).
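As an illustration of how remote outlet switching is often scripted, the sketch below drives an APC switched PDU over SNMP using net-snmp's snmpset. The OID shown is the sPDUOutletCtl object from APC's PowerNet MIB, but OIDs, outlet numbering, and community strings vary by model and firmware, and the hostname is hypothetical, so treat this as an assumption to verify against your PDU's documentation:

```python
# Hypothetical sketch: switching an outlet on an APC switched PDU over
# SNMP, by shelling out to net-snmp's snmpset. Verify the OID, community
# string, and outlet numbering for your specific PDU model and firmware.
import subprocess

SPDU_OUTLET_CTL = ".1.3.6.1.4.1.318.1.1.4.4.2.1.3"  # PowerNet MIB sPDUOutletCtl
ACTIONS = {"on": "1", "off": "2", "reboot": "3"}     # PowerNet MIB values

def set_outlet(pdu_host: str, outlet: int, action: str,
               community: str = "private") -> None:
    """Issue an SNMP SET to switch a single PDU outlet."""
    oid = f"{SPDU_OUTLET_CTL}.{outlet}"
    subprocess.run(
        ["snmpset", "-v1", "-c", community, pdu_host,
         oid, "i", ACTIONS[action]],
        check=True,
    )

# Example: reboot the node plugged into outlet 7 (hostname is hypothetical).
# set_outlet("pdu-rack1.example.com", 7, "reboot")
```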

Growing demand for computing power and constraints on physical space have led to ever more densely packed rack enclosures. And as the number of rack-mounted servers, blade servers, network switches and routers has increased, so has the need for power in the rack.

These PDUs are usually connected directly to outlets on the wall, under your raised floor, or in your overhead rack infrastructure. They can also be connected to UPS units which are located in your Aspen Systems rack(s) or elsewhere.

Plug Types

One of the most common customer errors is to specify incorrect plug types for their cluster installation. National Electrical Manufacturers Association (NEMA) plugs and receptacles are commonly used in North America, and use designators such as “NEMA L6-30R” to identify receptacle and plug types. The “R” stands for receptacle, which is the receptacle you provide at your facility to plug your cluster into, while “P” stands for plug.
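For reference, a small illustrative lookup of common NEMA receptacle types might look like the following; the ratings shown are nominal values, and you should confirm them with your electrical personnel:

```python
# A few common NEMA receptacle types seen in cluster installations.
# Illustrative subset only; confirm ratings with your electrician.
NEMA_RECEPTACLES = {
    "5-15R":   ("120 V",          "15 A", "straight blade, standard wall outlet"),
    "5-20R":   ("120 V",          "20 A", "straight blade"),
    "L5-30R":  ("120 V",          "30 A", "twist-lock, single-phase"),
    "L6-30R":  ("208/240 V",      "30 A", "twist-lock, single-phase"),
    "L15-30R": ("3-ph 250 V",     "30 A", "twist-lock, three-phase delta"),
    "L21-30R": ("3-ph 120/208 V", "30 A", "twist-lock, three-phase wye"),
}

def describe(code: str) -> str:
    volts, amps, note = NEMA_RECEPTACLES[code]
    return f"NEMA {code}: {volts}, {amps} ({note})"

print(describe("L6-30R"))
# NEMA L6-30R: 208/240 V, 30 A (twist-lock, single-phase)
```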

Various HPC Plug Types

You may need to consult the electrical personnel at your facility to determine exactly what receptacles you have or can support. Speak with your Aspen Systems sales engineer if you are unsure about your options or what type of receptacles your facility currently has.

UPS Systems

APC Smart-UPS SRT 6000VA RM 230V

You may have a facility that already has a UPS (Uninterruptible Power Supply) or even a generator. A UPS unit is necessary to ensure that no power interruption occurs, even if you have a generator, since a generator takes time to start after an outage begins. A UPS alone will only keep equipment powered for a limited time, usually less than 30 minutes, but used in conjunction with a facility generator it ensures that your infrastructure continues to run even through extended power outages.

It is always a good idea to protect your critical systems, normally the master, any fail-over masters, administrative, and storage nodes, with UPS systems if your facility does not have them. Operating these nodes on UPS ensures that a sudden blackout (power is lost completely), brownout (low voltage levels), or dropout (momentary total loss of power) does not cause these nodes to crash and risk file system or hardware damage. Brownout and dropout situations are transparent to a node protected by UPS, and if a blackout lasts long enough to drain the UPS battery, monitoring software can effect an orderly shutdown of the node(s) to minimize possible file system damage and facilitate a clean reboot later.

Additional UPS runtime can be configured for your cluster by adding battery packs to your UPS system(s). Speak to your Aspen Systems sales engineer about your specific UPS needs.
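As a first-order way to see how battery packs extend runtime, the sketch below divides usable battery energy by load. The load, per-pack watt-hours, and inverter efficiency are assumed, illustrative figures; real runtime curves are nonlinear (battery capacity drops at higher discharge rates), so always consult the vendor's runtime charts:

```python
# First-order UPS runtime estimate. Treat the result as an optimistic
# upper bound; real runtime curves are nonlinear.

def runtime_minutes(load_watts: float,
                    battery_wh: float,
                    n_packs: int = 1,
                    inverter_eff: float = 0.92) -> float:
    """Minutes of runtime from usable battery energy divided by load."""
    usable_wh = battery_wh * n_packs * inverter_eff
    return usable_wh / load_watts * 60.0

load = 3000.0     # W drawn by protected head/storage nodes (assumed)
pack_wh = 1800.0  # Wh per battery pack (hypothetical figure)
for packs in (1, 2, 3):
    print(f"{packs} pack(s): ~{runtime_minutes(load, pack_wh, packs):.0f} min")
# 1 pack: ~33 min, 2 packs: ~66 min, 3 packs: ~99 min
```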

Eaton 5PX UPS

Reliable. Versatile. Powerful. Efficient.

Eaton provides energy-efficient solutions that help customers manage electrical power more efficiently, safely and sustainably. Combining extended runtime capabilities with exceptional efficiency, the Eaton 5PX UPS is a powerful enterprise-class backup solution. An ENERGY STAR qualified UPS, the 5PX’s managed outlet segments allow you to monitor energy consumption down to the outlet level on its intuitive LCD screen, while convenient virtualization-ready bundles and the Intelligent Power Software Suite enable seamless management in virtualized environments. Read more about the Eaton 5PX UPS

Eaton 9PX UPS

The Eaton 9PX UPS integrates seamlessly into just about any IT environment. Delivering premium backup power and scalable battery runtimes for servers, voice/data networks and storage systems, it supports your overall goal of business continuity and is the ideal solution for both rack and stand-alone installations. Read more about the Eaton 9PX UPS