IBM returned to indirect (cold plate) water cooling in 2008 with the introduction of the Power 575 Supercomputing Node [1]. The node, packaged in a super-dense 2U (88.9 mm) form factor, contains 16 dual-core processor modules, which are cooled by an assembly of 16 cold plates. The assembly consists of the cold plates (one per processor module); copper tubing that connects four groups of four cold plates in series; copper tubing that connects each group of four cold plates, or quadrant, to a common set of supply and return headers; and two flexible EPDM hoses that connect the headers to system-level manifolds in the rack housing the nodes (a rack can contain up to 14 nodes). Non-spill poppeted quick connects join the cold plate assembly to the system-level manifolds. In addition to a detailed description of the cold plate assembly, an overview is given of the analysis and design that went into its development. Conjugate computational fluid dynamics (CFD) modeling was performed on the cold plate and processor module combination, and CFD modeling of the headers verified proper flow balancing. Finally, mechanical finite element analyses were performed to determine the cold plate tube routing needed to minimize the reaction forces the tubes place on the cold plates under land grid array loading of the module to the board.
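The flow-balancing problem the header CFD addresses can be illustrated with a minimal hydraulic-network sketch: four quadrants in parallel between common supply and return headers, each modeled with a lumped quadratic pressure-drop law dP = K·Q². Equal pressure drop across parallel branches then forces each quadrant's flow to scale as 1/sqrt(K). All coefficients and flow rates below are hypothetical placeholders, not values from the paper, and the lumped model ignores the header losses the actual CFD resolves.

```python
import math

def flow_split(K, Q_total):
    """Split Q_total among parallel branches obeying dP_i = K_i * Q_i**2.

    With a common supply and return header, every branch sees the same
    pressure drop, so Q_i is proportional to 1 / sqrt(K_i).
    """
    weights = [1.0 / math.sqrt(k) for k in K]
    s = sum(weights)
    return [Q_total * w / s for w in weights]

# Hypothetical resistance coefficients (kPa per (L/min)^2) for the four
# quadrants of four series cold plates; equal values represent an ideally
# balanced assembly, perturbed values mimic build-to-build variation.
Q_total = 8.0  # assumed total node flow, L/min
balanced = flow_split([0.5, 0.5, 0.5, 0.5], Q_total)
skewed = flow_split([0.5, 0.55, 0.45, 0.5], Q_total)

print(balanced)  # equal quadrant flows
print(skewed)    # higher-resistance quadrants draw less flow
```

A check of this kind gives a quick bound on how much a resistance mismatch starves one quadrant before committing to full header CFD.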
