The rapid growth in cloud computing, the Internet of Things (IoT), and data processing via Machine Learning (ML) has greatly increased our need for computing resources. As this growth continues, data centers are expected to consume an ever larger share of the global energy supply, so improving their energy efficiency is crucial. One of the biggest contributors to their energy consumption is the energy required to cool the data center and keep the servers within their intended operating temperature range. Indeed, about 40% of a data center's total power consumption goes to air conditioning [1].
Here, we study how the server inlet and outlet air temperatures, as well as the CPU temperatures, depend on server loads typical of real Internet Protocol (IP) traces. The trace data used here are from Google clusters and include timestamps, job and task IDs, and the number and usage of CPU cores. The resulting IT loads are distributed using standard load-balancing methods such as Round Robin (RR) and the CPU utilization method.
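To make the two baseline policies concrete, the following is a minimal sketch of RR and utilization-based dispatch; the server names, the per-job load increment, and the utilization model are illustrative assumptions, not the experimental configuration used in this study.

```python
# Minimal sketch of the two standard dispatch policies named above.
# Server names and the per-job load increment are assumed for illustration.
from dataclasses import dataclass
from itertools import cycle


@dataclass
class Server:
    name: str
    cpu_utilization: float = 0.0  # fraction of cores in use, 0.0-1.0


def round_robin(servers):
    """Cycle through the servers in a fixed order, ignoring their load."""
    return cycle(servers)


def least_utilized(servers):
    """Pick the server with the lowest current CPU utilization."""
    return min(servers, key=lambda s: s.cpu_utilization)


servers = [Server(f"server-{i}") for i in range(4)]  # hypothetical rack

# Round Robin: jobs 0..7 land on servers 0,1,2,3,0,1,2,3 in turn.
rr = round_robin(servers)
for job_id in range(8):
    target = next(rr)
    target.cpu_utilization += 0.1  # assumed per-job load increment
    print(f"job {job_id} -> {target.name} (RR)")

# CPU utilization method: each job goes to the least-loaded server.
for job_id in range(8, 12):
    target = least_utilized(servers)
    target.cpu_utilization += 0.1
    print(f"job {job_id} -> {target.name} (least utilized)")
```

Note that neither policy consults temperature: RR ignores server state entirely, and the CPU utilization method balances load but treats every server position in the rack as thermally identical.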
Experiments are conducted in the Data Center Laboratory (DCL) at the Georgia Institute of Technology to monitor the server outlet air temperature, as well as real-time CPU temperatures, for servers at different heights within the rack. Server temperatures are measured by online temperature monitoring with XBee, Raspberry Pi, and Arduino hardware, together with hot-wire anemometers. Because the temperature response varies with server position, in part due to spatial variations in the cooling airflow over the rack inlet and in server fan speeds, this paper tests and validates a new load-balancing approach that accounts for the spatially varying temperature response within a rack.
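As a rough illustration of how a dispatcher might fold a position-dependent temperature response into placement, consider the sketch below. The sensitivity coefficients, the assumed per-job load increment, and the scoring rule are all hypothetical; they are not the method validated in this paper.

```python
# Illustrative sketch only: placement that penalizes thermally sensitive
# rack positions. All numbers below are assumptions for demonstration.

# Assumed per-position sensitivity (deg C of CPU temperature rise per
# unit of added load); servers higher in the rack often run warmer.
temp_sensitivity = {"server-0": 0.8, "server-1": 0.9,
                    "server-2": 1.1, "server-3": 1.3}
cpu_utilization = {name: 0.0 for name in temp_sensitivity}   # current load
cpu_temp = {name: 45.0 for name in temp_sensitivity}         # deg C, assumed

JOB_LOAD = 0.1  # assumed per-job load increment


def thermal_aware_pick(weight=0.05):
    """Choose the server whose predicted post-assignment CPU temperature
    is lowest, with a small tie-breaking penalty on current utilization."""
    def predicted_temp(name):
        return (cpu_temp[name]
                + temp_sensitivity[name] * JOB_LOAD
                + weight * cpu_utilization[name])
    return min(temp_sensitivity, key=predicted_temp)


for job_id in range(4):
    target = thermal_aware_pick()
    cpu_utilization[target] += JOB_LOAD
    cpu_temp[target] += temp_sensitivity[target] * JOB_LOAD
    print(f"job {job_id} -> {target}")
```

Under this toy model, jobs drift toward the less thermally sensitive positions in the rack rather than being spread uniformly, which is the qualitative behavior a temperature-aware balancer is meant to produce.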