Data centre cooling
Data processing environments were estimated in 2006 to consume about 1.5% of the total electricity in the US. Data centre power consumption has roughly doubled in the last 5 years and is expected to double again in the next 5 years to more than 100 billion kWh, with estimates of power costs for US data centres now ranging as high as 3.3 billion USD. This increase has coincided with the adoption of new blade server technology. Traditional data centres were designed to accommodate 2 to 3 kW per rack, whereas power requirements for blade servers today can be as high as 20 to 30 kW. In addition to increased power supply requirements, these new high-density environments, with numerous blades packed tightly into a rack, generate significantly more heat than traditional servers and therefore require more cooling capacity.

A survey conducted by Emerson Network Power found that 64% of all data centres will not have enough electricity to handle all critical computing functions by 2011. However, solutions exist: a 2007 EPA report to the US Congress concluded that a 50% reduction in data centre energy consumption was achievable by 2011. For example, most data centres have implemented best practices such as the hot-aisle/cold-aisle rack arrangement. Further potential lies in sealing gaps in raised floors, using blanking panels in open rack spaces and avoiding the mixing of hot and cold air. Computational fluid dynamics (CFD) can be used to identify inefficiencies and optimize airflow.

Recent technologies, such as digital scroll compressors and variable frequency drives in computer room air conditioners (CRACs), allow high energy efficiency to be maintained at partial loads. High-density data centres with rack loads of up to 30 kW require supplemental cooling units, mounted above or alongside equipment racks, which pull hot air directly from the hot aisle and deliver cold air to the cold aisle. Compared with conventional CRACs, supplemental cooling units can reduce cooling costs by up to 30%.
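To put these rack densities in perspective, the short Python sketch below estimates the airflow needed to carry away a given rack heat load and the yearly cost of the cooling electricity. It is a back-of-the-envelope illustration only: the air temperature rise across the rack, the cooling overhead per watt of IT load, and the electricity price are assumed values for illustration, not figures taken from the sources cited above.

```python
"""Back-of-the-envelope rack cooling estimate.

A minimal sketch, not a design tool: it assumes essentially all
electrical power drawn by the IT equipment is rejected as heat, and
uses illustrative values (air temperature rise, electricity price,
cooling overhead) that are assumptions, not data from the article.
"""

AIR_DENSITY = 1.2         # kg/m^3, dry air near sea level at ~20 degC
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K)


def required_airflow_m3s(rack_power_w: float, delta_t_k: float) -> float:
    """Airflow needed to remove rack_power_w with a delta_t_k temperature
    rise across the rack (sensible heat only): Q = P / (rho * cp * dT)."""
    return rack_power_w / (AIR_DENSITY * AIR_SPECIFIC_HEAT * delta_t_k)


def annual_cooling_cost_usd(rack_power_w: float,
                            cooling_overhead: float,
                            usd_per_kwh: float) -> float:
    """Yearly cost of the cooling electricity alone, assuming the cooling
    plant draws cooling_overhead watts per watt of IT load."""
    cooling_kw = rack_power_w / 1000 * cooling_overhead
    return cooling_kw * 24 * 365 * usd_per_kwh


if __name__ == "__main__":
    delta_t = 11.0   # K, assumed air temperature rise across the rack
    price = 0.10     # USD/kWh, assumed electricity price
    overhead = 0.5   # assumed W of cooling power per W of IT load

    for rack_kw in (3, 20, 30):   # traditional vs. blade-server rack loads
        flow = required_airflow_m3s(rack_kw * 1000, delta_t)
        cost = annual_cooling_cost_usd(rack_kw * 1000, overhead, price)
        print(f"{rack_kw:>2} kW rack: {flow:5.2f} m^3/s "
              f"(~{flow * 2119:5.0f} CFM), cooling ~${cost:,.0f}/yr")
    # A cooling approach that is 30% more efficient (e.g. supplemental
    # units close to the racks) would cut the cost figure by about a third.
```

Under these assumptions a 3 kW rack needs roughly 0.2 m^3/s of airflow, while a 30 kW blade rack needs around ten times as much, which is why such racks outgrow room-level CRAC cooling and call for supplemental units placed close to the heat source.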