Research has shown that energy costs can account for around 30% of the total infrastructure costs of a data centre. But what should data centre managers consider to make the most effective use of their power and cooling infrastructure? We reviewed the EU Code of Conduct for Data Centres and developed this quick self-assessment to help you get started. Special thanks to Data Centre Alliance for inspiring the series.
We see four pillars to any assessment. They span power and cooling, and cover design, technology, utilisation and monitoring.
Data Centre Planning, Utilisation & Monitoring
It is important to develop a holistic strategy and management approach to the data centre. This will enable the data centre operator to deliver reliability, cost, utilisation and environmental benefits effectively.
- Involvement of Organisational Groups – Ineffective communication between the disciplines working in the data centre is a major driver of inefficiency as well as capacity and reliability issues.
- General Policies – These policies apply to all aspects of the data centre and its operation.
- Energy Use and Environmental Monitoring – The development and implementation of an energy monitoring and reporting management strategy is core to operating an efficient data centre. Most data centres currently have little or no energy use or environmental measurement capability; many do not even have a separate utility meter or bill. The ability to measure energy use and the factors affecting it is a prerequisite to identifying and justifying improvements. Note that measurement and reporting of a parameter may also include alarms and exceptions when that parameter passes outside its acceptable or expected operating range.
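As a minimal sketch of this kind of monitoring, the logic can be as simple as comparing each measured parameter against its acceptable range and raising an alarm when it drifts outside, alongside a basic PUE calculation from metered energy. The parameter names and limits below are illustrative assumptions, not values from the Code of Conduct:

```python
# Range-based environmental monitoring with alarms.
# Parameter names and limits are illustrative assumptions only.
ACCEPTABLE_RANGES = {
    "inlet_temp_c": (18.0, 27.0),           # example equipment intake band
    "relative_humidity_pct": (20.0, 80.0),  # example humidity band
    "it_load_kw": (0.0, 500.0),             # example design load limit
}

def check_readings(readings):
    """Return a list of alarm strings for readings outside their range."""
    alarms = []
    for name, value in readings.items():
        low, high = ACCEPTABLE_RANGES[name]
        if not (low <= value <= high):
            alarms.append(f"{name}={value} outside [{low}, {high}]")
    return alarms

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh
```

For example, a 29 °C inlet reading would produce one alarm, and a facility drawing 1,500 kWh to deliver 1,000 kWh to IT equipment has a PUE of 1.5.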
Cooling Design and Management
Cooling of the Data Centre is frequently the largest energy loss in the facility and as such represents a significant opportunity to improve efficiency.
- Air Flow Management and Design – The objective of air flow management is to minimise bypass air (air that returns to the CRAC units without performing any cooling) and the resulting recirculation and mixing of cool and hot air, which raises equipment intake temperatures. To compensate, CRAC unit air supply temperatures are frequently reduced or air flow volumes increased, both of which carry an energy penalty. Addressing these issues will deliver more uniform equipment inlet temperatures and allow set points to be increased (with the associated energy savings) without the risk of equipment overheating. Air management actions alone do not result in an energy saving – they are enablers which need to be tackled before set points can be raised.
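One way to quantify bypass and recirculation is the Return Temperature Index (RTI): the CRAC air-side temperature rise divided by the IT equipment temperature rise. Values below 100% suggest bypass air is diluting the return stream; values above 100% suggest hot exhaust is recirculating into intakes. A sketch (the variable names are ours):

```python
def return_temperature_index(crac_return_c, crac_supply_c,
                             equip_outlet_c, equip_inlet_c):
    """RTI (%) = CRAC air-side delta-T / IT equipment delta-T * 100.
    Below 100: bypass air diluting the return stream.
    Above 100: recirculation of hot exhaust into equipment intakes."""
    crac_dt = crac_return_c - crac_supply_c
    equip_dt = equip_outlet_c - equip_inlet_c
    return 100.0 * crac_dt / equip_dt
```

With a CRAC supplying at 18 °C and returning at 28 °C while equipment heats air from 22 °C to 34 °C, the RTI is about 83%, indicating bypass.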
- Cooling, Temperature & Humidity Management – The data centre is not a static system, and the cooling systems should be tuned in response to changes in the facility thermal load. Facilities are often overcooled, with air temperatures (and hence chilled water temperatures, where used) colder than necessary, resulting in an energy penalty. Increasing the set range for humidity can substantially reduce humidifier loads.
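The humidity point can be illustrated with a simple count of the hours a humidifier would run under a narrow versus a widened set range. The readings and set points below are illustrative assumptions:

```python
def humidifier_demand_hours(rh_readings_pct, low_setpoint_pct):
    """Hours (one reading per hour) where relative humidity sits below
    the humidification set point, i.e. the humidifier would run."""
    return sum(1 for rh in rh_readings_pct if rh < low_setpoint_pct)

readings = [32, 35, 38, 41, 44, 47, 50, 53]  # hourly RH%, illustrative
narrow = humidifier_demand_hours(readings, 45)  # tight band: humidify below 45%
wide = humidifier_demand_hours(readings, 35)    # widened band: below 35%
```

In this sample, widening the lower humidity limit from 45% to 35% cuts humidifier demand from five hours to one.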
The cooling plant typically represents the major part of the energy used in the cooling system. This is also the area with the greatest variation in technologies.
- Free and Economised Cooling – Free or economised cooling designs use cool ambient conditions to meet part or all of the facility's cooling requirements, so compressor work for cooling is reduced or removed, which can result in significant energy reduction. Economised cooling can be retrofitted to some facilities. The opportunities for free cooling increase in cooler climates and where higher temperature set points are used.
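The link between set points and free cooling hours can be sketched by counting how many hours the ambient air is cold enough to serve the supply set point. The approach temperature and sample data are illustrative assumptions:

```python
def economiser_hours(ambient_temps_c, supply_setpoint_c, approach_c=3.0):
    """Hours where ambient air can provide full free cooling, assuming
    the economiser needs `approach_c` degrees below the supply set
    point (the approach value is an illustrative assumption)."""
    limit = supply_setpoint_c - approach_c
    return sum(1 for t in ambient_temps_c if t <= limit)

hourly = [5, 10, 14, 16, 20, 22, 26, 30]  # sample ambient temps (deg C)
at_18 = economiser_hours(hourly, 18.0)    # lower set point: fewer free hours
at_24 = economiser_hours(hourly, 24.0)    # raised set point: more free hours
```

With this sample, raising the supply set point from 18 °C to 24 °C lifts the fully economised hours from three to five, illustrating why higher set points widen the free cooling window.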
- High Efficiency Cooling – The next preferred cooling technology is high efficiency cooling plant. Designs should operate efficiently at system level and employ efficient components. This demands an effective control strategy which optimises efficient operation without compromising reliability.
- Computer Room Air Conditioners – The second major component of most cooling systems is the set of air conditioner units within the computer room. In older facilities, the computer room side of the chiller plant is frequently poorly designed and poorly optimised.
Optimised Power Equipment Usage and Data Centre Layout
The other major part of the facility infrastructure is the power conditioning and delivery system. This normally includes uninterruptible power supplies, power distribution units and cabling but may also include backup generators and other equipment.
- Selection and Deployment of New Power Equipment – Power delivery equipment has a substantial impact upon the efficiency of the data centre and tends to stay in operation for many years once installed. Careful selection of the power equipment at design time can deliver substantial savings through the lifetime of the facility.
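The lifetime impact of power equipment efficiency is easy to estimate. As a sketch (the load, efficiencies and timeframe are illustrative assumptions, not figures from the Code of Conduct), compare the energy lost in two UPS units of different efficiency over ten years:

```python
def lifetime_energy_loss_kwh(it_load_kw, ups_efficiency, hours):
    """Energy lost in the UPS over `hours` of operation at a constant
    IT load, for a given end-to-end efficiency (0-1)."""
    input_kw = it_load_kw / ups_efficiency
    return (input_kw - it_load_kw) * hours

# Illustrative comparison over 10 years at a constant 200 kW IT load:
hours_10y = 24 * 365 * 10
loss_94 = lifetime_energy_loss_kwh(200.0, 0.94, hours_10y)  # 94% efficient UPS
loss_97 = lifetime_energy_loss_kwh(200.0, 0.97, hours_10y)  # 97% efficient UPS
saving = loss_94 - loss_97
```

Under these assumptions, the three-point efficiency difference saves over half a million kWh across the decade, which is why efficiency at the expected load point deserves attention at design time.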
- Office and Storage Spaces – Energy is also used in the non data floor areas of the facility, in office and storage spaces. There is discussion as to whether, to eliminate overlap, this section should be removed and replaced with a pointer to BREEAM, LEED or EU standards for green buildings; those standards do not cover the data centre part of the building.
- Building Physical Layout – The location and physical layout of the data centre building is important to achieving flexibility and efficiency. Technologies such as fresh air cooling require significant physical plant space and air duct space that may not be available in an existing building.