Data centres
The growth of the internet and cloud computing has caused a massive expansion of accessible information, held in data centres.
Despite being heavy consumers of energy, data centres contribute substantially to energy savings across the economy by enabling digitalisation of infrastructure and industry.
Major IT companies are taking responsibility for the energy impacts of data centres through energy efficient design. However, the average Australian data centre is now over 20 years old and many are inefficiently designed.
Staying at the forefront of energy efficiency is the only way to keep costs down while maintaining reliability. There are many opportunities available for system improvement and significant energy savings.
Benchmarking
To operate efficiently, assess where power is being consumed across systems and whether cooling capacity is correctly matched to IT loads.
The National Australian Built Environment Rating System (NABERS) for data centres measures the energy efficiency and environmental impact of data centres. It is available in three rating streams:
IT equipment
This rating is for organisations who manage their IT equipment but don’t control building amenities such as air conditioning, lighting and security.
Infrastructure
This rating is for data centre owners and managers. It allows self-determination of a facility’s energy efficiency in supplying infrastructure services to IT equipment.
Whole facility
This rating combines the IT Equipment and Infrastructure tools. It is for organisations that manage and occupy their facility, or where internal metering arrangements don’t permit separate IT equipment or Infrastructure ratings.
A useful metric for comparing data centre energy performance is power usage effectiveness (PUE). This provides a ratio of total facility power to the IT equipment power, with the ideal being a PUE of 1. Globally, the average annual PUE reported in 2022 was 1.55.
Effective metering of a data centre should be implemented to understand inputs and outputs of the facility, and to continuously monitor the PUE.
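As a quick check, PUE can be computed directly from metered readings. The sketch below is a minimal illustration, assuming hypothetical kilowatt readings at the utility meter and at the IT racks.

```python
# Minimal sketch of a PUE calculation from metered readings.
# The readings below are illustrative, not from any specific facility or BMS.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Example: 1,550 kW drawn at the utility meter, 1,000 kW delivered to IT racks
print(f"PUE = {pue(1550, 1000):.2f}")  # PUE = 1.55, the 2022 global average
```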
Optimise existing equipment
Maximising the efficiency of existing equipment can be an economical solution prior to a full upgrade. It’s also worth removing redundant servers to avoid wasting power and space.
Match equipment to task
- Match loads to the efficiency range of equipment types. For example, fans are typically more efficient when running below full capacity, while compressors are most efficient at maximum capacity (see the fan power sketch after this list).
- Ensure that cooling equipment is not over-powered for the task. Check that cooling isn’t being excessively triggered by overly responsive controls.
- Increase the allowable temperature range and avoid strict temperature control.
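The fan claim above follows from the fan affinity laws: airflow varies roughly linearly with fan speed, while fan power varies roughly with its cube, so a modest speed reduction gives a large power saving. A minimal illustration, using the idealised cube law rather than any measured fan curve:

```python
# Illustrative application of the fan affinity laws: airflow scales linearly
# with fan speed, while fan power scales roughly with the cube of speed.
# Real fans deviate from this ideal, so treat the numbers as indicative only.

def relative_fan_power(speed_fraction: float) -> float:
    """Fan power as a fraction of full-speed power (cube law)."""
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.5):
    print(f"{speed:.0%} speed -> {relative_fan_power(speed):.0%} power")
# 100% speed -> 100% power
# 80% speed -> 51% power
# 50% speed -> 12% power
```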
Airflow
Blocked air ducts and poor airflow design mean more power is required for air to reach IT equipment.
- Prevent hot and cold air from mixing by using physical barriers and covering unused rack spaces with blanking panels.
- Design corridors with walls and ceilings isolating hot server exhaust aisles from the cooler server inlet aisles.
Replace or add equipment
Replacing or adding new equipment can be expensive, so research the most energy-efficient equipment available.
Some other things to consider:
- Ensure new equipment will be efficient over a wide utilisation range, for example by specifying variable speed drives (VSDs).
- Consider installing economisers, which draw cool air from outside through heat exchangers, or free adiabatic cooling. These alleviate the need to run compressors except on very hot and humid days.
- Solid state drives (SSDs) have no spinning discs to power, an advantage that brings greater speed, reliability and reduced energy use.
- Retrofits of more efficient air-handling units are often viable.
Water economisers
Water economisers use cool water from natural sources. Rear door heat exchangers can supply cool water directly to the back of the server rack which then cools hot air from the server directly at the source.
This approach is especially useful where space is limited, and in climates where air-side economisers are not suitable.
Cooling system design
The performance of data centre cooling systems affects total energy consumption and emissions.
There are various configurations of ventilation and cooling equipment available. Expert design advice is needed to optimise performance to the specific situation.
Other things to consider when designing the cooling system:
- Calculating heat gains from the design will assist in plant sizing (a first-pass sizing sketch follows this list). Don't allow cooling equipment to operate at higher than required capacity.
- Consider evaporative cooling to maximise efficiency.
- Where possible, make use of free cooling from outside air, including night purge cycles. Cool, filtered air can be drawn in to replace hot air, which is extracted by fans.
- Consider water-side cooling which takes advantage of lower outdoor ambient temperatures in autumn, winter and spring to precool heat exchangers with returned water.
- Use high-efficiency, low-friction compressors with VSD systems. Methods such as aisle containment and free cooling depend on VSDs to maximise their energy-saving potential.
- Implement an optimised underfloor supply system or high-density in-row cooling solution.
- For underfloor cooling, calculating the ideal floor-grille balance for each rack can reduce fan power by 60%.
- Use a modular approach to avoid initial oversizing. The infrastructure can be added and switched on as demand increases.
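To make the sizing point concrete, the sketch below shows a hypothetical first-pass calculation: heat gains are summed and cooling capacity is added in modules rather than oversized up front. All figures are illustrative assumptions, not design values.

```python
# Hypothetical first-pass plant sizing: cooling load is dominated by IT power,
# plus smaller gains from power losses (UPS, distribution), lighting and the
# building fabric. All figures here are illustrative assumptions.

it_load_kw = 800.0              # metered or design IT load
ups_distribution_losses = 0.08  # assumed 8% of IT load lost as heat
lighting_kw = 15.0
building_fabric_kw = 25.0       # solar/conduction gains through the envelope

cooling_load_kw = (it_load_kw * (1 + ups_distribution_losses)
                   + lighting_kw + building_fabric_kw)

# Modular approach: add cooling modules as demand grows rather than
# oversizing on day one. N+1 adds one module of redundancy.
module_kw = 250.0
modules_needed = -(-cooling_load_kw // module_kw)  # ceiling division
print(f"Cooling load: {cooling_load_kw:.0f} kW")
print(f"Modules (N+1): {int(modules_needed) + 1} x {module_kw:.0f} kW")
# Cooling load: 904 kW
# Modules (N+1): 5 x 250 kW
```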
Rack layout
Dense-packing a rack layout with servers may be efficient in terms of space, but the incremental cooling it will require can become a significant energy cost.
- For existing centres, layout of racks should be reviewed for areas of excessive heat.
- For new centres, optimise hot and cold aisle containment at the design stage. This will make it easier to run the right number of fans and optimise other equipment configurations.
- Hot zones need to be managed either physically or virtually and matched to the cooling system.
- Place cooling sources near IT equipment. Consider in-row coolers or IT cabinet rear-door heat exchangers.
- Underfloor supply systems rely on cool air at low level, with hot air returned to the computer room air conditioning (CRAC) unit at high level. Hot air should be drawn along the top of the hot aisle, away from the cold supply air at the rack front.
- Minimise mixing of hot and cold air as far as possible.
- Fit blanking panels in empty rack sections.
Temperature setpoints
Higher operating temperature tolerances for IT equipment reduce cooling needs. Ensure the system has effective sensors in place, communicating with the supply systems via a building management system (BMS).
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) advises that modern IT equipment (Class A1) can operate reliably at higher temperatures, and has broadened its guideline IT operating temperature range to 15°C to 32°C. Older equipment can typically tolerate temperatures up to 25°C.
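In practice, a BMS script or operator check might compare measured rack inlet temperatures with the guideline range before raising setpoints. The sketch below is illustrative only; the sensor readings and class labels are hypothetical.

```python
# Hedged sketch: checking server inlet temperatures against the guideline
# ranges quoted above (15-32 degC for modern Class A1 equipment, up to
# 25 degC for older kit) before raising cooling setpoints.
# The readings and class labels are hypothetical.

ALLOWABLE_RANGE = {"A1_modern": (15.0, 32.0), "legacy": (15.0, 25.0)}

def setpoint_headroom(inlet_temps_c: list[float], equipment_class: str) -> float:
    """Degrees the cooling setpoint could rise while the hottest
    measured inlet stays inside the allowable range."""
    low, high = ALLOWABLE_RANGE[equipment_class]
    hottest = max(inlet_temps_c)
    return max(0.0, high - hottest)

# Example: rack inlet sensor readings collected via the BMS
readings = [21.4, 22.8, 24.1, 23.5]
print(f"Headroom: {setpoint_headroom(readings, 'A1_modern'):.1f} degC")
# Headroom: 7.9 degC
```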
Servers
Low server use remains one of the largest opportunities for energy savings in data centres. The standard approach of a dedicated server for each business application is inefficient due to low utilisation.
Server virtualisation consolidates servers by allowing multiple workloads on one physical host. A ‘virtual’ server executes programs like a real server, and many virtual servers can run simultaneously on a single physical machine.
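The consolidation saving can be estimated with simple arithmetic. The figures below are assumptions for illustration, not benchmarks.

```python
# Illustrative consolidation estimate: many lightly loaded physical servers
# packed onto fewer virtualised hosts. All utilisation figures are assumptions.
import math

physical_servers = 40
avg_utilisation = 0.10           # typical low utilisation of dedicated servers
target_host_utilisation = 0.60   # conservative ceiling for a virtualised host

total_work = physical_servers * avg_utilisation   # 4.0 "server-equivalents"
hosts_needed = math.ceil(total_work / target_host_utilisation)
print(f"{physical_servers} servers -> {hosts_needed} virtualised hosts")
# 40 servers -> 7 virtualised hosts
```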
Servers can be optimised through improved data storage procedures.
Best practices include:
- automated storage provisioning
- data compression
- deduplication
- snapshots
- thin provisioning.
In the past, servers were allocated storage based on anticipated requirements. Thin provisioning allocates storage on a ‘just enough’ basis, as applications need it.
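The difference is easy to see in a toy calculation. The volume names and sizes below are hypothetical.

```python
# Sketch of thin vs thick provisioning. The volume sizes are hypothetical.
# Thick: capacity reserved up front against anticipated need.
# Thin: capacity allocated 'just enough', as applications actually write data.

volumes = {"crm":  {"anticipated_gb": 500, "written_gb": 60},
           "erp":  {"anticipated_gb": 800, "written_gb": 150},
           "mail": {"anticipated_gb": 300, "written_gb": 90}}

thick = sum(v["anticipated_gb"] for v in volumes.values())
thin = sum(v["written_gb"] for v in volumes.values())
print(f"Thick provisioning reserves {thick} GB; thin provisioning uses {thin} GB")
# Thick provisioning reserves 1600 GB; thin provisioning uses 300 GB
```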
Power supply
A key step in improving data centre efficiency is to optimise the power supply.
The power supply is composed of many components. Manufacturers’ operational data indicates efficiency when equipment is in peak condition, but this decreases over time. Newer equipment in the same class will generally be more efficient.
A constant power supply must be ensured, even in the event of equipment failure. Resilience of the supply system is determined by the level of redundancy in place and the limitation of single points of failure.
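A simple way to reason about redundancy is the availability of parallel, independent supply paths: with N paths each available a fraction a of the time, at least one is up with probability 1 - (1 - a)^N. The sketch below uses an assumed availability figure for illustration.

```python
# Hedged sketch of why redundancy improves supply resilience: with n
# independent supply paths each of availability a, at least one path is up
# with probability 1 - (1 - a)**n. The availability figure is an assumption.

def parallel_availability(a: float, n: int) -> float:
    return 1 - (1 - a) ** n

ups_availability = 0.999  # assumed availability of a single supply path
for paths in (1, 2):
    print(f"{paths} path(s): {parallel_availability(ups_availability, paths):.6f}")
# 1 path(s): 0.999000
# 2 path(s): 0.999999
```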
Low-carbon energy
In addition to energy-efficiency measures, data centres can save money and reduce environmental impacts by investing in low-carbon energy sources.
Where circumstances and space allow, onsite solar PV arrays are an effective way to reduce energy costs and greenhouse gas emissions. However, since the energy density of data centres is so high, it’s sometimes not practical to use on-site renewables such as solar or wind. In such cases, power purchase agreements (PPAs) for offsite renewable energy are a viable alternative. Companies are also investigating more novel options, including energy storage solutions.
Onsite generation with fuel cells or gas-powered generators may also be feasible. Tri-generation is worth considering, as waste heat can be used to provide cooling via an absorption chiller.
Waste heat
Waste heat can be used to supply cooling via absorption chillers, which can reduce chilled water plant energy costs by well over 50%.
It can also be used directly for low temperature heating applications, such as preheating ventilation air or water for office spaces.
Room construction materials
Data centres don’t have the same amenity needs as an office space, such as natural light and views. Centres should be constructed with materials offering the greatest insulation against transmission of heat.
Block solar heat gains through windows, and seal gaps with strip insulation or sealant.
Innovations
5 star NABERS rating a reality
NEXTDC's M1 Melbourne data centre has been certified as Australia’s first NABERS 5 star-rated data centre infrastructure facility. Efficient design means a PUE rating of 1.3 with sustainable free air-side cooling that reduces power consumption. The centre also has a 400kW solar PV rooftop array.
High-temperature servers
New server designs with higher temperature tolerance can operate between 5°C and 47°C, substantially reducing cooling requirements. High-temperature energy-saving servers also offer greater reliability and ease of deployment.
Efficient chilled-water cooling system settings
There is very little latent heat in data centres, meaning little demand for dehumidification. The incoming water temperature for a cooling unit can therefore be set much higher than for heavily populated offices, resulting in more efficient air conditioning.
Close coupled fluid cooling
Innovators are turning to specialised refrigerant fluids to provide greater cooling and system performance. Liquid is passed through computer components via tiny heat exchanger channels, or components are immersed in a non-conductive dielectric fluid.
Underwater and cold climate data centres
Cooling is essential for efficient data centre operation, so situating data centres in cold regions is an attractive option. Microsoft has assessed the feasibility of underwater data centre units, using the surrounding water as coolant, with favourable results.
Power and infrastructure management tools
Centralised power management software and/or power-saving features embedded in the server hardware can reduce power supply to IT equipment when not needed.
Data centre infrastructure management (DCIM) tools monitor, measure, manage and control data centre utilisation. They also track the energy consumption of IT equipment and infrastructure components, helping to identify opportunities for improved server utilisation and to highlight inefficiencies in the supporting systems.
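The kind of check such a tool automates can be sketched in a few lines: flag servers whose measured utilisation is too low to justify their power draw. All server names, readings and the threshold below are made up for illustration.

```python
# Sketch of the kind of check a DCIM tool automates: flag servers whose
# CPU utilisation is too low to justify their power draw. Data is made up.

servers = [
    {"name": "app-01", "avg_cpu": 0.62, "power_w": 350},
    {"name": "app-02", "avg_cpu": 0.04, "power_w": 310},  # consolidation candidate
    {"name": "db-01",  "avg_cpu": 0.35, "power_w": 420},
    {"name": "legacy", "avg_cpu": 0.01, "power_w": 290},  # possibly redundant
]

UNDERUSED = 0.05  # assumed threshold for 'comatose' servers

for s in servers:
    if s["avg_cpu"] < UNDERUSED:
        print(f"{s['name']}: {s['avg_cpu']:.0%} CPU for {s['power_w']} W "
              "- candidate for consolidation or decommissioning")
```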
5G and AI
Data centres are set to rapidly evolve with the rollout of 5G technology. In energy management, 5G is enabling greater IoT integration and AI optimisation of power management and network utilisation.
Read more
Data Centres, NABERS
Data Center IT Efficiency Measures Evaluation Protocol (PDF 778KB), US National Renewable Energy Laboratory
Data Center Efficiency Assessment (PDF 485KB), US Natural Resources Defense Council
Energy Efficiency Best Practice Guide: Data Centre and IT Facilities, US Energy Star
Top 12 Ways to Decrease the Energy Consumption of Your Data Center (PDF 2.02MB), US Energy Star