Data centres


The growth of the internet and cloud computing has caused a massive expansion in online data. This data is held in data centres, which have become fast-growing consumers of power. Although major energy users, data centres contribute substantially to energy savings elsewhere in the economy by enabling digitalisation in transport, infrastructure and across industry.

Leading IT companies are increasingly taking responsibility for the energy impacts of data centres through leadership in energy efficient, low-carbon design. However, 95% of data centres are old, small-scale operations. The average Australian data centre is now over 20 years old and many are inefficiently designed.

Operating data centres is expensive, and staying at the forefront of energy efficiency is the only way to keep costs down while maintaining reliability. Improving the performance of data centre systems and implementing greater management and control of their operation will result in numerous benefits including:

  • reduced energy costs and greenhouse gas emissions
  • enhanced productivity, reliability and security
  • improved market competitiveness and environmental credentials.


To operate efficiently, data centre operators need to understand how and where power is being consumed by IT systems, and whether the cooling capacity is correctly provisioned to the IT loads.

The National Australian Built Environment Rating System (NABERS) for data centres is a set of benchmarking tools for measuring the energy efficiency and environmental impact of data centres:

  • NABERS Energy for data centres (IT equipment) is for organisations that own or manage their IT equipment but do not control data centre support services such as air conditioning, lighting and security.
  • NABERS Energy for data centres (Infrastructure) is for data centre owners and managers. It allows them to determine their facility’s energy efficiency in supplying the infrastructure services to the IT equipment housed in a data centre.
  • NABERS Energy for data centres (Whole facility) combines both the IT Equipment and Infrastructure tools and is designed for organisations that both manage and occupy their data centre or where internal metering arrangements do not permit a separate IT Equipment or Infrastructure rating.

A widely used metric for comparing data centre energy performance is power usage effectiveness (PUE): the ratio of total facility power to IT equipment power, with the ideal being a PUE of 1. Studies show that, on average, data centres have a PUE of 2.5, while state-of-the-art facilities aim for around 1.5. Effective metering should be implemented to accurately understand the inputs and outputs of the facility, and to continuously monitor the PUE.
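The PUE calculation described above is simple arithmetic. As a minimal sketch (the function name and example figures are illustrative, not from a standard):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.
    A PUE of 1 would mean every watt entering the facility reaches IT load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 500 kW in total to support a 200 kW IT load:
print(pue(500, 200))  # -> 2.5, the reported industry average
```

Continuous metering of both quantities, rather than a one-off calculation, is what makes the metric useful for tracking efficiency over time.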

Make existing equipment more efficient

Maximising the efficiency of existing equipment can be an economical interim solution prior to a full upgrade. It’s also worth identifying and removing redundant servers to avoid wasting power and space.

Match loads to the efficiency range of equipment types. For example, fans are typically more efficient when running below full capacity, while compressors are most efficient at maximum capacity.

Ensure that cooling equipment is not over-powered for the task. Check that cooling is not being excessively triggered by overly responsive controls. Increase the allowable temperature range and avoid strict temperature control (further details below).

Prevent hot and cold air from mixing by using physical barriers and covering unused rack spaces with blanking panels. Efficient data centres tend to build corridors with walls and ceilings entirely isolating the hot server exhaust aisles from the cooler server inlet aisles. Blocked or dirty air ducts and poor airflow design mean more power is required for air to reach IT equipment.

Replace or add equipment

Replacing or adding new equipment can be expensive. Take the time to research the most energy-efficient equipment available when performing an upgrade, and ensure it will operate efficiently over a wide utilisation range, for example by specifying variable speed drives.

Consider installing air-side economisers, which draw cool air from outside through heat exchangers, or free adiabatic cooling. These remove the need to run compressors except on very hot and humid days. Similarly, water-side economisers use cool water from natural sources. Rear door heat exchangers supply cool water directly to the back of the server rack, cooling the hot exhaust air at the source. These are especially suitable where limited space results in high equipment densities, and in climates where economisers are not suitable.

Solid state drives (SSDs) have no spinning disks to power, and offer clear advantages in reduced energy use, greater speed and improved reliability.

Retrofits of more efficient air-handling units are often viable.

Temperature setpoints

Higher operating temperature tolerances for IT equipment reduce cooling needs. Ensure the system has effective sensors in place that communicate with the supply systems via a building management system (BMS).

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) advises that modern IT equipment (Class A1) can operate reliably at higher temperatures, and has broadened its guideline operating temperature range to 15°C to 32°C. Older equipment can typically tolerate temperatures only up to 25°C.
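A BMS setpoint check against these ranges is straightforward to express. This is a minimal sketch; the constant names and function are illustrative, with the thresholds taken from the guideline figures above:

```python
ASHRAE_A1_RANGE_C = (15.0, 32.0)  # guideline range for modern Class A1 equipment
LEGACY_MAX_C = 25.0               # typical tolerance of older equipment

def inlet_within_range(temp_c: float, legacy: bool = False) -> bool:
    """Check a server inlet temperature reading against the guideline range.
    Older equipment is held to the tighter 25C upper limit."""
    low, high = ASHRAE_A1_RANGE_C
    if legacy:
        high = LEGACY_MAX_C
    return low <= temp_c <= high

print(inlet_within_range(30.0))               # modern equipment: within range
print(inlet_within_range(30.0, legacy=True))  # older equipment: too hot
```

In practice these thresholds would drive BMS alarms or cooling setpoints rather than simple boolean checks.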

Cooling system design

The performance of data centre cooling systems significantly affects the total energy consumption and CO2 emissions. There are various configurations of ventilation and cooling equipment, with many systems available for room control. Expert design advice is needed to optimise performance to the specific situation.

Accurately calculating heat gains from the design will assist in plant sizing. Don’t allow the selected cooling equipment to operate inefficiently at higher than required capacity. Consider evaporative cooling to maximise efficiency.

Where possible, make use of free cooling by leveraging outside temperatures, including implementing night purge cycles. Cool, filtered air from outside can be drawn in to replace hot air, which is extracted using large fans.

Consider water-side cooling which takes advantage of lower outdoor ambient temperatures in autumn, winter and spring to precool heat exchangers with returned water. Use high-efficiency, low-friction compressors, with variable speed drive (VSD) systems. Many energy efficiency measures such as aisle containment and free cooling depend on VSDs being installed to maximise their energy-saving potential.
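The reason VSDs unlock such large savings is the fan affinity (cube) law: fan power scales roughly with the cube of fan speed, so a modest speed reduction yields a disproportionate power saving. The cube law is a standard engineering approximation, not a figure from this document; the sketch below is illustrative:

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Fan affinity law approximation: power scales with the cube of speed.
    E.g. a fan slowed to 80% of full speed draws roughly half its rated power."""
    return speed_fraction ** 3

print(round(fan_power_fraction(0.8), 2))  # -> 0.51
```

This is why measures like aisle containment pay off mainly when VSDs are present: containment lets fans run slower, and the cube law converts that slower speed into large energy savings.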

Implement an optimised underfloor supply system or high-density in-row cooling solution. For underfloor cooling, calculating the ideal floor-grille balance for each rack can reduce fan power by 60%.

Use a modular approach to avoid initial oversizing, so that the infrastructure can progressively be added and switched on as demand increases.

Rack layout

For new data centres, optimising hot and cold aisle containment at the design stage makes it easier to run the right number of fans and optimise other equipment configurations. For existing centres, the layout of the racks should be reviewed and any excessively hot zones identified.

Hot zones need to be managed either physically or virtually and matched to the cooling system. Locate the cooling sources close to the IT equipment to reduce cooling system losses and consider alternate cooling solutions such as in-row coolers or IT cabinet rear door heat exchangers.

Underfloor supply systems rely on cold air supplied at low level, with hot air returned to the computer room air conditioning (CRAC) unit at high level. The hot air should be drawn along the top of the hot aisle, with minimal contact with the cold supply air at the rack front.

Mixing of hot and cold air should be minimised as much as possible through good rack layout, and by providing blanking plates at any empty sections of the rack.


Server utilisation

Low server utilisation remains one of the largest opportunities for energy savings in data centres. The traditional approach of dedicating a server to each business application is inefficient due to low utilisation.

Server virtualisation offers a way to consolidate servers by allowing multiple different workloads on one physical host server. A ‘virtual’ server executes programs like a real server, but multiple virtual servers can work simultaneously on one physical host server, leading to higher total utilisation.
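The consolidation benefit can be estimated with simple bin-packing arithmetic. This is an illustrative sketch only (the first-fit heuristic, function name and 80% capacity headroom are assumptions, not from the source):

```python
def hosts_needed(workload_utils, host_capacity=0.8):
    """Estimate how many physical hosts a set of low-utilisation workloads
    could share after virtualisation, using a first-fit-decreasing heuristic.
    host_capacity caps each host's combined load to leave headroom."""
    hosts = []  # current combined load on each physical host
    for u in sorted(workload_utils, reverse=True):
        for i, load in enumerate(hosts):
            if load + u <= host_capacity:
                hosts[i] += u
                break
        else:
            hosts.append(u)  # no existing host has room: provision another
    return len(hosts)

# Ten dedicated servers, each ~10% utilised, could share two hosts:
print(hosts_needed([0.1] * 10))  # -> 2
```

Real virtualisation planning also accounts for memory, I/O and redundancy, but the CPU arithmetic alone shows why dedicated single-application servers waste so much capacity.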

Servers can be more efficiently used through improved data storage practices. Best practices include automated storage provisioning, data compression, deduplication, snapshots and thin provisioning. In the past, servers were allocated storage, based on anticipated requirements. Thin provisioning allocates storage on a ‘just enough’ basis by centrally controlling capacity and allocating space only as applications need it.
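Thin provisioning's "just enough" allocation can be sketched in a few lines. The class below is a hypothetical illustration of the principle (names and structure are assumptions, not a real storage API): volumes advertise a large virtual size, but physical capacity is consumed only as data is written:

```python
class ThinPool:
    """Minimal thin-provisioning sketch: advertise large virtual volumes,
    but draw from the physical pool only as data is actually written."""

    def __init__(self, physical_gb: float):
        self.physical_gb = physical_gb
        self.used_gb = 0.0
        self.volumes = {}  # name -> [virtual size, bytes written]

    def create_volume(self, name: str, virtual_gb: float) -> None:
        # Creating a volume consumes no physical space up front.
        self.volumes[name] = [virtual_gb, 0.0]

    def write(self, name: str, gb: float) -> None:
        virtual, written = self.volumes[name]
        if written + gb > virtual:
            raise ValueError("write exceeds volume size")
        if self.used_gb + gb > self.physical_gb:
            raise MemoryError("physical pool exhausted")
        self.volumes[name][1] += gb
        self.used_gb += gb
```

The contrast with traditional provisioning is that a 500GB volume created here costs nothing until applications actually fill it, so the pool can safely be oversubscribed, provided capacity is monitored centrally.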

Power supply

A key step towards improving data centre efficiency and performance is to optimise the power supply, which is composed of many components. Manufacturers' operational data indicates equipment efficiency when in peak condition, but efficiency decreases over time. Newer equipment in the same class will generally be more efficient.

A constant power supply must be ensured, even in the event of equipment failure. Resilience of the supply system is determined by the level of redundancy in place and the limitation of ‘single points of failure’.

Low carbon energy

In addition to energy efficiency measures, data centres can save money and reduce environmental impacts by investing in low carbon energy sources.

Where circumstances and space allows, onsite solar PV arrays are an effective way to reduce energy costs and greenhouse gas emissions. However, since the energy density of data centres is so high, it is sometimes not practical to use on-site renewables such as solar or wind. In such cases, power purchase agreements (PPAs) for offsite renewable energy can be a viable alternative.

Onsite generation with fuel cells or gas-powered generators may also be feasible. Tri-generation is especially worth considering, as the waste heat from the production of electrical power can be used to provide cooling via an absorption chiller.

Waste heat

Waste heat can be used directly or to supply cooling through the use of absorption or adsorption chillers, reducing chilled water plant energy costs by well over 50%.

The direct use of waste heat for low temperature heating applications such as preheating ventilation air or heating water for office spaces will provide worthwhile energy savings.

Waste heat absorbers use low-grade waste heat to thermally compress the chiller vapour, offsetting the mechanical compression used by conventional chillers.

Room construction and building materials

Data centres don’t have the same amenity requirements as an office space, such as natural light and views, and so should be constructed with materials offering the greatest insulation against transmission of heat.

Solar heat gains through windows should be eliminated by applying an insulated barrier, and any gaps that allow unnecessary infiltration or internal air leaks need to be eliminated.


5 star NABERS rating a reality

The M1 Melbourne data centre has been certified as Australia’s first NABERS 5 star-rated data centre infrastructure facility, demonstrating the performance achievable through comprehensive attention to energy efficiency. Efficient design delivers a PUE of 1.3, aided by sustainable free air-side cooling that reduces power consumption. The centre also has a 400kW rooftop solar PV array, believed to be the largest privately funded array in Australia.

High-temperature servers

New server designs with higher temperature tolerance can now operate at temperatures between 5°C and 47°C, substantially reducing cooling requirements. Compared to conventional servers, high-temperature energy-saving servers offer greater reliability, lower power consumption and easier deployment.

Efficient chilled-water cooling system settings

Most heat in a data centre is sensible heat, with very little latent heat, so there is little demand for dehumidification. The incoming water temperature for a data centre cooling unit can therefore be set substantially higher than in human-occupied environments, improving the energy efficiency of the air conditioning.

Power and infrastructure management tools

Centralised power management software and/or power-saving features embedded in the server hardware can automatically reduce power supply to IT equipment when not needed. Data Centre Infrastructure Management (DCIM) tools monitor, measure, manage and control data centre utilisation and energy consumption of all IT-related equipment and facility infrastructure components. DCIM can increase efficiency by helping managers identify opportunities for improved server utilisation, as well as highlighting faults or inefficiencies in the supporting systems.
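One of the simplest DCIM-style analyses is flagging servers whose telemetry shows persistently low utilisation. This is an illustrative sketch, not a real DCIM tool's API; the function, threshold and telemetry format are assumptions:

```python
def flag_underutilised(server_metrics, cpu_threshold=0.1):
    """List servers whose average CPU utilisation falls below the threshold:
    candidates for consolidation, power-saving modes or decommissioning."""
    return [name for name, samples in server_metrics.items()
            if sum(samples) / len(samples) < cpu_threshold]

# Hypothetical utilisation samples pulled from a DCIM monitoring feed:
telemetry = {"web-01": [0.05, 0.08, 0.04], "db-01": [0.55, 0.70, 0.62]}
print(flag_underutilised(telemetry))  # -> ['web-01']
```

A real deployment would combine such server-side analysis with facility metrics (power, cooling, space) to target both sides of the PUE ratio.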

Read more

Data Centres NABERS, NSW Government on behalf of the Australian Government and state and territory governments

Data Center IT Efficiency Measures Evaluation Protocol (PDF 778KB) US National Renewable Energy Laboratory

Data Center Efficiency Assessment (PDF 485KB) US National Resources Defense Council

Energy Efficiency Best Practice Guide Data Centre and IT Facilities US Energy Star

Energy Efficiency Policy Options for Australian and New Zealand Data Centres (PDF 2.45MB) Energy Rating

Improving Data Centre Infrastructure Efficiencies AG Coombs

12 Ways to Save Energy in Data Centers and Server Rooms US Energy Star