Understanding the need to reduce data centre PUE levels

Mark Awdas of Cannon Technologies discusses some of the power consumption challenges - and solutions to those challenges - facing modern data centre facility operators and managers

The rising price of energy - coupled with a growing awareness among management of the social responsibility companies have to reduce their energy consumption footprint - means that data centre owners, their clients and their managers have been revisiting power consumption issues in a big way over the last few years.

In parallel with this, the data centre industry has developed a measure of how effectively a data centre uses its energy. Known as PUE (Power Usage Effectiveness), this measure quantifies how much energy a facility consumes and how much of that energy actually reaches the IT equipment.

PUE is defined as the ratio of the total energy used by a data centre facility to the energy delivered to its computing (IT) equipment.

This is calculated by measuring energy use at or near the facility's utility meter, and then measuring the IT equipment load after the power conversion, switching and conditioning processes are completed.
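
To make the calculation concrete, here is a minimal sketch in Python; the 500kW and 200kW readings (and the function name) are purely illustrative assumptions, not figures from any real facility.

    # A minimal sketch of the PUE calculation described above.
    # The readings below are assumed figures, used only for illustration.
    def pue(total_facility_kw, it_equipment_kw):
        """Power Usage Effectiveness: total facility energy / IT equipment energy."""
        return total_facility_kw / it_equipment_kw

    # Example: 500 kW metered at the utility, 200 kW measured at the PDU outputs
    print(pue(500.0, 200.0))  # -> 2.5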

According to The Green Grid (www.thegreengrid.org) - an industry consortium active in developing metrics and standards for the IT industry - the most useful measurement point is at the output of the computer room PDUs (Power Distribution Units). This measurement should represent the total power delivered to the server racks in the data centre.

According to data centre association the Uptime Institute, a typical data centre has an average PUE of 2.5 - meaning that, for every 2.5 watts drawn at the utility meter, only one watt is delivered to the IT load. The Institute estimates that most facilities could achieve a PUE of 1.6 by using the most efficient current (2014) equipment and best practice.

This ratio can usually be achieved in most data centres through a relatively simple set of steps that boost power efficiency levels, and which also have the advantage of generating a good ROI (Return on Investment) as far as Capex (Capital Expenditure) is concerned.

These steps include the retirement of legacy hardware in order to significantly reduce the power and cooling requirements of the IT systems - and so create a greener data centre.

It's worth remembering here that legacy hardware - once it has been suitably 'scrubbed' of stored data (where appropriate) - can often be traded in with many vendors and their dealers.

So how does PUE work in practice? Well, in a data centre with a PUE of 2.5, supporting a 600W server actually requires the delivery of 1,500W (600W x 2.5) to the data centre as a whole.
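
Reversing the same calculation makes the overhead explicit; the short sketch below simply restates the example figures above.

    # Illustrative only: the facility power needed to support a given IT load at a given PUE.
    it_load_w = 600.0
    pue_ratio = 2.5
    facility_w = it_load_w * pue_ratio    # 1,500 W drawn at the utility meter
    overhead_w = facility_w - it_load_w   # 900 W consumed by cooling, power conversion and other overheads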

Unfortunately, most organisations lack any power-consumption metering that can break down usage at a level that allows them to gauge the results of their optimisation efforts. To help solve this problem, efforts to monitor energy use should start with the creation of a manufacturer's 'power profile' for each rack in an existing data centre.
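
As a hedged illustration of what such a 'power profile' might look like - the rack name, equipment models and wattages below are assumptions, not manufacturer data - a simple per-rack record could be built up as follows.

    # A hypothetical per-rack 'power profile' built from manufacturer typical-draw figures.
    # All models and wattages here are assumed for illustration.
    rack_profile = {
        "rack_id": "A1",
        "equipment": [
            {"model": "1U dual-socket server", "qty": 16, "typical_w": 350},
            {"model": "2U storage node", "qty": 4, "typical_w": 500},
        ],
    }
    rack_total_w = sum(item["qty"] * item["typical_w"] for item in rack_profile["equipment"])
    print(f"Rack {rack_profile['rack_id']}: {rack_total_w} W typical draw")  # 7,600 W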

Each department with an IT facility - and not just within a data centre itself - faces its own separate challenges that can cloud (no pun intended) the power consumption and efficiency issue for the systems concerned.

For example, facilities staff can be struggling with limits on rack and floor space, power availability and kit, whilst IT staff will be trying to ensure they have sufficient processing power, network bandwidth and storage capacity to support their upcoming IT initiatives - as well as ensuring sufficient redundancy to handle system disruptions.

Although balancing the needs of these two groups may sound relatively easy, the task is often complicated by the fact that - in the past - facilities staff and IT professionals have tended to treat their operational costs separately, spreading their overall costs across the organisation and making it difficult to assess their full impact.

Because of the operational differences that exist between facilities staff and their IT colleagues, it is clear that optimising data centre energy efficiency requires a high degree of careful planning.

This is in addition to the deployment of components such as power, cooling, and networking systems that can meet both current needs and also scale for future requirements - and so minimise TCO (Total Cost of Ownership) issues, both now and in the future.

The scalability issue is such that, when data centres reach 85 to 90 per cent of their power, cooling, space and network capacity, organisations must seriously consider either expanding their existing data centre or building a new one. This is, we have observed, a difficult strategic decision that can have a major impact on the company's bottom line.
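
One simple way of keeping an eye on that threshold is to track utilisation against each constraint. The sketch below uses assumed utilisation figures and an 85 per cent trigger purely for illustration.

    # Illustrative capacity check against the 85-90 per cent threshold discussed above.
    capacity_used = {"power": 0.88, "cooling": 0.82, "space": 0.91, "network": 0.76}
    THRESHOLD = 0.85
    constrained = [resource for resource, used in capacity_used.items() if used >= THRESHOLD]
    if constrained:
        print("Consider expansion or a new build - constrained on:", ", ".join(constrained))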

Adopting a green strategy

The good news, however, is that adopting a 'green strategy' shows how best practice for capacity expansion can increase the energy efficiency of a data centre - and also help to increase density, reduce costs and extend the life expectancy of existing data centres.

In a green data centre, the mechanical, electrical, and spatial elements (facilities) - as well as servers, storage, and networks - are usually designed for optimal energy efficiency and minimal environmental impact.

The first step in energy-efficiency planning involves measuring existing energy usage. It's worth noting that the power system in a given data centre is a critical element in the facilities infrastructure, so knowing where that energy is being used - and by which equipment - is essential when creating, expanding, or optimising a data centre.

As energy costs continue to rise, it is clear that aligning the goals and requirements of business, facilities, and IT departments will become more critical to optimising overall energy use and reducing the power costs in enterprise data centres.

Following the strategies outlined in this article - including the processes of monitoring current energy usage, retiring idle servers, and deploying energy-efficient virtualised servers - can help enterprises take a major step toward the realisation of a green data centre.

In many data centres, between 5 and 15 per cent of servers are no longer required and can usually be turned off. The cost savings from retiring these idle servers can be considerable.
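
The scale of that saving is straightforward to estimate. Every figure in the sketch below - server count, idle draw, PUE and tariff - is an assumption used only to illustrate the arithmetic.

    # Rough, illustrative estimate of the saving from retiring idle servers.
    servers = 400
    idle_fraction = 0.10         # within the 5-15 per cent range noted above
    avg_draw_w = 300             # assumed average draw of an idle legacy server
    pue_ratio = 2.5
    tariff_per_kwh = 0.12        # assumed energy cost per kWh

    kw_saved = servers * idle_fraction * avg_draw_w * pue_ratio / 1000.0
    annual_saving = kw_saved * 24 * 365 * tariff_per_kwh
    print(f"Roughly {kw_saved:.0f} kW of facility load, or about {annual_saving:,.0f} a year in energy costs")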

Average server performance has also increased - today's servers are far more powerful than those of a decade ago, and virtualisation allows enterprises to take advantage of that performance to consolidate multiple physical servers onto a single virtualised server. It is worth noting that server upgrades can also help in this regard.
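
As a back-of-the-envelope illustration - the consolidation ratio and power figures below are assumptions, not benchmarks - the consolidation arithmetic might look like this.

    # Illustrative consolidation arithmetic; all figures are assumed.
    legacy_hosts = 20
    legacy_draw_w = 400                  # assumed draw per legacy host
    vms_per_new_host = 10                # assumed consolidation ratio
    new_hosts = -(-legacy_hosts // vms_per_new_host)   # ceiling division -> 2 hosts
    new_draw_w = 600                     # assumed draw per modern virtualised host
    it_saving_w = legacy_hosts * legacy_draw_w - new_hosts * new_draw_w
    print(f"{new_hosts} virtualised hosts replace {legacy_hosts} legacy hosts, saving {it_saving_w} W of IT load")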

One of the pivotal moments in the evolution of data centre efficiency was the introduction of version 1.0 of the European Commission's 'Code of Conduct on Data Centres Energy Efficiency' (http://bit.ly/1luw7kK) back in 2008.

In many ways the publishing of this code was something of a wake-up call for the data centre industry - and has helped to generate a better industry understanding of the need to 'go green' where data centres are involved.

The Green Grid, however, has not rested on its laurels: last year the IT/energy industry association teamed up with ASHRAE - formerly known as the American Society of Heating, Refrigerating and Air-Conditioning Engineers, and now re-positioned as a sustainability association - to publish a review of the PUE metric.

Entitled 'PUE: A Comprehensive Examination of the Metric' (http://bit.ly/1eo5o4E), this is the 11th book in the Datacom Series of publications from ASHRAE's Technical Committee 9.9.

Its primary goal, says ASHRAE, is to provide the data centre industry with unbiased and vendor neutral data in an understandable and actionable way.

At the time of the book's publication, John Tuccillo, chairman of the board for The Green Grid Association, said that data centres are complex systems for which power and cooling remain key issues facing IT organisations today.

"The Green Grid Association's PUE metric has been instrumental in helping data centre owners and operators better understand and improve the energy efficiency of their existing data centres, as well as helping them make better decisions on new data centre deployments," he explained.

Conclusions

As energy costs continue to rise, it is clear that aligning the goals and requirements of business - as well as facilities and IT departments - is now critical to optimising energy usage and so reducing power costs in enterprise data centres.

Our broad recommendation to help reduce these costs - as well as to optimise power consumption for all types of data centres - is to closely monitor a centre's current energy usage, retire idle servers and deploy energy-efficient virtualised servers wherever possible.

Our observations also suggest that, if you are involved in the management or operation of data centres, then the PUE ratio will matter to you. In view of this, you should also be looking at reducing the power consumption of the data centre - and so improving your facility's benchmark along the way.

The human element in the data centre power efficiency stakes should also not be ignored - especially in today's facilities management arena. Vendors and data centre staff should always be able to advise clients on how to reduce temperatures and energy usage using technologies such as innovative hot- and cold-aisle designs.

Since the UK Carbon Reduction Commitment (CRC) obligations were enacted back in April 2010 (http://bit.ly/1luwLPb), it should be clear that vendors and data centre providers need to work together in developing industry standards and ratings that work.

Cannon Technologies believes that the data centre industry - from the power suppliers all the way to the rack makers - needs to work together to improve efficiencies and so ensure that we are all at the forefront of efficient and green data centre operations.