Reducing data center power consumption
When it comes to computing, power used to be an advantage: the calculating speed of processors. But now IT managers are equally focused on power as a liability: the energy bought and managed to run those processors.
By some estimates, data center energy consumption has almost quadrupled in the past decade, as more and increasingly powerful servers are brought online to answer queries, stream content, complete transactions and perform calculations and analysis in every sector of society and the economy. The increased use of “cloud” computing for virtual collaboration, location-independent productivity and software-as-a-service flexibility has dramatically escalated the trend and increased anxiety over a looming data center energy crisis.
This is a “green” problem in at least three ways:
- Green for the environment—Organizations are under intense pressure to reduce their carbon footprints, even as they grow and prosper. That pressure isn’t coming just from environmentalists anymore, but also from governments, shareholders and customers.
- Green for the economy—Not only is electricity becoming a major expense in data centers, it is becoming the major expense for IT operations. And unfortunately, most of the energy is wasted on overhead processes, rather than the value-creating computing itself.
- Green for avoiding a stop light—Regardless of the cost or consequences of runaway power consumption, many data centers are simply running out of power capacity, based on the energy available in their facilities or even from their local grids, endangering the reliability of current operations and preventing growth or addition of new capabilities without investing in new facilities.
The nickname “server farms” conjures images too bucolic for the industrial warehouses pumping watts, bytes and BTUs through stacks of as many as 15,000 servers—high-performance boxes often running their processors, memory chips and disk drives around the clock. Today’s data centers are not only storage for data, content and documents; they also house the computing power to run operations and applications on that material.
In 2007, the federal Environmental Protection Agency’s Energy Star program replied to a Congressional query with a report on trends, needs and opportunities in data center energy issues. The report found that the growing number of servers brought online to handle data processing, storage and networking in industries as diverse as finance, media, academia and government had effectively doubled data center energy consumption between 2000 and 2006. It attributed 38 percent of the electricity use to enterprise-class data centers (the nation’s largest and fastest growing), while federal government data centers accounted for about 10 percent of the total. From that, the EPA projected another doubling by 2011.
In fact, by 2007 Gartner was estimating that data centers accounted for almost a fourth of the global CO2 emissions attributable to IT, an industry that itself produced as much CO2 as aviation. In 2008, a McKinsey & Company report noted that data centers were consuming about a quarter of corporate IT budgets (including labor). Moreover, the electricity to run them had become the largest single expense, exceeding the cost of the servers themselves (over the typical equipment refresh cycle of 3 to 5 years)—suggesting to some a new business model similar to mobile phones or inkjet printers, whereby utilities sell servers at a discount and get rich supplying the power.
Making power productive
The problem is that very little of that power is considered productive—that is, being used directly by the servers’ computing components to convert data to useful information. Data center power consumption can be divided into the physical side—power supply, conversion, conditioning and cooling—and the logical side—server, storage and network. The physical side consumes one-third to one-half of the incoming electricity. But even on the logical side, an extremely low percentage of the power consumed actually does any work, experts say.
Most data centers are operating at only 5 percent to 10 percent computing efficiency. A common measure of energy use in data centers is power usage effectiveness (PUE): total facility power divided by the power delivered to the IT equipment. Most large organizations’ data centers operate with a PUE of around 2.5, which means that for every 2.5 watts coming in, only 1 watt reaches the IT equipment, and even less of that goes to actual computing.
For example, much of the power goes to converting AC to DC voltage in the power supplies and regulating voltage on the motherboard (not to mention protecting its reliability with backup schemes), but those components are almost always designed for price, not efficiency. More non-productive energy has to be devoted to cooling the operating environment, especially given the amount of waste heat generated by today’s high-performance processors.
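The PUE arithmetic above can be made concrete with a short sketch. The facility figures below (1,000 kW drawn from the grid, 400 kW reaching the servers) are hypothetical, chosen only to reproduce the 2.5 ratio the article cites:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by the
    power that actually reaches the IT equipment (servers, storage,
    network). 1.0 is the theoretical ideal; around 2.5 is typical of
    the legacy data centers described above."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 1,000 kW in, of which 400 kW reaches the
# servers; the rest goes to cooling, power conversion and conditioning.
ratio = pue(total_facility_kw=1000.0, it_equipment_kw=400.0)
print(f"PUE = {ratio:.1f}")                                # PUE = 2.5
print(f"Overhead = {1 - 1 / ratio:.0%} of incoming power") # Overhead = 60%
```

Note that even the 40 percent reaching the servers overstates productive work, since the IT load includes power-supply losses and idle components.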
Such poor productivity makes it sound as if IT departments are hopelessly saddled with obsolete equipment and facilities, and that improving energy efficiency requires starting from scratch on a new state-of-the-art data center. But in this economy, most organizations will do almost anything to avoid building a new facility, and instead need to squeeze more efficiency out of their inefficient legacy plants. In reality, these are not two separate sets of problems and solutions. While renovating an existing data center may not yield all the long-term benefits of building a green field facility, there is plenty of low-hanging fruit for improving efficiency within the existing physical infrastructure, especially on the cooling side.
Cutting power at CLP Power
First established in 1901, CLP Power Hong Kong (formerly China Light & Power) provides electricity to 2.3 million residential, commercial and industrial customers and is the principal subsidiary of CLP Holdings. As the group continued to expand throughout the Asia Pacific region, it anticipated a need to double the capacity of a data center housed in a 28-year-old building with limited floor space.
“We needed to think out of the box,” says Andre Blumberg, head of CLP Power’s group planning, control and IT operations. “Rather than investing a lot of money in a brand new facility, we decided to perform an in-place upgrade.” Blumberg admits it was a challenge to keep the IT equipment running 24/7 during the upgrade, but the decision saved $2.8 million on construction and spared the environmentally conscious company from having to dump 15 tons of demolition waste.
“In order to overcome the increasing floor space requirement, we opted for the latest generations of blades and virtualized storage,” he explains. “This required us to adopt a more efficient cooling system.”