
Reducing data center power consumption

CLP installed newer, more energy-efficient blade servers that can flexibly power down unused CPUs and other resources. Cable trunks were moved from the floor to the ceiling to free space and increase airflow under the raised floor. Server racks were arranged front-to-front and back-to-back to optimize airflow and conserve cooling capacity. Separating hot and cold aisles that way is more efficient: Because chilled air doesn’t mix with hot exhaust air in the aisles, inlet air only has to be chilled to a moderate 22 to 24 °C (in some climates, even outside air may be cool enough). While the current installation uses an air-cooled, medium-density cabinet design, Blumberg prepared for future growth and higher-density computing by installing a chilled water connection so water-cooled hardware racks can be adopted later.
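To make the free-cooling point concrete, here is a minimal sketch of the kind of decision an air-side economizer makes: choose the cheapest cooling mode that can still hold the 22 to 24 °C inlet target. The approach margin, threshold and mode names are illustrative assumptions, not CLP’s actual control logic.

```python
# Illustrative free-cooling decision, NOT CLP's actual control logic:
# pick the cheapest cooling mode that can still hold the cold-aisle
# inlet target of 22-24 C. The approach margin and mode names are
# assumptions made for this sketch.

INLET_LIMIT_C = 24.0  # upper end of the acceptable inlet range

def cooling_mode(outdoor_c: float, approach_c: float = 2.0) -> str:
    """Return a cooling mode for a given outdoor air temperature.

    approach_c is an assumed margin: outside air must be at least this
    much cooler than the inlet limit to be usable on its own.
    """
    if outdoor_c <= INLET_LIMIT_C - approach_c:
        return "free air"      # outside air alone keeps the cold aisle in range
    return "chilled water"     # fall back to mechanical cooling

for t in (15.0, 23.0, 35.0):
    print(f"{t:.0f} C outside -> {cooling_mode(t)}")
```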

As a result of the changes, CLP improved its power usage effectiveness (PUE) ratio from 2.31 in 2007 to 2.03 in 2009 and 1.9 so far in 2010. That’s an estimated savings of 1 million kWh annually, which brings with it a 600,000-kg reduction in CO2 emissions. Although CLP both upgraded and expanded data center capacity, it actually reduced floor space from 7,000 to 4,000 square feet, saving an additional $52,000 per year on rent. Its project won Gartner’s inaugural Green Data Centre Award for Asia Pacific in March 2010.
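PUE is total facility energy divided by the energy that actually reaches the IT equipment, so the drop from 2.31 to 1.9 saves 0.41 kWh of facility energy for every IT kWh consumed. The sketch below shows the arithmetic; the 280 kW IT load is a hypothetical figure back-calculated to match the savings cited above, not a number CLP has published.

```python
# Minimal sketch of how a PUE improvement becomes an annual saving.
# PUE = total facility energy / IT equipment energy, so at a constant
# IT load, every IT kWh costs (pue_before - pue_after) fewer facility kWh.
# The 280 kW IT load is a hypothetical assumption for illustration.

HOURS_PER_YEAR = 8760
CO2_KG_PER_KWH = 0.6  # implied by 600,000 kg CO2 per 1 million kWh

def annual_savings_kwh(it_load_kw: float, pue_before: float, pue_after: float) -> float:
    """Facility energy saved per year when PUE drops at a constant IT load."""
    it_kwh = it_load_kw * HOURS_PER_YEAR
    return it_kwh * (pue_before - pue_after)

saved = annual_savings_kwh(it_load_kw=280, pue_before=2.31, pue_after=1.9)
print(f"~{saved:,.0f} kWh/year saved, ~{saved * CO2_KG_PER_KWH:,.0f} kg CO2 avoided")
# ~1,005,648 kWh/year saved, ~603,389 kg CO2 avoided
```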

Having the conversation

Unless they improve the efficiency of their energy consumption, many organizations, no matter how mission-critical their data centers may be, are at risk of being unplugged, according to Bruce Taylor, symposium executive director of the Uptime Institute, which focuses on issues of data center reliability and availability.

“What we observed was that if we kept using power at the current rate, many data centers were simply going to run out of power. If you are in a region that has trouble meeting its power demands on the grid, you might have a problem. It can be really difficult in any metropolitan area today to build a data center that meets the requirements of a bank,” Taylor says.

However, he adds, “It isn’t just power outside the building—in the grid—it’s also power inside the building in the capacity to manage it. Data centers are designed and built with a power capacity in mind, but today’s servers are exceeding that capacity without people recognizing it.”

Unfortunately, he says, the worries that keep CIOs up at night may not mean much to the executives making decisions about facilities and real estate—even if that is the department paying the power bills. “You have a cultural problem in most organizations,” Taylor says. “The people responsible for the strategic planning around IT don’t necessarily have close communications and plan with the people responsible for facilities infrastructure. They are two different organizations with very different missions.”

Meanwhile, huge gains in data center efficiency are being made in both vendor and user communities. On the vendor side, Taylor says, “Everybody is working on it because it’s their lifeblood. HP has a huge data center initiative, so do IBM, Sun/Oracle and Dell. Each has data center services groups working internally on efficiency. Intel has a huge stake in a long-term roadmap for chip development and, in the meantime, has to help its customers be energy efficient and cut carbon.”

For their part, user organizations are where the “rubber really meets the road,” Taylor adds. Amazon and Google are naturally two businesses that have to maximize their data center efficiency. Amazon has turned that expertise into a side business in Web services, while Google’s narrow focus on search and advertising has such specific processing requirements that it can design its own data centers from the chip up. But that doesn’t mean companies like Wal-Mart aren’t optimizing their data centers for the multiple data streams that give them an effective supply chain.

Querying for efficiency at Google

Google claims to operate the world’s most energy-efficient data centers, drawing only half as much electricity as the industry average, with a PUE as low as 1.2 versus an average of about 2.5 at most data centers. By designing its own power components, for example, Google doubled their efficiency. The company also strips unnecessary components, such as graphics processors, from its dedicated servers. Server and rack fans run only as fast as real-time demand requires, holding an optimum temperature rather than a bare minimum. Evaporative cooling towers take over from compressor-driven air conditioners, or from chillers in liquid cooling systems. With those and other measures, Google engineers have designed and built data centers that operate at an average energy-weighted overhead of only 19 percent, compared to the EPA’s estimated industry average of 96 percent. Obviously, that has a huge impact not only on the company’s operating expenses, but also on its environmental footprint.
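The PUE and overhead figures are two views of the same quantity: overhead is the share of facility energy that never reaches the IT equipment, so overhead = PUE - 1. A quick sanity check of the numbers quoted above (noting that Google’s 19 percent is energy-weighted across its facilities, and that the EPA’s 96 percent corresponds to a PUE near 2.0):

```python
# Sanity check: overhead is the share of facility energy that never
# reaches IT equipment, so overhead = PUE - 1.

def overhead_pct(pue: float) -> float:
    """Non-IT energy (cooling, power conversion losses) as a percentage."""
    return (pue - 1.0) * 100.0

print(f"PUE 1.2 -> {overhead_pct(1.2):.0f}% overhead")   # ~20%, near the 19% cited
print(f"96% overhead -> PUE {1.0 + 0.96:.2f}")           # the EPA average, about 2.0
print(f"Energy per unit of IT work vs. a PUE-2.5 site: {1.2 / 2.5:.0%}")  # about half
```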
