A new buzzword, “Green”, has been on everyone's tongue recently, the result of a growing trend all over the world. For years, groups like Clean Air Watch and the Sierra Club have advocated a change in modern society's habits in favor of a cleaner and “greener” Earth. The industrial world is embracing this trend with products such as the hybrid car, and even Capitol Hill is moving toward change with U.S. House Speaker Nancy Pelosi's plan to “green” the Capitol complex.
While “Green” carries a specific connotation of minimizing energy consumption and/or carbon emissions, it also makes superb business sense in the IT community. IT systems, and data centers in particular, are significant consumers of electricity. The United States Environmental Protection Agency (EPA) has stated that data centers consumed 59 billion kilowatt-hours (kWh) in 2006, worth $4.1 billion, of which the federal government is responsible for 10%, and consumption is estimated to grow to 103 billion kWh by 2011. For many data center managers, the energy bill for operating the equipment has made power consumption a major concern. Servers and storage have developed into very power-hungry elements of data centers. Thus data center managers are beginning to embrace “Green IT”, and “Green” data centers have become a significant factor in future data center design.
Procuring power-efficient hardware is not just a smart business move for companies trying to save money; it will soon become a mandatory shift as laws are passed to force such a change. Europe has already seen signs of this shift with the European Commission publishing Directive 2005/32/EC on the eco-design of Energy-using Products (EuP), and with the recycling regulations that are already in place.
People around the world have grown socially aware, questioning the morality of political, social, and business decisions. Thanks to the Internet, information has become easily accessible to everyone, which has made people more aware of their surroundings. Businesses and agencies are thus under a magnifying glass, and every move they make is judged by society. They therefore do not want to conduct business in a manner that projects a bad image. Since the hot topic of the day is being environmentally friendly, it is crucial that companies adopt the “Green” attitude.
In the United States, this shift has so far been fueled by a society that advocates for it, but government regulation will soon become another driving force for IT companies to go “Green”. The European Union has already started down that track with the Restriction of Hazardous Substances (RoHS) and Waste Electrical and Electronic Equipment (WEEE) regulations. Although such regulations do not yet exist in the US, the EPA is working to include hardware such as servers, along with power-consumption ratings, in its ENERGY STAR certification program.
As system components become faster and more capable, they also dissipate more heat. Chip manufacturers have focused their designs on functionality and performance rather than on thermal efficiency, which increases heat dissipation while at the same time lowering the maximum acceptable operating temperature of these chips. These characteristics lead to a higher cooling requirement per chip to avoid overheating and damage to the processors. The heat problem translates into a power and efficiency problem: these chips consume more power than their predecessors, so data center managers today can stack only ten servers in a rack that used to hold up to thirty. At the same time, the devices generate more heat, which demands more cooling power. Cooling a data center requires sophisticated and elaborate equipment that itself consumes power, exacerbating the power dilemma.
With the average cost of a kilowatt-hour in the U.S. at around $0.092 in 2007 and $0.0892 in 2006, running a data center can be costly. Cooling accounts for a major portion of the energy bill, second only to the cost of running the equipment itself. The lack of focus on designing efficient data centers has resulted in facilities needing between 0.5 and 1 watt of cooling for every watt of equipment, when managers would ideally like to achieve a 0.3:1 cooling-to-equipment ratio. This makes cooling efficiency a major concern in data center design and makes purchasing energy-efficient, or “green”, hardware compelling.
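As a rough illustration, not drawn from the source, the sketch below estimates the annual electricity bill for a hypothetical 500 kW IT load using the 2007 average rate quoted above and cooling overheads ranging from the typical 0.5-1.0 watt per IT watt down to the 0.3 ideal. The 500 kW load is an assumed example value.

    # Rough annual energy-cost estimate for a hypothetical data center.
    # The 500 kW IT load is an assumed example; the rate and cooling
    # ratios are the figures quoted in the text above.
    HOURS_PER_YEAR = 24 * 365          # 8,760 hours
    RATE_USD_PER_KWH = 0.092           # 2007 U.S. average
    IT_LOAD_KW = 500                   # assumed IT equipment load

    def annual_cost(it_load_kw, cooling_ratio, rate=RATE_USD_PER_KWH):
        """Yearly cost of IT power plus cooling at the given
        cooling-watts-per-IT-watt ratio."""
        total_kw = it_load_kw * (1 + cooling_ratio)
        return total_kw * HOURS_PER_YEAR * rate

    for ratio in (1.0, 0.5, 0.3):
        print(f"cooling ratio {ratio:.1f}:1 -> ${annual_cost(IT_LOAD_KW, ratio):,.0f}/year")

Even at this modest scale, moving from a 1:1 to a 0.3:1 cooling ratio saves on the order of $280,000 a year, which is why the 0.3:1 target matters.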
Data centers' energy bills are rising fast, and they are becoming a budget issue for the entire company. Managers need to find a way to reduce energy costs while maintaining the high levels of performance that business customers require these days.
Processor chips are designed to pack the maximum computing power into the least space possible. This results in more power drawn and more heat dissipated per unit of equipment. That strategy turns out to be less efficient, because neither data centers nor electric utility companies can deliver enough power to the racks housing the hardware, or provide the cooling needed to compensate for the heat produced by the servers, switches, and routers in the data center.
Data center managers have learned through experience that the key to efficiency is not physical space but power consumption. Unfortunately, the chip manufacturing industry focuses its designs mainly on speed rather than power efficiency. Over the years, semiconductor design has favored higher speed at the cost of higher leakage currents. Leakage current is wasted energy flowing through the junctions when the transistor is in the “zero state”. Estimates indicate that leakage accounts for between 18% and 20% of total power consumption in high-end processors.
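To see where leakage fits, the standard first-order CMOS power model (not given in the source, but widely used) splits a chip's draw into a switching term and a leakage term, with leakage contributing the 18-20% share cited above:

    P_{\text{total}} = P_{\text{dynamic}} + P_{\text{leakage}} \approx \alpha C V_{dd}^{2} f + V_{dd} I_{\text{leak}}

where \alpha is the activity factor, C the switched capacitance, V_{dd} the supply voltage, f the clock frequency, and I_{\text{leak}} the aggregate leakage current. Designing for higher speed typically means lower threshold voltages and thinner gate oxides, both of which raise I_{\text{leak}}, which is the tradeoff described above.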
One measure of a data center's efficiency is the number of systems packed into each rack. But an average data center is equipped to handle racks drawing only 5-6 kW of equipment power, plus the equivalent cooling capacity. With the hardware specifications discussed earlier, stacking racks to their full capacity would require approximately 25 to 30 kW per rack, which becomes a design issue and a critical point for vendors and data center managers.
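A back-of-the-envelope sketch, with an assumed per-server draw since the source does not give one, shows how the rack power budget rather than physical space caps density:

    # How many servers fit in a rack when power, not space, is the limit?
    # The 42U rack height and 625 W per-server draw are assumed example
    # values; the 5-6 kW and 25-30 kW budgets come from the text.
    RACK_UNITS = 42              # typical full-height rack (assumed)
    SERVER_WATTS = 625           # assumed draw of one 1U server

    def servers_per_rack(power_budget_kw, server_watts=SERVER_WATTS):
        """Servers a rack can host under a given power budget."""
        return min(RACK_UNITS, int(power_budget_kw * 1000 // server_watts))

    print(servers_per_rack(6))      # 9 servers on a 6 kW budget
    print(servers_per_rack(26.25))  # 42 servers, i.e. ~26 kW to fill the rack

At roughly 625 W per server, a 6 kW rack circuit hosts only nine machines, consistent with the ten-versus-thirty observation earlier, while filling the rack lands in the 25-30 kW range.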
The IT market is very competitive in terms of providing the best services, but what characterizes such a service is the speed and functionality of the equipment rather than component efficiency. Power supplies are the component most often neglected in the name of that competitiveness. The consequence is a data center that consumes more power in power conversion and cooling than the computer systems actually need; much of that power is wasted energy that could be saved by using energy-efficient equipment. There is a classic cost tradeoff between a more efficient (and more expensive) power supply and the energy savings it delivers over its life cycle.
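A minimal sketch of that tradeoff, with the hardware prices, efficiencies, load, and service life all assumed example values (only the electricity rate comes from the figures above):

    # Life-cycle comparison: a cheap 75%-efficient supply vs. a pricier
    # 90%-efficient one. Prices, efficiencies, load, and lifetime are
    # assumed examples; the rate is the 2007 U.S. average from the text.
    RATE_USD_PER_KWH = 0.092
    HOURS_PER_YEAR = 24 * 365

    def lifecycle_cost(unit_price, efficiency, dc_load_watts=400, years=4):
        """Purchase price plus the cost of the AC power drawn to deliver
        a constant DC load over the supply's service life."""
        ac_kw = dc_load_watts / efficiency / 1000
        return unit_price + ac_kw * HOURS_PER_YEAR * years * RATE_USD_PER_KWH

    cheap = lifecycle_cost(unit_price=50, efficiency=0.75)
    efficient = lifecycle_cost(unit_price=120, efficiency=0.90)
    print(f"75% supply: ${cheap:,.0f}   90% supply: ${efficient:,.0f}")

Under these assumptions the more efficient supply comes out roughly $215 cheaper per server over four years, before counting the extra cooling needed to remove the conversion loss.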
In this discussion, we will present the techniques and technologies commonly used in today's data centers and show why they are inefficient in terms of energy consumption. Once the problem has been characterized, we will propose solutions for deploying and operating an energy-efficient data center.