Today, four emerging trends, cloud computing, big data, social networking, and the mobile Internet, are developing rapidly and greatly accelerating the pace of enterprise IT construction. But while enterprises enjoy the business opportunities and conveniences these trends bring, they also have to face the explosive growth of IT infrastructure such as servers: server cabinets keep multiplying, machine-room floor space keeps expanding, investment in UPS power, computer-room air conditioning, and related facilities keeps rising, energy consumption climbs quickly, and managing server hardware and software grows ever more complex.
Building a data center is easy; keeping it fed is not. Assuming a server draws 200 watts, it consumes roughly 5 kWh per day. A typical data center houses at least hundreds of servers, and a large one thousands or even tens of thousands, so the electricity these power-hungry machines draw adds up to an alarming figure. Statistics about servers and data centers all point to the same growing problem: power consumption.
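As a rough sanity check on those figures, here is a minimal sketch; the 200 W draw and the fleet sizes mirror the text but are treated as illustrative assumptions rather than measurements:

```python
# Back-of-the-envelope estimate of server fleet energy use.
SERVER_POWER_W = 200   # assumed average draw per server (from the text)
HOURS_PER_DAY = 24

kwh_per_server_per_day = SERVER_POWER_W * HOURS_PER_DAY / 1000
print(kwh_per_server_per_day)  # 4.8 kWh/day, roughly the "5 kWh" above

# Scale up to hypothetical fleets of the sizes mentioned.
for fleet in (100, 1_000, 10_000):
    annual_kwh = kwh_per_server_per_day * 365 * fleet
    print(f"{fleet:>6} servers: {annual_kwh:,.0f} kWh/year")
```

Even at a modest 100 servers, that is over 175,000 kWh a year before cooling and other overhead are counted.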
Reports indicate that one large domestic telecom company spends as much as 520 million yuan a year on data center energy alone, across its nationwide data centers. Low energy efficiency and high power consumption, with electricity bills exceeding infrastructure construction costs and ranking second among enterprise IT expenses, have long been among the data center's chief concerns. How to reduce PUE and TCO has become a core question in building cloud computing centers and cloud data centers.
Virtualization is the most effective means of saving energy
Cutting data center energy consumption starts with doing the math, and the prerequisite for that is understanding where the energy actually goes.
In general, data center energy consumption breaks down as follows: servers and other IT equipment account for about 50%, refrigeration and air-handling equipment about 40%, and UPS, lighting, and other equipment the rest. Naturally, energy saving should start by cutting into the biggest item: reducing the energy consumed by servers and other IT equipment.
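That breakdown also pins down the facility's PUE (power usage effectiveness), defined as total facility energy divided by IT equipment energy. A minimal sketch using the 50/40/10 split above:

```python
# PUE (power usage effectiveness) = total facility energy / IT energy.
it_share = 0.50        # servers and other IT equipment
cooling_share = 0.40   # refrigeration and air handling
other_share = 0.10     # UPS, lighting, and the rest

pue = (it_share + cooling_share + other_share) / it_share
print(pue)  # 2.0 -- one watt of overhead for every watt of IT load
```

By this split, every watt saved on the IT side also saves roughly a watt of cooling and facility overhead, which is why IT equipment is the place to start.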
At present, there are roughly three ways to reduce server energy consumption: using low-power equipment, raising equipment operating temperatures, and virtualization consolidation. Let us examine each in turn:
Low-power devices: This is the most direct method, because even a small reduction in each server's power draw adds up to considerable savings across a large data center. In recent years, chip makers and server vendors have gone to great lengths to reduce server power consumption, optimizing system design, CPUs, power supplies, cooling, and software control, with notable results. A decade ago servers often had a "power hog" image; today a server may draw no more energy than an ordinary PC.
But the pursuit of low power in the data center is endless; the boss keeps asking whether the electricity bill can go a little lower still. This has CIOs racking their brains, and the micro-server has jumped into the spotlight. 2013 was a year of rapid development for micro-servers: Intel launched the low-power Atom processor codenamed "Avoton", Hewlett-Packard released its "Moonshot" micro-servers, and ARM began making inroads into the data center. This year, 64-bit ARM processors will be officially released, and the micro-server market will gradually flourish.
But the micro-server is not a general-purpose server type. Its low power, high density, and high parallelism make it better suited to lightweight, parallel workloads such as mobile web serving and cloud computing, and less suited to heavier transactional workloads such as databases, ERP, and CRM. Intel once predicted the share micro-servers could capture of the overall server market: a huge market in absolute terms, but a small one next to the broader general-purpose market.
Furthermore, although the industry has high hopes for ARM, it is not a commodity that every company can afford to play with. The newly arrived ARM micro-servers lack a mature ecosystem and solid software support, so users must undertake some development and porting work themselves. ARM is therefore better suited to companies with large-scale demand, strong R&D capability, and deep pockets, such as large Internet companies.
So although cutting the energy consumption of servers and other IT equipment is the most direct way to reduce a data center's overall energy use, it is a long-term process that requires the joint effort of chip vendors, server vendors, and users.
Next, look at raising the operating temperature. By appropriately raising the temperature of the data center, energy consumption can be reduced by about 4%. This is a viable method whose core idea is to adapt the equipment to the environment rather than the environment to the equipment: keeping a data center running at a lower temperature such as 18–20°C, instead of at 25°C, means building more refrigeration equipment, or consuming more electricity, just to maintain the low temperature.
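One common reading of that figure is that each additional degree saves roughly 4% of cooling energy. A minimal sketch under that assumption, with illustrative setpoints:

```python
# Cooling-energy saving from a higher room setpoint, assuming roughly
# 4% of cooling energy saved per additional degree Celsius.
SAVING_PER_DEGREE = 0.04

def cooling_saving(old_temp_c: float, new_temp_c: float) -> float:
    """Fraction of cooling energy saved when raising the setpoint."""
    degrees_raised = new_temp_c - old_temp_c
    return 1 - (1 - SAVING_PER_DEGREE) ** degrees_raised

print(f"{cooling_saving(20, 25):.0%}")  # ~18% of cooling energy saved
```

Under these assumptions, moving the room from 20°C to 25°C trims nearly a fifth of the cooling bill, which is why vendors have pushed hardware that tolerates warmer rooms.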
Today, Super Cloud, Dell, and other server vendors are rolling out high-temperature server products that can run at higher ambient temperatures, thereby reducing the data center's cooling energy. This is a good idea, but it has limitations. A brand-new data center can buy a complete set of high-temperature servers; although the upfront hardware cost is higher than for ordinary servers, the long-run energy savings are obvious and operating costs will be lower. But an existing data center usually contains servers of different types, eras, and architectures. If high-temperature servers are mixed in with these machines, the advantage disappears, because once the room temperature is raised, the original servers may no longer function properly.
Finally, virtualization consolidation. Virtualization has become almost the inevitable path for data centers, because its advantages are obvious. Statistics show that most servers use only 10–30% of their processing capacity, and most run at less than 40% load; much of their capability goes unused, wasting a large amount of server resources. Meanwhile, as server counts grow, management becomes harder, unplanned application downtime increases as applications multiply, disaster recovery and data backup scenarios become ever more complex, and installation and configuration work becomes slower and more cumbersome.
The advantage of virtualization is the ability to consolidate many low-utilization servers onto a handful of machines, reducing the number of servers in the data center and thereby saving energy. Of course the benefits go beyond that: virtualization pools resources, which is the first step toward an enterprise private cloud, and provides computing and storage resources flexibly. By centralizing resources, virtualization also makes resource management more secure and more manageable.
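A minimal sketch of the consolidation arithmetic, using utilization figures in the range cited above; the fleet size, target utilization, and per-server power are illustrative assumptions:

```python
import math

# Consolidation arithmetic: many low-utilization servers onto few hosts.
servers = 200              # existing physical servers (illustrative)
avg_utilization = 0.15     # within the 10-30% range cited above
target_utilization = 0.60  # assumed safe ceiling per virtualization host

total_load = servers * avg_utilization          # in whole-server units
hosts_needed = math.ceil(total_load / target_utilization)
print(hosts_needed)        # 50 hosts -- a 4:1 consolidation ratio

POWER_PER_SERVER_W = 200
saved_w = (servers - hosts_needed) * POWER_PER_SERVER_W
print(f"~{saved_w / 1000:.0f} kW of server load eliminated")
```

Under these assumptions, three quarters of the physical machines, and their power draw, simply disappear, before even counting the matching reduction in cooling overhead.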
Compared with the first two approaches, virtualization is a more realistic path to data center energy savings. It applies to new and old data centers alike, and it can support almost every type of workload, from Internet applications to enterprise-critical applications. Virtualization does, however, carry an inherent security concern: putting all the eggs in one basket worries many people, and as a single server hosts more and more virtual machines and business-critical workloads are gradually virtualized, that worry will only intensify.
What kind of server is suitable for virtualization?
So, what kind of server should you choose as the hosting platform for virtualization?
Virtualization places heavy demands on server memory, network I/O, and CPU core counts, and a common view is that multi-socket servers are better suited to it. HP's ProLiant DL580 G7, for example, supports up to four eight-core Xeon 7500-series processors (24 MB L3 cache) for excellent processing performance; its 64 DDR3 DIMM slots allow a maximum memory capacity of 2 TB, accommodating more virtual machines; and up to 11 PCIe 2.0 slots provide ample I/O bandwidth. This lets the server meet the workload requirements of most enterprise-critical application environments, including memory- and CPU-hungry databases, business applications, and virtualization, as well as space-constrained enterprise data centers.
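As a rough illustration of how such specifications translate into VM density, here is a sketch of capacity planning on a host of this class; the per-VM allocation, hypervisor reserve, and vCPU overcommit ratio are hypothetical assumptions, not vendor figures:

```python
# Rough VM-density estimate for a 2 TB, 32-core host of this class.
host_memory_gb = 2048        # 64 DIMM slots populated to the 2 TB maximum
host_cores = 32              # 4 sockets x 8 cores
hypervisor_reserve_gb = 64   # assumed memory overhead for the hypervisor

vm_memory_gb = 32            # assumed per-VM memory allocation
vcpus_per_vm = 2             # assumed vCPUs per VM
vcpu_overcommit = 4          # assumed vCPU:physical-core ratio

by_memory = (host_memory_gb - hypervisor_reserve_gb) // vm_memory_gb
by_cpu = host_cores * vcpu_overcommit // vcpus_per_vm
print(min(by_memory, by_cpu))  # 62 -- memory is the binding constraint here
```

Even with generous CPU overcommit, memory runs out first under these assumptions, which is why large DIMM capacity is the headline feature for virtualization hosts.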
Of course, hardware specifications are only one side; the rock-solid stability of the virtualization platform matters most, and that requires the server platform to offer strong RAS (reliability, availability, serviceability) characteristics. Again taking the DL580 G7 as an example: it makes extensive use of modular, redundant, hot-swappable design, which, combined with the RAS features of the Xeon E7 platform, effectively guarantees server reliability. And for large databases with demanding 24x7 non-stop, heavy-load requirements, multiple DL580 G7s can be built into a database cluster, greatly improving parallel processing performance, availability, and scalability, while avoiding the drawbacks of the traditional two-node active/standby scheme: high cost, standby resources sitting idle most of the time, and user services forced to pause during host failover.
Today, virtualization has spread across the data center: server virtualization, storage virtualization, network virtualization, and I/O virtualization together constitute a new trend, "software defined". The "software defined" movement is setting off a new revolution; in the near future, software may well overtake hardware as the dominant force in the data center, and the software-defined data center may become the next direction and trend in data center evolution.
(Author: Bit Net; Editor: Li Xiangjing)