Virtualization vendors may tout the potential of putting 20, 50 or even 100 virtual machines on a single physical server, but IT managers and industry experts say such ratios are dangerous in a production environment, leading to performance problems or, worse, outages.
In a test and development environment, companies can put 50 virtual machines on a single physical host, said Andi Mann, vice president of research at Enterprise Management Associates (EMA), a research firm in Boulder, Colo. But when it comes to mission-critical and resource-intensive applications, the number of virtual machines drops below 15.
In fact, in a January 2009 survey of 153 organizations with more than 500 end users, EMA found an average consolidation ratio of just 6:1 for applications such as ERP, CRM, e-mail and databases.
The gap between expectation and reality, whether the expectation comes from vendor hype or an internal ROI projection, can cause real problems for IT teams. That's because the consolidation ratio affects every aspect of a virtualization project: budget, capacity and buy-in from business managers. "If you implement these virtualization projects with false expectations, you'll get into trouble," Mann says.
Indeed, overestimating the physical-to-virtual server ratio means buying more server hardware, power, cooling and rack space than budgeted, said Charles King, president and chief analyst at Pund-IT, a consultancy in Hayward, Calif. All of that costs money. Worse, poorly performing applications affect users. If a business believes it will need only 10 servers at the end of a virtualization project but actually needs 15, that significantly raises the overall cost of the consolidation and blows the budget, which is no small thing in the current economy. Here are some tips analysts offer for avoiding overloaded servers.
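The budget impact of a mis-estimated ratio is easy to quantify. The sketch below is illustrative only; the VM count and per-host cost are assumptions, not figures from the article. It shows how a hoped-for 15:1 ratio that turns out to be 10:1 in production changes the host count, echoing King's 10-versus-15-server example:

```python
import math

def hosts_needed(vm_count: int, vms_per_host: int) -> int:
    """Physical hosts required to run vm_count VMs at a given ratio."""
    return math.ceil(vm_count / vms_per_host)

vms = 150              # workloads to consolidate (assumed figure)
cost_per_host = 8000   # hardware cost per server in USD (assumed figure)

planned = hosts_needed(vms, 15)  # the expectation: 15 VMs per host
actual = hosts_needed(vms, 10)   # what production workloads sustain: 10:1

extra_hosts = actual - planned
print(f"planned hosts: {planned}, actual hosts: {actual}")
print(f"budget overrun: {extra_hosts} extra hosts = ${extra_hosts * cost_per_host}")
```

The overrun compounds beyond hardware, since each unplanned host also adds power, cooling and rack space.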
Mission-critical applications compete for server resources
So why the disconnect between virtualization expectations and reality? The key, King says, is that many enterprises have focused on virtualizing low-end, low-utilization, low-input/output applications such as test, development, logging, and file and print servers. With network-edge and noncritical applications that don't require high availability, you can indeed put dozens of virtual machines on a single server.
Bob Gill, managing director of server research at the consulting firm TheInfoPro, agrees. The early candidates, he said, were systems running at less than 50% utilization, along with applications that no one would miss if they went down for an hour.
That's not the case when virtualization is applied to mission-critical, resource-intensive applications. Some say virtualization vendors haven't explained this reality to users.
"Once you start to deal with applications with higher utilization, greater security risks, and increased performance and availability requirements, the consolidation percentage drops significantly," King says. These applications compete for bandwidth, memory, processors and storage. Even on a server with two quad-core processors, heavily loaded virtualized applications will run into network bottlenecks and performance problems as they scramble for the same server's pool of resources.
Start with a capacity analysis
To solve this problem, IT teams must reset their thinking and lower everyone's expectations. The best place to start is a capacity analysis, says Kris Jmaeff, an information security systems specialist at Interior Health, one of five health authorities in British Columbia, Canada.
Four years ago, Interior Health's data center was growing fast. There was heavy demand to virtualize a production environment of 500 servers that supported many services, including DNS, Active Directory, Web servers, FTP, and a host of production application and database servers.
Before starting the virtualization effort, Jmaeff first used VMware tools to perform an in-depth capacity analysis that monitored server hardware utilization. (Companies such as Cirba, Hewlett-Packard, Microsoft, PlateSpin and Vizioncore offer similar tools.) Rather than looking at each piece of hardware individually, Jmaeff treated everything as a pool of resources: capacity planning, he says, should focus on the resources each server can contribute to the virtual pool.
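Jmaeff's pool-oriented view can be approximated with a simple aggregation: instead of asking whether a given box can host a given VM, sum what every server contributes to the pool and size against measured per-VM demand. A minimal sketch, in which the host specs, per-VM demand figures and headroom factor are all illustrative assumptions rather than Interior Health's actual numbers:

```python
# Treat candidate hosts as one resource pool, per Jmaeff's approach.
# All figures below are illustrative assumptions.
hosts = [
    {"name": "esx01", "cpu_ghz": 2.4 * 8,  "ram_gb": 64},
    {"name": "esx02", "cpu_ghz": 2.4 * 8,  "ram_gb": 64},
    {"name": "esx03", "cpu_ghz": 2.6 * 16, "ram_gb": 128},
]

# Average measured demand per VM, taken from a capacity-analysis phase.
vm_demand = {"cpu_ghz": 1.2, "ram_gb": 3.0}
headroom = 0.75  # reserve 25% of the pool for spikes and failover

pool_cpu = sum(h["cpu_ghz"] for h in hosts) * headroom
pool_ram = sum(h["ram_gb"] for h in hosts) * headroom

# The pool's capacity is bounded by whichever resource runs out first.
max_vms = int(min(pool_cpu / vm_demand["cpu_ghz"],
                  pool_ram / vm_demand["ram_gb"]))
print(f"pool supports about {max_vms} VMs "
      f"({pool_cpu:.0f} GHz, {pool_ram:.0f} GB usable)")
```

Real tools refine this with per-VM peaks and affinity rules, but the principle is the same: size against the pool, not the individual box.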
The team has since consolidated 250 servers, half of the total, onto 12 physical hosts. And although the data center's overall VM-to-host ratio is about 20:1, Jmaeff keeps a lower ratio on hosts that carry more demanding applications, balancing the resource-intensive workloads across hosts.
Jmaeff uses a combination of VMware vCenter and IBM Director to watch each virtual machine for signs of imbalance, such as spikes in memory or processor usage or degraded performance. "We work hard to keep these applications running and adjust our ratios based on server resource demands to create a more balanced workload," he said. If necessary, it is easy to clone a server and quickly spread out an application's workload.
"We're very happy with our ratios, because we did the work of sizing the virtual servers, checking processor and memory workloads and evaluating the workload on each physical server," Jmaeff said.
Continuous monitoring is key
At Network Data Center Host, a Web hosting provider in San Clemente, Calif., the IT team quickly learned that virtualizing mission-critical applications brings surprises that go beyond RAM. "We thought we could put 40 small customers on a physical server based on available RAM," said chief information officer Shaun Retain. "But we found that for high-volume applications, what matters isn't RAM, it's input/output."
As a result, he said, the VM-to-host ratio had to be scaled back from 40:1 to a maximum of 20:1. To help, the team wrote a control-panel program that lets customers log in and see how their virtual machines are handling reads, writes, disk usage and other performance-affecting activity. In addition, NDCHost uses internally developed monitoring tools to ensure that a traffic spike on a single virtual machine doesn't compromise the ratio.
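NDCHost's lesson, that I/O rather than RAM caps the ratio for high-volume applications, generalizes to sizing against whichever resource is scarcest. A hypothetical sketch, not NDCHost's actual tooling; the host capacities and per-customer demand figures are assumptions chosen to mirror the 40:1-to-20:1 correction:

```python
# Size a shared host by its scarcest resource, not just RAM.
# Capacities and per-customer demands are illustrative assumptions.
host = {"ram_gb": 64, "iops": 5000}
per_customer = {"ram_gb": 1.5, "iops": 250}

# How many customers each resource alone could support.
limits = {res: int(host[res] // per_customer[res]) for res in host}
max_customers = min(limits.values())
bottleneck = min(limits, key=limits.get)

print(f"limits by resource: {limits}")
print(f"host supports {max_customers} customers; bottleneck: {bottleneck}")
```

A RAM-only calculation here would promise 42 customers per host; the I/O budget allows only 20, which is the number that matters.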
King, of Pund-IT, says companies should also rigorously test mission-critical applications before and after virtualization deployment, making sure each application's memory and network bandwidth use remains consistently stable. For example, if you know an application spikes at certain times of the year, factor that in when sizing the virtual machines and the host.
Testing also helps the IT team determine which virtual workloads coexist best on a single physical host. "You have to make sure a physical server isn't running multiple virtual machines with the same workload," said Nelson Ruest, co-founder of Resolutions Enterprises, a consultancy in Victoria, British Columbia, and co-author of Virtualization: A Beginner's Guide. Otherwise, if they're all Web servers, they will compete for the same resources and undermine your consolidation. Instead, IT staff should mix workloads and balance them against peak usage times and resource requirements.
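Ruest's advice amounts to an anti-affinity placement rule: deal VMs of each workload type across hosts so no host concentrates identical workloads. A minimal sketch; the VM names, workload types and host count are assumptions for illustration:

```python
from collections import defaultdict
from itertools import cycle

# Spread VMs so that no host stacks up a single workload type.
# Names, types and host count are illustrative assumptions.
vms = [("web1", "web"), ("web2", "web"), ("web3", "web"),
       ("db1", "db"), ("db2", "db"), ("mail1", "mail")]
host_names = ["hostA", "hostB", "hostC"]

def place(vms, host_names):
    """Round-robin each workload type across hosts, so identical
    workloads land on different hosts whenever possible."""
    by_type = defaultdict(list)
    for name, wtype in vms:
        by_type[wtype].append(name)
    placement = defaultdict(list)
    for members in by_type.values():
        for name, host in zip(members, cycle(host_names)):
            placement[host].append(name)
    return dict(placement)

print(place(vms, host_names))
```

With these inputs, each host ends up with at most one VM of any type, so the Web servers never contend with each other for the same resource pool. A production scheduler would also weigh peak usage times, as Ruest suggests.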
More virtualization management tips
Ruest also warns IT teams not to forget the extra headroom that host servers need so they can not only support their own virtual machines but also accept workloads from a failed host. If you run all your servers at 80% capacity, you can't support the necessary redundancy.
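Ruest's headroom rule can be made concrete with a quick N+1 check: for each host that might fail, ask whether the survivors can absorb its load without exceeding capacity. The utilization figures below are assumptions used to illustrate why 80% across the board leaves no room:

```python
# N+1 headroom check: can the surviving hosts absorb a failed
# host's load? Utilization figures are illustrative assumptions.
def survives_one_failure(utilizations):
    """True if, whichever single host fails, its load can be spread
    evenly across the survivors without pushing any past 100%."""
    for i, failed in enumerate(utilizations):
        survivors = [u for j, u in enumerate(utilizations) if j != i]
        share = failed / len(survivors)
        if any(u + share > 1.0 for u in survivors):
            return False
    return True

print(survives_one_failure([0.8, 0.8, 0.8]))  # 80% everywhere: no headroom
print(survives_one_failure([0.6, 0.6, 0.6]))  # 60% leaves room for failover
```

Three hosts at 80% fail the check: a failed host's 80% load split two ways pushes each survivor to 120%. At 60%, the survivors land at 90% and the cluster rides out the failure.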
Ruest says most organizations find that the capacity-planning and testing phases take at least a month to determine the proper ratio of virtual machines to physical servers for their environment.
Finally, EMA's Mann advises IT teams to seek out companies with similar application environments, whether at big annual conferences such as VMware's VMworld or Citrix's Synergy, or through local user groups. Most attendees are willing to share information about their environments and experiences. Instead of relying on vendor benchmarks, you get real-world examples of what works and what doesn't, and with them a better chance of setting realistic expectations.
(Author: anon Editor: yuping)