VMware ESX Server Performance Optimization

VMware ESX Server is the most popular virtualization product for the Intel platform in the current server market. Its biggest advantage compared with other virtualization products, such as GSX Server and Microsoft Virtual Server, is that it greatly reduces the resources consumed by the host system. The ESX Server kernel runs directly on the hardware, which greatly improves system stability and performance.
ESX Server is also well suited to enterprise applications because it supports important redundancy features such as multipathing and NIC bonding, and because it is complemented by tools such as P2V, VMotion, and VirtualCenter.
Note: This section is written for ESX Server 2.1. These methods do not necessarily apply to other versions of ESX Server.
Introduction
Tuning the performance of a large ESX Server system can be a daunting task. ESX Server can place a very heavy load on the hardware, and depending on the amount of load and the number of virtual machines running, some server subsystems may become performance bottlenecks. It is therefore very important to evaluate the design and configuration of the hardware to ensure that no subsystem becomes a bottleneck.
It is important to understand that ESX Server simply virtualizes your workload, which means you need to size the ESX Server system for the workloads you plan to consolidate. Virtualizing many lightly loaded servers or terminal servers has a large impact on how you should configure the system. It is also necessary to understand that ESX Server virtualizes the underlying hardware, so you also need to adapt the system to the different guest operating systems it will host.
Understanding VMware Performance Concepts
Before discussing ESX Server performance tuning, it is important to understand the impact of virtualization on performance. ESX Server virtualizes the hardware and provides an environment for running multiple guest operating systems on one physical machine.
By default, every virtual machine has equal rights to hardware resources such as disks and the network. You typically do not want one virtual machine to consume all the resources available to the others. For example, consider a server connected to SAN storage that provides 250 MBps of bandwidth. After ESX Server is installed, a throughput test in a single virtual machine typically reaches only 25-50 MBps. Does such a small figure mean that ESX Server performs poorly? No: create another virtual machine and run the test in both at the same time, and each virtual machine still reaches 25-50 MBps. You can continue this test until the ESX Server kernel or the SAN itself becomes the bottleneck. As this example shows, ESX Server is designed for parallel scalability rather than for the peak performance of a single virtual machine. If an application truly requires very high performance from one subsystem, it should not be placed on ESX Server. However, if you have many server applications, none of which has very high I/O or CPU demands, ESX Server can save you considerable hardware and software cost, and some applications run with performance comparable to running natively.
Hardware layout
When configuring ESX Server on IBM xSeries hardware, you have a rich choice of options, from 2-way machines up to 16-way machines with 64 GB of memory connected to SAN storage. A very capable solution can therefore be built, although the hardware configuration should of course be driven by your own requirements.
For ESX Server, the hardware subsystems most prone to bottlenecks are memory, disk, and network.
Typical applications running in virtual machines usually do not cause CPU bottlenecks on ESX Server. To avoid memory bottlenecks, try to select a system with a fast front-side bus: ESX Server performs frequent CPU-to-memory and I/O-to-memory operations, all of which use the front-side bus. In addition, install as much memory as practical to avoid the performance impact of swapping. Note that the amount of memory required is determined by the applications running in each virtual machine.
Tip: On an x445 or x440, configure the same amount of memory on each SMP expansion module; unbalanced memory can hurt performance.
Just as important as memory is the tuning of the disk subsystem, which has an especially large impact on ESX Server. The disks hosting the ESX Server kernel, the kernel image, and the console operating system files should be protected with RAID 1.
We do not recommend using the onboard LSI controller for RAID; ServeRAID 5i, 6i, or 6M cards are recommended instead, because the onboard LSI controller has a slow CPU, no cache, and no battery protection, and is therefore not suitable for production environments.
For VMFS storage, we recommend using the best storage devices available. Build the RAID array from many disks (the more spindles, the better the performance) and try to use 10K or 15K RPM drives. If using SCSI, use Ultra320 disks with a high-performance RAID controller, such as a ServeRAID 6M or a DS4300 Fibre Channel controller. Use RAID 10 and configure the largest stripe size available: 64 KB for ServeRAID, 512 KB or 1 MB for Fibre Channel.
The VMFS file system uses a block size of 1 MB, so try to match the stripe size to it. If you use Fibre Channel storage, use a fast HBA such as an FC2-133. Configuring the SAN itself is already a complex task, but try to give ESX Server its own storage partitions, separate from other hosts. In general, tuning the disk subsystem is a complex and time-consuming task that requires analyzing utilization and system load.
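For reference, the block size of a VMFS volume is chosen when the file system is created from the service console. The following is only a minimal sketch: the vmkfstools options and the vmhba device name shown here are assumptions for ESX 2.x and should be verified against the documentation for your release before use.

# Create a VMFS2 file system with a 1 MB block size on a SAN partition.
# The target name (vmhba0:0:0:5) is an example only - substitute the
# adapter:target:LUN:partition that matches your own storage layout.
vmkfstools -C vmfs2 -b 1m vmhba0:0:0:5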
Tip: If you are interested in deploying ESX Server on a DS4000, refer to the IBM Redbook SG24-6434-00.
For the network configuration, we recommend at least two Gigabit NICs that are not shared with the console system. The configuration should match the network layout; for example, if the switch is only 10/100 Mbps, configuring multiple Gigabit NICs is pointless. Depending on the layout, it is a good idea to connect the ESX Server to a high-speed switch and to use the NIC bonding feature.
Size the CPU subsystem for the combined requirements of all virtual machines plus an additional 10%-20%. ESX Server supports up to 16 CPUs and can easily be deployed on an x445. Note that you should consider not only performance tuning but also redundancy and other issues.
Tip: ESX Server supports Hyper-Threading. With version 2.1.2 it is recommended to enable Hyper-Threading, but with 2.1.0 it is strongly recommended to disable it, in both the BIOS and the ESX Server kernel.
VMware Disk Partitioning
In ESX Server, you need to explicitly separate the different types of disk storage:
Storage for the ESX Server kernel, swap file, log files, and console operating system, and storage for the virtual machines themselves.
The virtual machines run on a VMFS file system. The layout created by the ESX Server installer by default is generally appropriate and does not require much optimization. The following is a typical disk layout (with SAN storage):
/dev/sda1  *      1      6     47974+      Linux /boot
/dev/sda2         7    325   2552319       Linux /
/dev/sda3       326    516   1528191       Linux swap
/dev/sda4       517   4442  31411926   f   Extended
/dev/sda5       517   4429  31307881+  fb  VMFS
/dev/sda6      4430   4442    103981+  fc  VMFS (core dump and swap)
/dev/sdb1         1  17681 142022601   fb  VMFS (virtual machines)
Note that the console swap partition is sized at twice the maximum recommended console memory. This allows you to add more memory to the console later; alternatively, you can size this partition at twice the memory actually assigned to the console.
On external storage, if the storage device is very large, it is advisable to configure more than one VMFS file system. Some performance may be lost, but if one VMFS is damaged, the others are still usable. Note that if multiple VMFS volumes are created on one small disk, system performance degrades considerably because the heads must move back and forth between the VMFS volumes.
Adjusting the console system
Because the console operating system is a very small Red Hat system, there is little room for tuning it. Generally speaking, the console system does not need to be adjusted: it already runs in runlevel 3 and only the necessary services are started.
The only small performance improvement available is to disable some virtual consoles. You can comment out tty4, tty5, and tty6 in /etc/inittab:
Example (/etc/inittab):
# Run gettys in standard runlevels
1:2345:respawn:/usr/sbin/vmkstatus tty1
2:2345:respawn:/sbin/mingetty tty2
3:2345:respawn:/sbin/mingetty tty3
#4:2345:respawn:/sbin/mingetty tty4
#5:2345:respawn:/sbin/mingetty tty5
#6:2345:respawn:/sbin/mingetty tty6
Note that installing the IBM Director agent on the console system can affect performance because it is Java-based. If you must install the Director agent, add at least 50 MB of memory to the console system, and add memory accordingly for any other agents. Although there are not many more parameters to configure for the console system, there are some things to know if the ESX Server hosts more than about 60 virtual machines or is under heavy load. In that case the console system becomes very slow, especially when serving the web interface. You can then raise the console memory to 500-800 MB in the management interface, taking the impact of IBM Director into account.
If the VMware management interface is still slow, change the priority of the httpd process. Log in to the console system and find the process ID:
ps -axw | grep http
In the output you can see that the PID of the httpd process is 1431; now you can raise its priority:
renice -10 -p 1431
# ps -axw
  PID TTY      STAT   TIME COMMAND
    1 ?        S      0:03 init
 1431 ?        S      0:00 /usr/lib/vmware-mui/apache/bin/httpd -DSSL
      -DSSL_ONLY -DSTANDARD_PORTS -DESX -d /usr/lib/vmware-mui/apach
 1166 pts/0    R      0:00 ps -axw
# ps -p 1431
  PID TTY          TIME CMD
 1431 ?        00:00:00 httpd
# renice -10 -p 1431
1431: old priority 0, new priority -10
This raises the priority of httpd. (A positive value such as 15 would instead lower its priority.)
In addition to giving httpd more CPU time, you can adjust the memory reserved for the web service. If there are around 80 virtual machines, raise the default memory reservation from 24 MB. The shared memory is adjusted in the configuration file /etc/vmware/config, for example from 24 MB to 28 MB as follows:
control.fullpath = "/usr/bin/vmware-control"
wizard.fullpath = "/usr/bin/vmware-wizard"
serverd.fullpath = "/usr/sbin/vmware-serverd"
serverd.init.fullpath = "/usr/lib/vmware/serverd/init.pl"
# The setting below increases the memory shares available for the httpd process
mui.vmdb.shmSize = "29360128"
The renice command takes effect immediately, but the change to the shared memory size requires the httpd service to be restarted:
killall -HUP httpd
To ensure that you can still log in to the console under heavy load, it is recommended to increase the VMware connection timeout from 30 seconds to a higher value. This can be done in /etc/vmware/config:
vmware.fullpath = "/usr/bin/vmware"
control.fullpath = "/usr/bin/vmware-control"
wizard.fullpath = "/usr/bin/vmware-wizard"
serverd.fullpath = "/usr/sbin/vmware-serverd"
serverd.init.fullpath = "/usr/lib/vmware/serverd/init.pl"
mui.vmdb.shmSize = "29360128"
# The setting below increases the login timeout to 2 minutes
vmauthd.connectionSetupTimeout = 120
You can also increase the memory limits of vmware-serverd. Because this operation involves the VMware threads, you need to stop all virtual machines to complete it. Modify /etc/vmware/config to set the limits, for example a soft limit of 64 MB and a hard limit of 96 MB:
vmware.fullpath = "/usr/bin/vmware"
control.fullpath = "/usr/bin/vmware-control"
wizard.fullpath = "/usr/bin/vmware-wizard"
serverd.fullpath = "/usr/sbin/vmware-serverd"
serverd.init.fullpath = "/usr/lib/vmware/serverd/init.pl"
mui.vmdb.shmSize = "29360128"
vmauthd.connectionSetupTimeout = 120
# The line below will alter the soft memory limit
vmserverd.limits.memory = "65536"
# The line below will alter the hard memory limit
vmserverd.limits.memhard = "98304"
When you have finished editing, restart the vmware-serverd service:
shutdown -r now
or
killall -HUP vmware-serverd
Note: All virtual machines must be shut down beforehand.
VMware Kernel Tuning
The VMware kernel has a number of options whose adjustment can significantly affect overall system performance. The most important ESX Server kernel tuning parameters are described below.
Memory page sharing
ESX Server uses an algorithm that shares identical memory pages between virtual machines, which reduces the overall memory usage of the system. Page sharing has only a small impact on the system and can even speed up page lookups. How much you benefit from page sharing depends heavily on the workload.
We recommend leaving page sharing enabled, but if you must disable it, you can modify the /etc/init.d/vmware file and add -m before -n, as in the example below:
Disabling page sharing - /etc/init.d/vmware
"cd "$vmdb_answer_SBINDIR" && \
 "$vmdb_answer_SBINDIR"/"$kernloader" -m -n "$maxCPU" \
 "$vmdb_answer_LIBDIR"/"$kernel" || exit 1"
Disabling page sharing increases the amount of memory required (the increase is smaller for Linux guests than for Windows guests).
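To see how much memory page sharing is actually reclaiming before deciding to disable it, the VMware kernel's memory statistics can be read from the service console. This is only a minimal sketch; the exact /proc/vmware layout is assumed and may differ between ESX 2.x builds.

# Display VMkernel memory statistics, including shared-page counters,
# from the console operating system (path assumed for ESX 2.x)
cat /proc/vmware/mem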
Setting the network speed
It is best to change the negotiation mode of all NICs on the ESX Server from auto-negotiation to a fixed speed with full duplex, and to configure the corresponding switch ports to match.
You can set the speed of the console NIC via /etc/modules.conf:
Setting the network adapter speed - /etc/modules.conf
alias parport_lowlevel parport_pc
alias scsi_hostadapter aic7xxx
alias eth0 e100 e100_speed_duplex=4
alias scsi_hostadapter ips
#alias eth1 eepro100
alias scsi_hostadapter1 aic7xxx
alias scsi_hostadapter2 aic7xxx
#alias usb-controller usb-ohci
alias scsi_hostadapter ips
alias scsi_hostadapter ips
The specific parameter values can be found in the network card driver's README file.
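To check the result from the service console, the mii-tool utility included with the Red Hat-based console OS can be used. This is a sketch under the assumption that the console NIC driver supports MII reporting; not every driver does.

# Show the speed/duplex currently negotiated by the console NIC
mii-tool eth0

# Force 100 Mbps full duplex at runtime (not persistent across reboots;
# the persistent setting belongs in /etc/modules.conf as shown above)
mii-tool -F 100baseTx-FD eth0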
You can also set the network speed and duplex mode through the management interface: log in as root and set the properties in the network connections menu.
Adjusting the QLogic card
Increasing the queue depth of QLogic HBA cards can greatly improve performance. The default queue depth is 16; testing has shown that a value of 64 can improve performance (the optimal queue depth may vary depending on the configuration).
The queue depth can be adjusted in the /etc/vmware/hwconfig file. Search for the line device.x.x.x.name = "QLogic Corp QLA2300 64-bit fc-al Adapter (rev 01)" (the numbers may vary depending on your hardware):
device.7.3.0.class = "0c0400"
device.7.3.0.devid = "2300"
device.7.3.0.name = "QLogic Corp QLA2300 64-bit fc-al Adapter (rev 01)"
# Add the queue depth below
device.esx.7.3.0.options = "ql2xmaxqdepth=64"
# For older ESX Server versions, add the following instead
device.vmnix.7.3.0.options = "ql2xmaxqdepth=64"
device.7.3.0.subsys_devid = "0009"
device.7.3.0.subsys_vendor = "1077"
device.7.3.0.vendor = "1077"
NUMA Tuning
ESX Server supports current NUMA systems, including the x445, very well; if the system has been configured according to the hardware layout recommendations earlier in this section, it is already well set up. However, if the server's workload calls for specific CPU placement, for example pinning a virtual machine to a specific NUMA node such as one SMP expansion module of an x445, you can manually assign a specific NUMA node to a virtual machine using the VMware management interface.
For example, when a 16-way x445 runs 64 virtual machines, you should place 16 virtual machines on each 4-way SMP node so that memory is allocated according to the physical location of the CPUs; that is, a virtual machine running on one NUMA node will not use memory belonging to another NUMA node.
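If you prefer to record the placement in the virtual machine's configuration file rather than set it through the management interface, a sketch is shown below. The option names sched.cpu.affinity and sched.mem.affinity are assumptions for this ESX generation and should be checked against the documentation for your release.

# Hypothetical per-VM configuration excerpt: restrict this virtual machine's
# virtual CPUs and memory to one NUMA node (CPUs 0-3 on the first node).
# Option names are assumptions - verify them for your ESX version.
sched.cpu.affinity = "0,1,2,3"
sched.mem.affinity = "0"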
VMware Kernel Swap Tuning
The swapping mechanism of the VMware kernel makes it possible to run a very large number of virtual machines on a single physical machine. However, once the system starts to use the swap mechanism, the disk I/O load increases.
To keep performance optimal, you need to watch the VMware kernel's swap file closely. When the VMware kernel begins writing swap data to disk, you should either reduce the number of virtual machines or install more memory. Ideally, the system should never start using the swap file during normal operation. To minimize the impact of the swap file, it is recommended to place it on a VMFS partition.
Note: If you install ESX Server on a blade with an IDE hard disk, you can only place the swap file on external storage, because the VMFS file system does not support IDE devices.
Caution: Keep a close eye on the swap values in /proc/vmware/swap/stats and try to keep them at 0.
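A simple way to watch this from the service console is a small loop over the /proc path mentioned above (a minimal sketch; adjust the interval to taste):

# Print the VMkernel swap statistics once a minute; any growth in the
# usage figures means the VMkernel has started swapping and you should
# add memory or reduce the number of running virtual machines.
while true; do
    date
    cat /proc/vmware/swap/stats
    sleep 60
done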
Tuning the virtual machines
Since the default VMware kernel parameters are already very good, tuning the virtual machines themselves often yields the larger performance gains. Depending on the workload of a virtual machine, some of the hints in this section can improve its performance considerably.
It is important to note that any performance tuning inside a virtual machine also benefits the overall performance of the server as a whole.
Tip: It is recommended to install VMware Tools inside each virtual machine, along with the appropriate drivers, to improve performance and reduce the load on the whole ESX Server.
Adjusting virtual machine memory allocation
When a new virtual machine is created, you must select its memory size, just as when sizing a standalone server.
If you assign too little memory to a virtual machine and the guest system and its applications need more, swapping will occur.
In general, swapping is very bad: compared with fast memory access, disk access is much slower. It is therefore recommended to calculate the memory size from the combined needs of the guest operating system and the applications it runs. System monitoring tools can be used to observe the memory usage of the virtual machine.
ESX Server provides several ways to adjust memory allocation.
Memory can be allocated according to a virtual machine's needs, and unused memory can be shared with other virtual machines. Although resizing memory is easy, note that each time the memory size is changed, the guest operating system in that virtual machine must be restarted.
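For illustration, the memory size chosen in the management interface ends up as a single line in the virtual machine's configuration file, as in the sketch below (the 512 MB value is an example only):

# Virtual machine configuration excerpt: assign 512 MB of RAM to the guest.
# Changing this value only takes effect after the guest OS is restarted.
memsize = "512"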
Virtual machines can use two types of virtual disk controller. BusLogic is the default; it has very good compatibility and supports a wide range of operating systems.
The BusLogic driver supports all guest systems and handles small 1 KB files well; if your application really does work with many such small files, this driver is a good fit. However, it is not the best choice for high performance. If performance matters, the LSI Logic driver is strongly recommended; it can greatly improve performance, especially for large files, but not every guest operating system supports it. VMware provides floppy images with the additional drivers for Linux and Windows guests.
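The controller choice is likewise recorded in the virtual machine's configuration file. The sketch below uses standard VMware configuration syntax; in practice the controller is normally selected in the management interface when the virtual machine is created.

# Virtual machine configuration excerpt: select the virtual SCSI controller.
# "buslogic" is the compatible default; "lsilogic" usually gives better
# throughput for large files if the guest OS has a driver for it.
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"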
Disable devices that are not in use
ESX Server provides a rich set of virtual hardware, but much of it is often unused. For example, the CD-ROM is only needed the first time the system is installed and is useless afterwards, and the serial and parallel ports are usually not used at all. Windows polls these devices repeatedly, which can consume a lot of CPU time and sometimes slows the system down. Typically, you should disable the following rarely used devices: COM1, COM2, LPT1, and the CD-ROM (or at least disable CD-ROM autorun).
Tip: On Windows 2003 you can disable CD-ROM autorun as follows:
Edit the registry key
HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer
and set NoDriveTypeAutoRun to 0x000000FF.
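The same change can be scripted from a command prompt with the built-in reg.exe tool, as in the minimal example below (the data value 255 equals 0xFF, which disables autorun for all drive types):

:: Disable CD-ROM/drive autorun for the current user on Windows 2003
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v NoDriveTypeAutoRun /t REG_DWORD /d 255 /f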
NIC Driver
ESX Server provides each virtual machine with an AMD PCnet card as the default network adapter. All guest operating systems support this card, so compatibility is good but performance is relatively poor. After VMware Tools is installed, the updated network driver it provides greatly improves network performance.
Tip: If the NIC runs into problems, consider switching back to the AMD PCnet card to troubleshoot the error.
Adjusting terminal servers
When a virtual machine runs many threads, as a terminal server does, you can get additional performance by telling ESX Server what kind of workload the virtual machine carries.
To adjust the workload setting, open the management interface and set the virtual machine's workload option to terminal services. This setting can also speed up other virtual machines that run many concurrent threads, even if they are not actually terminal servers.
After changing the setting, you need to restart the guest operating system for the configuration to take effect.
Tip: On ESX Server 1.5.2 and 2.0.1, you need to add the setting directly to the virtual machine's configuration file:
workload=TerminalServices
