Use the PerfMon tool to benchmark Windows Servers

Server performance cannot be judged by gut feeling. Even when a server appears to be in good shape, IT engineers still need to define measurement criteria and measure performance against them with a tool. In almost every situation, benchmarks are used to measure and monitor server performance. This article provides an overview of server metrics and benchmarking.

Measure and benchmark servers

Server measurement standards and benchmarking techniques are not new concepts. In fact, they were proposed many years ago and used to test some of the earliest computer systems. Designing benchmark tests that measure server performance, however, is a science in its own right. The idea is to execute a simulated version of the server's expected workload, time its execution, then run the same test on different systems and compare the results.

As server architectures have advanced, it has become more difficult to predict performance across different computer systems through simple analysis. This is where measurement and benchmarking come in.

We use Windows Task Manager to check whether an application or process is consuming memory or CPU. That is a measurement test, albeit a very simple one. The problem with Task Manager is that it does not explain how the machine behaves. Hierarchical cache subsystems, custom applications, customized hardware, massive databases, non-uniform memory access, and simultaneous multithreading processors all have a huge impact on the performance of modern computing systems.

Scientific server performance benchmarking

Server performance is rarely determined by a single factor, so testing it should resemble work in a scientific laboratory. One of the best approaches is to apply the scientific method to the analysis: a six-step process of observation, hypothesis, prediction, environment control, testing, and inference. The end result is a theory and a conclusion supported by the best evidence the test runs can collect, which also establishes the server's best and minimum performance levels.

1. Observation: suppose a system administrator has purchased a server and wants to get the best performance from it. The first step is to determine the server's expected task. Will it run a dedicated application, or act as a virtualization platform? Once these questions are answered, benchmarking can begin. Note that the metrics and benchmarks will change depending on what is being tested and the equipment used. For example, a database system may emphasize processor tests, while a network service may highlight network performance.

2. Hypothesis: at this step, the engineer sets a benchmarking goal. What exactly needs to be tested, and why? Simply running a measurement test will produce results, but those results are useless without a direction or a clear goal. Create a basic objective for the test and center every test method on that objective. For example, an engineer may want to find how much memory an application needs to run at its best, and may hypothesize that memory size "X" will handle the workload optimally. The hypothesis can be based on previous studies, vendor-supplied benchmarks, or other sources. Make sure the hypothesis is testable; do not propose one that cannot be confirmed by the benchmark itself.

3. Prediction: next, make a general prediction about the benchmark's outcome. Suppose the machine is used as a dedicated application server. The system administrator might predict that adding cores to the workload will improve the application's performance. In some cases, engineers can even predict the ratio of improvement and use the benchmark to verify it.

4. Environment control: set the variables. For example, a certain number of cores may need to be allocated to the server. The administrator should change only one setting at a time and measure the performance change that results. An engineer might set the server to 6 GB of memory and test how it works with the other components (CPU, graphics, hard disk, and associated devices). Vary each setting in turn, including processor settings, while keeping all other settings in their initial state.

5. Test: once the variables are set, start testing. Begin from a baseline (a known starting point) and adjust server settings systematically. Each test sequence produces a result that is recorded for future reference; in this context, a test sequence can be thought of as one hardware configuration change. Every new setting must be tested again and its results recorded. After enough cycles, the engineer will have a complete data set from which to draw inferences. (A minimal sketch of such a test loop follows this list.)

6. Inference and conclusion: compare the application's actual performance against the expected resources or target performance. For example, the tests may show that the application runs best with only half the expected cores. From there, determine the best server configuration by combining the core count with the other current variables (required memory size, number of concurrently running applications, software upgrades/service packs, and so on). Note that any change to a variable requires further experiments.
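
The test loop in step 5 lends itself to automation. Below is a minimal sketch in Python, assuming a Windows host and the built-in typeperf command-line utility (which reads the same counters PerfMon exposes); the counter paths, sample counts, and run labels are illustrative assumptions, not prescriptions from the method above.

```python
import csv
import subprocess

# Counters to record for each configuration; these paths are illustrative.
COUNTERS = [
    r"\Processor(_Total)\% Processor Time",
    r"\Memory\Available Bytes",
]

def sample_counters(counters, samples=10, interval=1):
    """Collect `samples` readings of each counter, one every `interval` seconds."""
    cmd = ["typeperf"] + counters + ["-sc", str(samples), "-si", str(interval)]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    # typeperf emits quoted CSV rows plus status messages; keep only the CSV rows.
    rows = [ln for ln in out.splitlines() if ln.startswith('"')]
    return list(csv.reader(rows))

# Record a labeled baseline, change ONE variable, rerun under the same load,
# and keep every result for the step 6 comparison.
for run_label in ["baseline", "config_a"]:
    with open(f"{run_label}.csv", "w", newline="") as f:
        csv.writer(f).writerows(sample_counters(COUNTERS))
```

Each labeled CSV then becomes one data point when drawing inferences in step 6.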

The concept of server performance benchmarking is simple; conducting the benchmark and obtaining valuable data is another matter. Microsoft's Performance Monitor (PerfMon) is a flexible benchmarking tool, but its built-in support for a large number of counters and parameters can make tests complex and even make the results hard to interpret. In the rest of this article, we introduce the counters most commonly used with PerfMon and look at how they bear on actual testing.

Memory allocation and general memory settings

If you allocate too much memory to an application, the performance of other processes on the server may be affected. In fact, improper memory utilization will negatively affect the overall system performance.

When using PerfMon to benchmark a server, the following counters help verify whether memory allocation is affecting overall server performance:

Memory: Available Bytes -- this counter shows how much physical memory remains available once the operating system, server processes, and applications have taken what they need.

Memory: Committed Bytes -- the value of this counter changes over time, so you need to track it to understand peak load activity over a period. Noting when peaks and troughs in Committed Bytes occur shows how the server behaves under load. Make sure available memory stays above 4 MB, or above roughly 5% of committed memory.

Memory: Page Faults/sec -- this counter records page faults generated when an application tries to read from a virtual memory location marked "not present". Ideally this value stays near 0; sustained higher values indicate response-time delays. Remember that Memory: Page Faults/sec is the sum of hard and soft page faults. A hard page fault occurs when the data must be fetched from disk rather than from memory. A soft page fault, by contrast, is resolved by finding the data elsewhere in physical memory; although an interrupt is still handled, the performance impact is minimal. (A short sketch showing how to collect these counters follows.)
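
As a rough illustration, the sketch below takes a single reading of the three memory counters and applies the rule of thumb above as this article states it. It assumes Windows's built-in typeperf utility, and the 4 MB/5% thresholds follow the article's wording rather than any official guidance.

```python
import csv
import subprocess

COUNTERS = [
    r"\Memory\Available Bytes",
    r"\Memory\Committed Bytes",
    r"\Memory\Page Faults/sec",
]

# Take a single reading of each counter (-sc 1 = one sample).
cmd = ["typeperf"] + COUNTERS + ["-sc", "1"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
lines = [ln for ln in out.splitlines() if ln.startswith('"')]
header, values = csv.reader(lines)                  # header row, one data row
available, committed, faults = (float(v) for v in values[1:])  # col 0 = timestamp

# Rule of thumb from above (as this article states it): available memory
# should stay above 4 MB, or above roughly 5% of committed memory.
healthy = available >= max(4 * 1024 * 1024, 0.05 * committed)
print(f"available={available:,.0f} B  committed={committed:,.0f} B  "
      f"page faults/sec={faults:.1f}  healthy={healthy}")
```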

Thread and process monitoring in benchmark tests

Pay attention to several important processor counters, especially when trying to maximize the number of threads per CPU. In particular, watch how often context switches occur.

A context switch occurs when the kernel, or operating system core, switches the processor from one process to another. With each context switch, the contents of the L1 and L2 caches are effectively invalidated and must be refilled. Flushing and refilling the caches wastes valuable time and reduces system performance.

Process: Thread Count: Inetinfo -- records the number of threads created by the Inetinfo process and displays the most recent value.

Thread: % Processor Time: Inetinfo => Thread # -- measures the processor time consumed by each thread of the Inetinfo process.

Thread: Context Switches/sec: Inetinfo => Thread # -- measures the rate of context switches for each Inetinfo thread. Monitoring this counter is important when choosing the maximum number of threads per processor or thread pool: if the overhead of excessive context switching grows too large, the benefit of adding threads disappears. There is a balance point; once it is passed, system performance declines rather than improves. (A monitoring sketch follows.)
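
Here is a minimal monitoring sketch, again assuming typeperf. The Inetinfo process name follows the article's example, and the sketch substitutes the system-wide System: Context Switches/sec counter for the per-thread instances to keep the example short:

```python
import csv
import subprocess

PROCESS = "Inetinfo"  # the article's example process; substitute your own
COUNTERS = [
    rf"\Process({PROCESS})\Thread Count",
    r"\System\Context Switches/sec",  # system-wide rate; per-thread instances
                                      # also exist under the Thread object
]

# 30 samples, 2 seconds apart, taken while the workload runs.
cmd = ["typeperf"] + COUNTERS + ["-sc", "30", "-si", "2"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
lines = [ln for ln in out.splitlines() if ln.startswith('"')]
for timestamp, thread_count, ctx_per_sec in list(csv.reader(lines))[1:]:
    # A context-switch rate that keeps climbing as threads are added suggests
    # the balance point described above has been passed.
    print(timestamp, thread_count, ctx_per_sec)
```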

Measurement and Analysis of Benchmark Testing

Unfortunately, process and server metrics admit a wide range of anomalies that cannot all be listed here. In most cases, however, system performance and metric tests fall into the following categories:

  • Memory management
  • Network capacity
  • Processor performance
  • Disk optimization

Test engineers should be able to use these groups to obtain reliable benchmark results and then use those values to improve and optimize the entire server environment. (A sketch of such a grouping follows.)
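
As a rough illustration of how these four groups translate into concrete PerfMon counter paths, here is one possible mapping; the specific counters chosen are assumptions for illustration, not an exhaustive or authoritative list:

```python
# Illustrative grouping of common PerfMon counters under the four categories
# above; the specific counter choices are this sketch's assumptions.
COUNTER_GROUPS = {
    "memory": [
        r"\Memory\Available Bytes",
        r"\Memory\Page Faults/sec",
    ],
    "network": [
        r"\Network Interface(*)\Bytes Total/sec",
    ],
    "processor": [
        r"\Processor(_Total)\% Processor Time",
        r"\System\Context Switches/sec",
    ],
    "disk": [
        r"\PhysicalDisk(_Total)\% Disk Time",
        r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
    ],
}
```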

Understanding the challenges of benchmarking

Any benchmark or metric evaluation carried out in a server environment comes with caveats.

1. Be cautious with vendor-supplied benchmark results. Vendors tend to benchmark their products against industry standards, which means official benchmark documents or white papers may not apply to your environment. For example, suppose an IT manager plans to purchase software to host a user database on a server. The specifications show the software running stably and responding quickly on Windows Server 2008. That sounds good, but it may not fit the current environment: what if the numbers came from the vendor's tests on a standalone, specially tuned server, while your environment is a virtual machine sharing host resources? Remember, the vendor's goal is to sell you the software, so some "cheating" techniques may be used to make the benchmark scores look good. That improves the numbers on paper but can make things worse in the real environment. Large hardware and software vendors do this, and some smaller vendors massage the data even further. For example, a hardware appliance may claim ideal VPN speeds over the WAN because the test system was tuned for it, yet after actual deployment its speed and performance drop by 20%-30%. Rigorous due diligence is therefore required for any device or software that critical tasks will depend on.

2. Never focus on a single test metric. When benchmarking a server, involve as many components as possible rather than concentrating on one factor such as CPU speed. By observing the behavior of all the components on the server, engineers gain a better understanding of how the overall system runs in different environments, which makes it easier to locate and correct performance problems later.

3. Scrutinize benchmarking service providers. If you plan to outsource benchmark and metric testing, do a thorough investigation first. In many cases even well-known consulting firms ignore or fail to follow basic scientific method. Problems include, but are not limited to, small sample sizes of servers and applications, lack of variable control, limited repeatability of results, and numerical deviations across software and hardware. Watch for outliers: a SQL Server test result that comes in higher than expected, for example, may be an artifact of the hardware used in the test. Vaguely defined hardware requirements are another trap. If the vendor lists hardware without details -- say, a dual-core CPU, 4 GB of memory, and a graphics card described only in MB -- pay extra attention. Every variable matters when examining the fine print of a benchmark. Which processor was used? What kind of memory, and what kind of video card? All of these details make a real difference.

The key is to recognize that every environment is unique, with its own specific set of demands. Metric testing with tools such as PerfMon is an ongoing process involving a large number of parameters that can greatly affect the results. By planning tests carefully and following rigorous scientific method, the administrator can assess the state of hardware and software far more accurately. Done well, the information a good benchmark analysis provides can greatly improve server architecture and performance.

About the author: Bill Kleyman, MBA, MISM, is an avid technologist with rich experience in network infrastructure management. His engineering experience includes deploying large virtualized environments and designing and implementing commercial networks. He is currently the director of technology at World Wide Fittings, which has branches in China, Europe, and the United States.
