Reproduced: Windows Common Performance Counters (a good write-up)

Source: Internet
Author: User
Tags: disk usage

Reprint Address: http://blog.csdn.net/dfbrt56/article/details/3341591

Windows Common Performance Counters

A performance counter is a data metric that describes some aspect of the performance of a server or operating system. Counters play a key role in performance testing: when analyzing the scalability of a system or locating a performance bottleneck, analyzing the counter values is essential. It must be stressed, however, that a single performance counter reflects only one aspect of system performance, so the analysis of performance test results must be based on several different counters.

Another term related to performance counters is resource utilization, which refers to how the various resources of the system are used. To make comparison easier, utilization is usually expressed as "resources actually used / total resources available", so that the usage of different kinds of resources can be compared on a common basis.

Performance Testing: Memory (Windows)

To monitor low-memory conditions, start with the following object counters:

· Memory/Available Bytes

· Memory/Pages/sec

Available Bytes is the amount of physical memory, in bytes, currently available for use by processes (reference value: >= 10% of physical memory). Pages/sec is the number of pages read from disk to resolve hard page faults, plus the number of pages written to disk to free working set space when page faults occur.

If the value of Available Bytes is small (4 MB or less), the total memory on the computer may be insufficient, or a program may not be releasing memory. If the value of Pages/sec is 20 or greater, you should investigate paging activity further. A large Pages/sec value does not necessarily indicate a memory problem; it may instead be caused by a program that uses memory-mapped files.
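
A minimal sketch (not part of the original article) of how these two counters can be sampled with the Windows PDH API; the counter paths, the one-second sampling interval, and the omission of error handling are assumptions made for brevity:

    /* Sample Memory\Available Bytes and Memory\Pages/sec once via PDH. */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>
    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER availBytes, pagesPerSec;
        PDH_FMT_COUNTERVALUE value;

        PdhOpenQueryA(NULL, 0, &query);
        /* English counter paths work regardless of the system locale. */
        PdhAddEnglishCounterA(query, "\\Memory\\Available Bytes", 0, &availBytes);
        PdhAddEnglishCounterA(query, "\\Memory\\Pages/sec", 0, &pagesPerSec);

        PdhCollectQueryData(query);      /* rate counters such as Pages/sec need two samples */
        Sleep(1000);
        PdhCollectQueryData(query);

        PdhGetFormattedCounterValue(availBytes, PDH_FMT_LARGE, NULL, &value);
        printf("Available Bytes: %lld\n", value.largeValue);

        PdhGetFormattedCounterValue(pagesPerSec, PDH_FMT_DOUBLE, NULL, &value);
        printf("Pages/sec      : %.2f\n", value.doubleValue);

        PdhCloseQuery(query);
        return 0;
    }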

The operating system often swaps pages to disk to increase the amount of memory available to the system or to improve the efficiency of memory use. The following four indicators directly reflect how frequently the operating system swaps to disk.
Page Faults/sec

A page fault occurs when the processor references a page that is not in the process's working set in physical memory. If the page is found elsewhere in memory, the fault is a soft fault (measured by Transition Faults/sec); if the page is on disk and must be read back from disk, it is a hard fault. Hard faults slow the system down rather than speed it up. Page Faults/sec is the number of page faults handled per second, including both hard and soft faults.
Pages Input/sec
Indicates the number of pages read from disk to resolve hard page faults (reference value: >= Page Reads/sec, since one read operation can bring in more than one page).
Page Reads/sec
Indicates the number of disk read operations performed to resolve hard page faults. (reference value: <= 5)
Pages/sec
Indicates the number of pages read from or written to disk to resolve hard page faults (reference value: 0~20).

Available Bytes, Pages/sec, and Paging File/% Usage must be monitored together to determine whether this is the case. If a non-cached memory-mapped file is being read, you should also check whether the cache activity is normal.

Cache Bytes
The size of the file system cache (by default it may use up to 50% of available physical memory).

Memory leaks

· Memory/Available Bytes

· Memory/Committed Bytes

If you suspect a memory leak, monitor Memory/Available Bytes and Memory/Committed Bytes to observe the overall memory behavior, and monitor Process/Private Bytes, Process/Working Set, and Process/Handle Count for the process you think may be leaking memory. If you suspect a kernel-mode component is causing the leak, also monitor Memory/Pool Nonpaged Bytes, Memory/Pool Nonpaged Allocs, and Process(process_name)/Pool Nonpaged Bytes.

Private Bytes
The number of bytes the process has allocated that cannot be shared with other processes. A large or continually growing value of this counter may be a sign of a memory leak.

Checking for excessively frequent paging

Because excessive paging makes heavy use of the hard disk, a memory shortage that leads to paging is easily confused with a disk bottleneck that produces the same symptoms. Therefore, when investigating the cause of paging that is not obviously memory-related, you must track the following disk usage counters together with the memory counters:

· Physical Disk/% Disk Time

· Physical Disk/Avg. Disk Queue Length

For example, look at Page Reads/sec together with % Disk Time and Avg. Disk Queue Length. If the page read rate is low while the values of % Disk Time and Avg. Disk Queue Length are high, there may be a disk bottleneck. However, if the queue length increases while the page read rate does not decrease, memory is insufficient.

To determine the impact of excessive paging on disk activity, multiply the values of the Physical Disk/Avg. Disk sec/Transfer and Memory/Pages/sec counters. If the product of these counters exceeds 0.1, paging is taking more than 10% of the disk access time. If this persists over a long period, you probably need more memory.
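
For example (the numbers are hypothetical, only to show the arithmetic): with Avg. Disk sec/Transfer = 0.008 seconds and Pages/sec = 20, the product is 0.008 × 20 = 0.16, i.e. paging is consuming roughly 16% of the disk access time, which is above the 0.1 threshold.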

Investigating program activity

Next, check whether a running program is causing excessive paging. If possible, stop the program with the highest working set value and see whether the paging rate changes significantly. If you suspect excessive paging, check the Memory/Pages/sec counter. This counter shows the number of pages that had to be read from disk because they were not in physical memory. (Note the difference from Page Faults/sec, which only indicates that the data was not immediately available in the process's working set.)

Performance Testing: Processor (Windows)

Monitoring the Processor and System object counters can provide valuable information about processor usage and help you decide whether there is a bottleneck. You should include the following:

    • Processor/% Total Processor Time gives the overall processor usage.

This counter represents the overall processor utilization of the server; for a multiprocessor system it is the average utilization of all CPUs. If the value continuously exceeds 90%, the whole system is facing a processor bottleneck and performance needs to be improved by adding processors.

Note that, because of the characteristics of the operating system, on some multi-CPU systems this value itself may not be large while the load is distributed extremely unevenly among the CPUs; that situation should also be regarded as a processor bottleneck.

    • Monitor Processor/% Processor Time, Processor/% User Time, and % Privileged Time for more information.

Processor/% User Time is the CPU time consumed by non-kernel operations of the system. If this value is large, consider whether it can be reduced by optimizing algorithms. If the server is a database server, a large Processor/% User Time is most likely caused by sorting or function operations in the database consuming too much CPU time, so consider optimizing the database.

    • The System/Processor Queue Length counter is used for bottleneck detection.

% Total Processor Time
The percentage of time that all processors in the system are busy. For a multiprocessor system, this value reflects the average busy state of all processors: if all processors are busy, the value is 100%; if half of the processors are busy, the value is 50%.

File Data Operations/sec
The rate at which the computer reads from and writes to the file system, not including file control operations.

Processor Queue Length
The number of threads waiting in the queue for CPU time; this does not include the threads currently running on the processors. If the queue length is persistently greater than the number of processors + 1, the processor may be congested (reference value: <= number of processors + 1).

% Processor Time
CPU utilization. This is the counter most commonly used to check whether the processor is saturated. If the value continuously exceeds 95%, the current system bottleneck is the CPU; consider adding a processor or replacing it with a faster one. (reference value: < 80%)

% Privileged Time
The percentage of time the CPU spends executing threads in privileged mode. Typical system services, process management, memory management, and other operations initiated by the operating system itself fall into this category.

% User Time
The counterpart of % Privileged Time: the percentage of time spent on user-mode (non-privileged) operations. If the value is large, consider whether it can be reduced by optimizing algorithms. If the server is a database server, a large value is most likely caused by sorting or function operations in the database consuming too much CPU time; consider optimizing the database in that case.

% DPC Time
The percentage of time the processor spends servicing deferred procedure calls, which largely reflects network processing; the lower the value, the better. On multiprocessor systems, if this value is greater than 50% and % Processor Time is very high, adding a network card may improve performance.

Observing processor usage

To measure processor activity, look at the Processor/% Processor Time counter. This counter shows the percentage of time the processor is busy executing non-idle threads.

When checking processor usage, consider the role of the computer and the type of work it performs. Depending on that work, a high processor value means either that the system is efficiently handling a heavy workload or that it is struggling to keep up. For example, if you are monitoring a user's computer that is used for calculations, the calculation program may easily use 100% of the processor time. Even if this affects the performance of other applications on the computer, it can be resolved by changing the workload.

On the other hand, on a server that handles many client requests, values close to 100% indicate that requests are queuing up and waiting for processor time, causing a bottleneck. Such sustained high processor use is unacceptable for a server.

Investigating processor bottlenecks

Processor bottlenecks develop gradually when the threads of a process demand more processor cycles than are available. A long processor queue builds up and system response suffers. Two common causes of processor bottlenecks are CPU-bound programs or drivers, and subsystem components that generate excessive interrupts.

To determine whether a processor bottleneck exists because of high demand for processor time, check the System/Processor Queue Length counter. A queue of two or more items indicates a bottleneck. If more than one process is competing for most of the processor time, installing a faster processor will increase throughput. An additional processor can help if you are running multithreaded processes, but be aware that extra processors may bring only limited benefit.

In addition, the Server Work Queues/Queue Length counter, which tracks the current length of the server work queue for the computer, can also reveal a processor bottleneck. A queue length that stays above 4 may indicate processor congestion. This counter is a point-in-time value, not an average over time.

To determine whether interrupt activity is causing a bottleneck, observe the Processor/Interrupts/sec counter, which measures the rate of service requests from input/output (I/O) devices. If the value of this counter increases significantly while system activity does not increase correspondingly, there is probably a hardware problem.

You can also monitor Processor/% Interrupt Time as an indirect indicator of the activity of disk drives, network cards, and other devices that generate interrupts.

Attention

To detect hardware problems that may affect processor performance, such as IRQ conflicts, observe the value of System/File Control Bytes/sec.

Monitoring multiprocessor systems

To observe the efficiency of a multiprocessor computer, use the following additional counters.

Process/% Processor Time
The sum of the processor time used on each processor by all threads of the process.

Processor(_Total)/% Processor Time
A measure of processor activity across all processors in the computer. It is the sum of the average non-idle time of all processors during the sampling interval, divided by the number of processors. For example, if all processors are busy for half of the sampling interval, the value shown is 50%; if half of the processors are busy for the entire interval and the others are idle, the value is also 50%.

Thread/% Processor Time
The amount of processor time used by a thread.

Performance Testing: Disk (Windows)

Monitoring object: PhysicalDisk
If the counter metrics analyzed are from a database server, a file server, or a streaming media server, disk I/O is more likely to be a bottleneck for these systems.

The number of I/Os per disk can be used to compare the I/O capability of disks: if the computed number of I/Os per disk exceeds the disk's nominal I/O capability, there is a real disk performance bottleneck.

The following table shows the calculation formulas for I/O per disk:

RAID0: (Reads + Writes) / Number of Disks
RAID1: (Reads + 2*Writes) / 2
RAID5: [Reads + (4*Writes)] / Number of Disks
RAID10: [Reads + (2*Writes)] / Number of Disks

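A small worked example of the formulas above; the read/write rates and the disk count are made-up numbers used only to illustrate the arithmetic:

    /* Per-disk I/O estimates for the RAID formulas above (hypothetical sample values). */
    #include <stdio.h>

    int main(void)
    {
        double reads  = 100.0;  /* Disk Reads/sec measured on the array  */
        double writes = 50.0;   /* Disk Writes/sec measured on the array */
        double disks  = 4.0;    /* number of disks in the array          */

        printf("RAID0 : %.1f I/Os per disk\n", (reads + writes) / disks);      /* 37.5  */
        printf("RAID1 : %.1f I/Os per disk\n", (reads + 2 * writes) / 2);      /* 100.0 */
        printf("RAID5 : %.1f I/Os per disk\n", (reads + 4 * writes) / disks);  /* 75.0  */
        printf("RAID10: %.1f I/Os per disk\n", (reads + 2 * writes) / disks);  /* 50.0  */
        return 0;
    }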

% Disk Time
Indicates the percentage of time the disk drive is busy servicing read or write requests; if only % Disk Time is high, the hard disk may be the bottleneck.

Avg. Disk Queue Length
The average number of read and write requests queued for the disk during the sample interval; performance can be improved by adding disks to the array (reference value: <= 2 times the number of disks).

Avg. Disk Read Queue Length
Indicates the average number of queued disk read requests.

Avg. Disk Write Queue Length
Indicates the average number of queued disk write requests.

Avg. Disk sec/Read
The average time, in seconds, to read data from the disk.

Disk Bytes/sec provides the throughput rate of the disk system.
Determining the balance of workloads
To balance the load on a network server, you need to know how busy the server's disk drives are. Use the Physical Disk/% Disk Time counter, which shows the percentage of time the drive is active. If % Disk Time is high (more than 90%), check the Physical Disk/Current Disk Queue Length counter to see how many system requests are waiting for disk access. The number of waiting I/O requests should stay at no more than 1.5 to 2 times the number of spindles that make up the physical disk.

Avg. Disk sec/Transfer
The average time, in seconds, of each disk transfer (read or write).

This counter reflects the time the disk takes to complete a request. A high value may indicate that the disk controller is repeatedly retrying the disk because of failures, and these failures increase the average transfer time. In general, a value below 15 ms is excellent, 15-30 ms is good, 30-60 ms is acceptable, and above 60 ms you should consider replacing the hard disk or changing the RAID configuration.

Avg. Disk Bytes/Transfer

A value greater than 20 KB indicates that the disk drive is generally performing well; low values result when an application accesses the disk inefficiently. For example, an application that accesses a disk randomly increases the Avg. Disk sec/Transfer time, because random transfers require additional seek time.

Performance Testing: Network (Windows)

Monitoring object: Network Interface
Network analysis is highly technical work, and organizations generally have dedicated network administrators who do it. For test engineers, if the network is suspected to be the bottleneck of the system, you can ask the network administrator to help with network monitoring and analysis.

Network Interface/Bytes Total/sec is the rate at which bytes are sent and received over the interface, including framing characters. The value of this counter can be used to judge whether the network connection speed is a bottleneck by comparing it with the current network bandwidth.

Bytes Total/sec
Represents the rate at which bytes are received and sent on the network interface; it can be used to determine whether the network is a bottleneck (reference value: the counter divided by the network bandwidth should be < 50%).
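
A small sketch of that comparison; the measured byte rate and the 100 Mbps link speed below are hypothetical values (the counter is in bytes while the link speed is in bits, hence the factor of 8):

    /* Compare Network Interface\Bytes Total/sec with the link bandwidth. */
    #include <stdio.h>

    int main(void)
    {
        double bytes_total_per_sec    = 6250000.0;   /* Network Interface\Bytes Total/sec */
        double bandwidth_bits_per_sec = 100e6;       /* 100 Mbps network interface        */

        double utilization = bytes_total_per_sec * 8.0 / bandwidth_bits_per_sec;
        printf("Network utilization: %.0f%%\n", utilization * 100.0);  /* 50%, right at the suggested limit */
        return 0;
    }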

Performance Testing: Process (Windows)

Viewing the % Processor Time value of a process

The % Processor Time of each process reflects the processor time consumed by that process. By comparing the processor time consumed by different processes, it is easy to see which process consumes the most processor time during the performance test, so that the application can be optimized accordingly.

Viewing the page faults generated by each process

The ratio of the page faults generated by each process (obtained from the Process/Page Faults/sec counter) to the system's total page faults (obtained from the Memory/Page Faults/sec counter) can be used to determine which process generates the most page faults. That process is either one that needs a lot of memory or a very active one, and it can then be analyzed further on its own.

Understanding a process's Process/Private Bytes

Process/Private Bytes is the number of bytes currently allocated by the process that cannot be shared with other processes. This counter is mainly used to determine whether a process leaks memory during the performance test.

For example, for a web application running on IIS, we can focus on monitoring the Private Bytes of the inetinfo process. If the Private Bytes counter of this process keeps increasing during the performance test, or remains at a high level for some time after the performance test has stopped, the application has a memory leak.
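
As a rough illustration (not from the original article), this trend can be watched with a simple PDH loop; the inetinfo process name, the one-minute interval, and the 30-sample duration are assumptions of this sketch:

    /* Sample Process(inetinfo)\Private Bytes periodically and watch for steady growth. */
    #include <windows.h>
    #include <pdh.h>
    #include <stdio.h>
    #pragma comment(lib, "pdh.lib")

    int main(void)
    {
        PDH_HQUERY query;
        PDH_HCOUNTER privateBytes;
        PDH_FMT_COUNTERVALUE value;

        PdhOpenQueryA(NULL, 0, &query);
        PdhAddEnglishCounterA(query, "\\Process(inetinfo)\\Private Bytes", 0, &privateBytes);

        for (int i = 0; i < 30; i++) {                 /* roughly 30 minutes of samples */
            PdhCollectQueryData(query);
            PdhGetFormattedCounterValue(privateBytes, PDH_FMT_LARGE, NULL, &value);
            printf("sample %2d: Private Bytes = %lld\n", i, value.largeValue);
            Sleep(60 * 1000);                          /* a value that only ever rises suggests a leak */
        }

        PdhCloseQuery(query);
        return 0;
    }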

(Note: the counters used in process-level analysis mainly include Process/% Processor Time, Page Faults/sec, and Private Bytes.)

Related links:

① Memory-mapped file mechanism

A memory-mapped file uses virtual memory to map a file into the address space of a process. After that, the process operates on the file as if it were addresses in its own memory space, eliminating the overhead of explicit read/write I/O.

For example, memory functions such as memcpy can be used directly on the mapped region. This approach works very well when a file, especially a large file, must be processed frequently, and it is more efficient than ordinary I/O.

With a memory-mapped file you can assume that the operating system has loaded the entire file into memory for you; you then simply move a pointer to read and write it. You do not even need to call the API functions for allocating and freeing memory blocks or for file input/output, and you can also use memory-mapped files as a way to share data between processes. Using a memory-mapped file does not actually perform real file operations; it is more like reserving a visible memory region for each process. When using memory-mapped files to share data between processes, be careful: you have to handle data synchronization yourself, or your application may read stale or incorrect data, or even crash.

Memory-mapped files have some limitations. For example, once you have created a memory-mapped file, you cannot change its size during that session. Therefore, memory-mapped files are most useful for read-only files and for file operations that do not affect the file's size. Of course, this does not mean that a file whose size will change cannot be memory-mapped: you can estimate the likely size of the file after the operation, create a memory-mapped file of that size, and the file can then grow up to that size. With that background covered, the implementation steps are as follows (a small code sketch follows the list):

    1. Call CreateFile to open the file you want to map.
    2. Call CreateFileMapping, passing the handle returned by CreateFile. This function creates a file-mapping object based on the file object created by CreateFile.
    3. Call MapViewOfFile to map the entire file, or a region of it, into memory. This function returns a pointer to the first byte of the mapped region.
    4. Use this pointer to read and write the file.
    5. Call UnmapViewOfFile to unmap the file view.
    6. Call CloseHandle to close the file-mapping object. The handle of the memory-mapped file object must be passed in.
    7. Call CloseHandle to close the file. The handle of the file created by CreateFile must be passed in.
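
A minimal sketch of these seven steps, assuming an existing, non-empty file named data.bin; error handling is mostly omitted:

    /* Map data.bin into memory, touch its first byte, then clean up. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* 1. Open the file to map. */
        HANDLE hFile = CreateFileA("data.bin", GENERIC_READ | GENERIC_WRITE, 0, NULL,
                                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (hFile == INVALID_HANDLE_VALUE) return 1;

        /* 2. Create the file-mapping object from that file handle. */
        HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READWRITE, 0, 0, NULL);

        /* 3. Map the whole file into the process address space. */
        unsigned char *p = (unsigned char *)MapViewOfFile(hMapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);

        /* 4. Read and write through the pointer as if it were ordinary memory. */
        printf("first byte: 0x%02x\n", p[0]);
        p[0] = 0xFF;

        /* 5. Unmap the view. */
        UnmapViewOfFile(p);

        /* 6. Close the file-mapping handle. */
        CloseHandle(hMapping);

        /* 7. Close the file handle returned by CreateFile. */
        CloseHandle(hFile);
        return 0;
    }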