Performance counters for LoadRunner monitoring

Today I am posting some of the counters I use, together with their threshold values. They cover my own environment: the Windows operating system, a client/server SQL Server database, and a .NET web platform. We can keep adding to the list, so friends who test Oracle databases on UNIX, Java EE architectures, or WebLogic are welcome to post the counters they use so that everyone can share.

First of all, I hope that this post will finally help you analyze your own test results.

Memory: Memory usage may be the most important factor in system performance. If the system pages frequently, it indicates that there is not enough memory. Paging is the process of moving fixed-size blocks of code and data from RAM to disk, in units called pages, in order to free up memory. Although some paging is acceptable, because it lets Windows 2000 use more memory than is physically installed, frequent paging degrades system performance. Reducing paging significantly improves system response time. To monitor for a low-memory condition, start with the following object counters:

Available MBytes: The amount of physical memory available, in megabytes. If this value is small (4 MB or less), either the computer's total memory is insufficient or a program is failing to release memory.

Pages/sec: The number of pages read from disk to resolve hard page faults, plus the number of pages written to disk to free working set space. In general, if Pages/sec stays continuously in the hundreds, you should investigate paging activity further; it may be necessary to add memory to reduce paging (you can multiply this number by 4 KB, the page size, to estimate the resulting disk throughput). A large Pages/sec value does not necessarily indicate a memory problem; it may simply be the result of a program that uses memory-mapped files.
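
As a quick illustration of the arithmetic above (a sketch with a hypothetical sample value, assuming the common 4 KB page size):

```python
# Estimate the disk traffic implied by paging, assuming 4 KB pages.
PAGE_SIZE_BYTES = 4 * 1024            # typical x86/x64 Windows page size

pages_per_sec = 250                   # hypothetical Memory\Pages/sec sample
paging_bytes_per_sec = pages_per_sec * PAGE_SIZE_BYTES
print(f"Paging traffic ~ {paging_bytes_per_sec / 1024:.0f} KB/s")  # ~1000 KB/s
```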

Page Reads/sec: Hard page faults, a subset of Pages/sec: the number of times per second that the page file had to be read to resolve a memory reference. The threshold is > 5; the lower the better. A large value indicates that data is being read from disk rather than from the cache.
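
If you want to sample these memory counters from a script rather than from PerfMon, one option is the built-in Windows typeperf tool; a minimal sketch (the counter paths assume an English-locale system):

```python
# Collect the three memory counters above with typeperf: 12 samples, 5 seconds apart.
import subprocess

counters = [
    r"\Memory\Available MBytes",
    r"\Memory\Pages/sec",
    r"\Memory\Page Reads/sec",
]
subprocess.run(["typeperf", *counters, "-si", "5", "-sc", "12"], check=True)
```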

Because heavy paging consumes a great deal of disk bandwidth, a memory shortage that causes paging is easy to confuse with a disk bottleneck created by the paging itself. Therefore, when memory is not the obvious cause of the paging, track the following disk usage counters along with the memory counters:

PhysicalDisk\% Disk Time

PhysicalDisk\Avg. Disk Queue Length

For example, look at Page Reads/sec together with % Disk Time and Avg. Disk Queue Length. If the page read rate is low while % Disk Time and Avg. Disk Queue Length are high, there may be a disk bottleneck. However, if the queue length increases while the page read rate does not decrease, memory is insufficient.

To determine the effect of excessive paging on disk activity, multiply the values of the PhysicalDisk\Avg. Disk sec/Transfer and Memory\Pages/sec counters. If the product of these counters exceeds 0.1, paging is consuming more than 10% of disk access time. If this continues over a long period, you probably need more memory.
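
A small sketch of that check, with made-up sample values for the two counters:

```python
# If Avg. Disk sec/Transfer * Pages/sec exceeds 0.1, paging consumes
# more than 10% of disk access time (sample values are hypothetical).
avg_disk_sec_per_transfer = 0.012     # PhysicalDisk\Avg. Disk sec/Transfer
pages_per_sec = 15                    # Memory\Pages/sec

paging_share = avg_disk_sec_per_transfer * pages_per_sec
if paging_share > 0.1:
    print(f"Paging uses ~{paging_share:.0%} of disk access time - consider adding RAM")
else:
    print(f"Paging share of disk access time is ~{paging_share:.0%}")
```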

Page Faults/sec: The number of page faults per second, including both soft faults (pages found elsewhere in memory) and hard faults (pages that must be read from disk); Pages/sec, by contrast, counts only the pages that could not be served immediately from the working set in memory.

Cache Bytes: The file system cache, which by default uses up to 50% of available physical memory. If IIS 5.0 runs out of memory, it automatically trims this cache. Pay attention to the trend of this counter over time.

If you suspect a memory leak, monitor Memory\Available Bytes and Memory\Committed Bytes to observe overall memory behavior, and monitor Process\Private Bytes, Process\Working Set, and Process\Handle Count for the processes you think may be leaking. If you suspect that a kernel-mode component is causing the leak, also monitor Memory\Pool Nonpaged Bytes, Memory\Pool Nonpaged Allocs, and Process(process_name)\Pool Nonpaged Bytes.
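
A minimal leak-watch sketch using the psutil package (the process name inetinfo.exe, the sample count, and the interval are just examples; on Windows, psutil's rss roughly corresponds to the working set and num_handles() to Handle Count):

```python
# Watch a suspect process for steadily growing working set / handle counts.
import time
import psutil

def watch(proc: psutil.Process, samples: int = 5, interval: float = 2.0) -> None:
    for _ in range(samples):
        mem = proc.memory_info()
        print(f"working set={mem.rss // 1024} KB  "
              f"handles={proc.num_handles()}  "
              f"system available={psutil.virtual_memory().available // (1024 * 1024)} MB")
        time.sleep(interval)

suspects = [p for p in psutil.process_iter(["name"]) if p.info["name"] == "inetinfo.exe"]
if suspects:
    watch(suspects[0])
```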

Pages per second: The number of pages retrieved every second. This number should be less than one page per second.

Process:

% Processor Time: The percentage of processor time consumed by the process. If the server is dedicated to SQL Server, the maximum acceptable value is 80-85%.

Page Faults/sec: Compare the page faults generated by a process with those generated by the system as a whole to determine this process's contribution to system page faults.

Working Set: The set of memory pages recently used by the process's threads, reflecting the number of memory pages each process is using. If the server has enough free memory, pages are left in the working set; when free memory falls below a certain threshold, pages are trimmed from the working set.

Inetinfo: Private Bytes: The current number of bytes this process has allocated that cannot be shared with other processes. If system performance degrades over time, this counter can be one of the best indicators of a memory leak.

Processor: Monitoring the Processor and System object counters provides valuable information about processor usage and helps you decide whether a bottleneck exists.

% Processor Time: If this value stays above 95%, the bottleneck is the CPU. Consider adding a processor or switching to a faster one.

% User Time: Time spent on CPU-intensive database operations such as sorting and executing aggregate functions. If this value is high, consider adding indexes, using simpler table joins, or horizontally partitioning large tables to reduce it.

% Privileged Time (CPU kernel time): The percentage of time that threads spend executing code in privileged mode. If this value and the Physical Disk counters are both high, I/O is the likely problem; consider a faster disk subsystem. Placing tempdb in RAM and reducing the "max async IO" and "max lazy writer IO" settings will also lower this value.

In addition, the Server Work Queues\Queue Length counter, which tracks the current length of the computer's server work queue, can also reveal a processor bottleneck. A queue length greater than 4 indicates possible processor congestion. Note that this counter reports a value at a specific moment, not an average over a period of time.

% DPC Time: The lower the better. On a multiprocessor system, if this value is greater than 50% and Processor\% Processor Time is very high, adding a network adapter may improve performance, provided the network itself is not already saturated.
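
To get a rough script-side view of the user/privileged/DPC split described above, psutil's CPU-times breakdown can be used (a sketch and an approximation, not the PerfMon counters themselves; the interrupt and dpc fields only appear on Windows builds of psutil):

```python
# Rough equivalents of % User Time, % Privileged Time and % DPC Time over a 1 s window.
import psutil

cpu = psutil.cpu_times_percent(interval=1.0)
print(f"% User Time       ~ {cpu.user:.1f}")
print(f"% Privileged Time ~ {cpu.system:.1f}")
if hasattr(cpu, "dpc"):                          # Windows-only field
    print(f"% DPC Time        ~ {cpu.dpc:.1f}")

if cpu.user + cpu.system > 95:
    print("CPU above 95% in this sample - the processor may be the bottleneck")
```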

Thread:

Context Switches/sec: (instances for the Inetinfo and dllhost processes) If you decide to increase the size of the thread pool, you should monitor this counter together with the counters above. Increasing the number of threads may increase the number of context switches to the point where performance goes down instead of up. If the context-switch values for these instances are very high, you should reduce the size of the thread pool.
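
System-wide context switches can be sampled with psutil as a rough stand-in for this counter (a sketch; psutil reports a cumulative count, so the rate is derived over a short window):

```python
# Approximate system-wide context switches per second from two cumulative samples.
import time
import psutil

window = 5.0                                   # seconds between samples
before = psutil.cpu_stats().ctx_switches
time.sleep(window)
after = psutil.cpu_stats().ctx_switches

print(f"Context switches/sec ~ {(after - before) / window:.0f}")
```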

Physical Disk:

% Disk Time: The percentage of time that the selected disk drive is busy servicing read or write requests. If all three counters are high, the hard disk is not the bottleneck; if only % Disk Time is high and the other two are moderate, the hard disk may be the bottleneck. Before logging this counter, run diskperf -yd at the Windows 2000 command line. If the value stays above 80%, it may also indicate a memory leak.

Avg. Disk Queue Length: The average number of read and write requests queued for the selected disk during the sample interval. This value should be no more than 1.5 to 2 times the number of spindles. To improve performance, add disks. Note: a RAID volume actually consists of more than one disk.

Avg. Disk Read Queue Length / Avg. Disk Write Queue Length: The average number of read (or write) requests that were queued.

Disk Reads/sec and Disk Writes/sec: The number of read and write operations per second on the physical disk. Their sum should be less than the maximum capacity of the disk device.

Avg. Disk sec/Read: The average time, in seconds, required to read data from this disk.

Avg. Disk sec/Transfer: The average time, in seconds, of each data transfer (read or write) on this disk.
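
A rough script-side equivalent of Disk Reads/sec and Disk Writes/sec using psutil's cumulative disk counters (an approximation sampled over a short window, not the PerfMon counters themselves):

```python
# Per-second physical disk read/write rates from two cumulative samples.
import time
import psutil

window = 5.0                                   # seconds between samples
before = psutil.disk_io_counters()
time.sleep(window)
after = psutil.disk_io_counters()

reads_per_sec = (after.read_count - before.read_count) / window
writes_per_sec = (after.write_count - before.write_count) / window
print(f"Disk reads/sec ~ {reads_per_sec:.1f}, disk writes/sec ~ {writes_per_sec:.1f}")
```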

Network Interface:

Bytes Total/sec: The rate at which bytes are sent and received over the interface, including framing characters. To determine whether the network connection speed is a bottleneck, compare the value of this counter with the current network bandwidth.
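
A worked example of that comparison, with made-up sample values:

```python
# Compare a sampled Bytes Total/sec value against the adapter bandwidth.
bytes_total_per_sec = 9_500_000        # hypothetical Network Interface\Bytes Total/sec
link_speed_bits_per_sec = 100_000_000  # 100 Mbps adapter

utilization = bytes_total_per_sec * 8 / link_speed_bits_per_sec
print(f"Network utilization ~ {utilization:.0%}")   # ~76%
```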

SQL Server Performance counters:

Access Methods: Used to monitor how logical pages within the database are accessed.

Full Scans/sec: The number of unrestricted full scans per second. These can be either base-table scans or full index scans. If this counter shows a value higher than 1 or 2, you should analyze your queries to determine whether full table scans are really necessary and whether the SQL can be optimized.

Page Splits/sec: The number of page splits per second caused by data update operations.

Buffer Manager: Monitors how Microsoft SQL Server uses memory to store data pages, internal data structures, and the procedure cache, as well as the physical I/O that occurs when SQL Server reads database pages from disk and writes them back. Monitoring the memory and counters used by SQL Server helps determine whether a bottleneck exists because frequently accessed data cannot be held in the available physical memory cache; if so, SQL Server must fetch the data from disk. It also helps determine whether query performance could be improved by adding more memory, or by making more memory available to the data cache or to SQL Server internal structures.

These counters also show how often SQL Server needs to read data from disk. Compared with other operations such as memory access, physical I/O takes a significant amount of time, so minimizing physical I/O improves query performance.

Page Reads/sec: The number of physical database page reads issued per second. This statistic shows the total number of physical page reads across all databases. Because physical I/O is expensive, you can minimize this cost with a larger data cache, smarter indexes, more efficient queries, or changes to the database design.

Page Writes/sec: The number of physical database pages written per second.

Buffer Cache Hit Ratio: The percentage of pages found in the buffer cache without having to be read from disk. This ratio is the total number of cache hits divided by the total number of cache lookups since the SQL Server instance started, so after the instance has been running for a while it changes very little. Because reading from the cache is far less expensive than reading from disk, a higher value is desirable. You can generally increase the hit ratio by giving SQL Server more memory. The appropriate value depends on the application, but the ratio is best kept at 90% or higher; add memory until the value stays above 90%, which means that more than 90% of data requests are satisfied from the data cache.
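
The ratio described above, worked through with hypothetical cumulative values:

```python
# Buffer cache hit ratio = cache hits / cache lookups since the instance started.
cache_hits = 9_420_000          # hypothetical cumulative hits
cache_lookups = 9_900_000       # hypothetical cumulative lookups

hit_ratio = cache_hits / cache_lookups
print(f"Buffer cache hit ratio ~ {hit_ratio:.1%}")   # ~95.2%, above the 90% target
```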

Lazy Writes/sec: The number of buffers written per second by the lazy writer process. The ideal value is 0.

Cache Manager: Provides counters for monitoring how Microsoft SQL Server uses memory to store objects such as stored procedures, ad hoc and prepared Transact-SQL statements, and triggers.

Cache Hit Ratio: The overall cache hit ratio. In SQL Server, the cache includes the log cache, buffer cache, and procedure cache, and this counter is the ratio of cache hits to lookups across all of them. It is a good counter for seeing how well the SQL Server cache is working for your system. If this value is low and stays below 80%, you need to add more memory.

Latches: Used to monitor internal SQL Server resource locks called latches. Monitoring latches to gauge user activity and resource usage helps identify performance bottlenecks.

Average Latch Wait Time (ms): The average time, in milliseconds, that a SQL Server thread has to wait for a latch. If this value is high, you may be experiencing serious contention.

Latch Waits/sec: The number of latch waits per second. If this value is high, you are experiencing heavy contention for resources.

Locks: Provides information about SQL Server locks on individual resource types. Locks are taken on SQL Server resources, such as rows read or modified within a transaction, to prevent multiple transactions from using the same resource concurrently. For example, if a transaction holds an exclusive (X) lock on a row in a table, no other transaction can modify that row until the lock is released. Minimizing locking improves concurrency and therefore performance. Multiple instances of the Locks object can be monitored at the same time, each instance representing a lock on one resource type.

Number of Deadlocks/sec: The number of lock requests per second that resulted in a deadlock.

Average Wait Time (ms): The average time, in milliseconds, that threads wait for a lock of a given type.

Lock Requests/sec: The number of lock requests of a given type per second.

Memory Manager: Used to monitor overall server memory usage, gauge user activity and resource consumption, and help identify performance bottlenecks. Monitoring the memory used by a SQL Server instance helps determine:

whether a bottleneck exists because frequently accessed data cannot be held in the available physical memory cache (if so, SQL Server must retrieve the data from disk);

whether query performance could be improved by adding more memory, or by making more memory available to the data cache or to SQL Server internal structures.

Lock Blocks: The number of lock blocks on the server; locks are taken on resources such as pages, rows, or tables. You do not want to see this value growing.

Total Server Memory: The amount of dynamic memory the SQL Server instance is currently using.
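
The SQL Server counters above can also be collected with typeperf; a sketch, assuming a default (non-named) instance and English counter names:

```python
# Sample a few SQL Server counters with typeperf (default instance assumed).
import subprocess

counters = [
    r"\SQLServer:Buffer Manager\Buffer cache hit ratio",
    r"\SQLServer:Memory Manager\Total Server Memory (KB)",
    r"\SQLServer:Locks(_Total)\Number of Deadlocks/sec",
]
subprocess.run(["typeperf", *counters, "-si", "10", "-sc", "6"], check=True)
```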

Counters to monitor for IIS

Internet Information Services Global:

File Cache Hits %, File Cache Flushes, File Cache Hits

File Cache Hits % is the proportion of cache hits among all cache requests and reflects how well the IIS file cache settings are working. For a site consisting mostly of static pages, this value should stay around 80%. File Cache Hits is the absolute number of cache hits, and File Cache Flushes is the number of cache flushes since the server started. If the cache is flushed too slowly, memory is wasted; if it is flushed too quickly, objects are discarded from the cache too often and the cache loses its benefit. By comparing File Cache Hits with File Cache Flushes you can get the ratio of cache hits to cache flushes, and from these two values work out an appropriate flush setting (see the IIS settings ObjectCacheTTL, MemCacheSize, and MaxCachedFileSize).
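
A quick worked comparison of the two values, using hypothetical samples:

```python
# Compare cumulative File Cache Hits with File Cache Flushes (hypothetical samples).
file_cache_hits = 48_000
file_cache_flushes = 3_200
file_cache_hits_pct = 82.0      # File Cache Hits %

print(f"Hits per flush ~ {file_cache_hits / file_cache_flushes:.0f}")   # ~15
if file_cache_hits_pct < 80:
    print("Hit rate below 80% - revisit ObjectCacheTTL / MemCacheSize / MaxCachedFileSize")
```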

Web Service:

Bytes Total/sec: The total number of bytes sent and received by the Web server. A low value indicates that IIS is transferring data at a low rate.

Connection Refused: The lower the better. A high value indicates a bottleneck in the network adapter or processor.

Not Found Errors: The number of requests the server could not satisfy because the requested file was not found (HTTP status code 404).
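
These Web Service counters can also be pulled with typeperf; a sketch (the counter paths are assumptions about an English-locale IIS installation):

```python
# Sample site-wide Web Service counters with typeperf: 12 samples, 5 seconds apart.
import subprocess

counters = [
    r"\Web Service(_Total)\Bytes Total/sec",
    r"\Web Service(_Total)\Current Connections",
]
subprocess.run(["typeperf", *counters, "-si", "5", "-sc", "12"], check=True)
```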
