How to detect SQL Server database CPU bottlenecks and memory bottlenecks


I. CPU bottleneck of the SQL Server database

A SQL Server worker can be in one of several states, including running, runnable, and suspended.

You can check the Performance Monitor counter Processor: % Processor Time to identify a CPU bottleneck. If this counter stays very high, for example above 80% for 15-20 minutes, a CPU bottleneck exists.
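
If Performance Monitor is not at hand, a rough DMV-based alternative (an addition not in the original article) is to read the scheduler monitor ring buffer, which records a CPU utilization sample about once a minute. A minimal sketch:

-- Rough sketch: recent CPU samples recorded by SQL Server's scheduler monitor.
-- sql_cpu_pct is the CPU consumed by SQL Server itself, system_idle_pct is idle time;
-- whatever remains belongs to other processes on the machine.
SELECT TOP (10)
       t.[timestamp],
       t.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       t.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS system_idle_pct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS t
ORDER BY t.[timestamp] DESC;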

If you suspect that the hardware is the main factor affecting SQL Server performance, you can monitor the load on the corresponding hardware through Performance Monitor to confirm your guess and identify the system bottleneck. The following describes some common objects and counters.

Memory: Page faults/sec

If this value rises only occasionally, it indicates momentary competition for memory; if it stays high, memory may be the bottleneck.

Process: Working Set

For SQL Server this value should be very close to the amount of memory allocated to SQL Server. In SQL Server settings, if "set working set size" is set to 0, Windows NT determines the working set size of SQL Server; if it is set to 1, the working set size equals the memory allocated to SQL Server. In general, it is best not to change the default value of "set working set size".

Process: % processor time

If this value continuously exceeds 95%, the bottleneck is the CPU. Consider adding a processor or upgrading to a faster one.

Processor: % privileged time

If both this value and the Physical Disk counters remain high, there is an I/O bottleneck; consider a faster disk subsystem. In addition, you can place tempdb in RAM and reduce the "max async IO" and "max lazy writer IO" settings.

Processor: % User Time

This reflects CPU-intensive database operations such as sorting and aggregate functions. If the value is very high, consider adding indexes, using simpler table joins, or partitioning tables horizontally to bring it down.

Physical Disk: avg. Disk Queue Length

This value should not exceed 1.5 to 2 times the number of physical disks. To improve performance, you can add disks.

Note: a RAID volume actually consists of multiple physical disks.

SQL Server: Cache Hit Ratio

The higher the value, the better. If it stays below 80% for a long time, consider adding memory. Note that this value accumulates from the moment SQL Server starts, so after the instance has been running for a while it no longer reflects the current state of the system.
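
The same ratio can also be read from inside SQL Server via sys.dm_os_performance_counters. A minimal sketch (the hit ratio must be divided by its "base" counter to get a percentage; the exact object_name prefix depends on the instance name):

-- Rough sketch: current buffer cache hit ratio computed from the counter DMV.
SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM sys.dm_os_performance_counters AS r
JOIN sys.dm_os_performance_counters AS b
    ON b.object_name = r.object_name
WHERE r.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base'
  AND r.object_name LIKE '%Buffer Manager%';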

Another way to detect CPU pressure is to count the workers that are waiting to run (in the runnable state). You can obtain this information by executing the following DMV query:

SELECT COUNT(*) AS workers_waiting_for_cpu, t2.scheduler_id
FROM sys.dm_os_workers AS t1
JOIN sys.dm_os_schedulers AS t2
    ON t1.scheduler_address = t2.scheduler_address
WHERE t1.state = 'RUNNABLE'
  AND t2.scheduler_id < 255
GROUP BY t2.scheduler_id;

You can also run the following query to obtain the total time that workers have spent waiting for a CPU (signal wait time):

SELECT SUM(signal_wait_time_ms) AS total_signal_wait_ms
FROM sys.dm_os_wait_stats;
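
The absolute number is hard to judge on its own. A common rule of thumb (an assumption added here, not part of the original text) is to look at signal waits as a share of all waits; a persistently high percentage suggests workers are queuing for CPU:

-- Rough sketch: signal waits (time spent waiting for a CPU after the awaited
-- resource became available) as a percentage of total wait time since restart.
SELECT 100.0 * SUM(signal_wait_time_ms) / NULLIF(SUM(wait_time_ms), 0) AS signal_wait_pct
FROM sys.dm_os_wait_stats;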

The following query finds the 100 queries that consume the most CPU per execution:

SELECT TOP 100
    total_worker_time / execution_count AS avg_cpu_cost,
    plan_handle,
    execution_count,
    (SELECT SUBSTRING(text, statement_start_offset / 2 + 1,
            (CASE WHEN statement_end_offset = -1
                  THEN LEN(CONVERT(nvarchar(max), text)) * 2
                  ELSE statement_end_offset
             END - statement_start_offset) / 2)
     FROM sys.dm_exec_sql_text(sql_handle)) AS query_text
FROM sys.dm_exec_query_stats
ORDER BY avg_cpu_cost DESC;

With a slight modification, you can find the most frequently executed queries:

SELECT TOP 100
    total_worker_time / execution_count AS avg_cpu_cost,
    plan_handle,
    execution_count,
    (SELECT SUBSTRING(text, statement_start_offset / 2 + 1,
            (CASE WHEN statement_end_offset = -1
                  THEN LEN(CONVERT(nvarchar(max), text)) * 2
                  ELSE statement_end_offset
             END - statement_start_offset) / 2)
     FROM sys.dm_exec_sql_text(sql_handle)) AS query_text
FROM sys.dm_exec_query_stats
ORDER BY execution_count DESC;

You can use the following Performance Monitor counters to observe compilation and recompilation rates:

1. SQL Server: SQL Statistics: Batch Requests/sec (number of batch requests per second)

2. SQL Server: SQL Statistics: SQL Compilations/sec (number of SQL compilations per second)

3. SQL Server: SQL Statistics: SQL Re-Compilations/sec (number of SQL recompilations per second)

You can also use the following statement to see how much time SQL Server has spent optimizing query plans:

SELECT *
FROM sys.dm_exec_query_optimizer_info
WHERE counter = 'optimizations' OR counter = 'elapsed time';
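
These optimizer counters are cumulative since startup, so a single reading says little about what is happening right now. One way to see current activity, sketched here under the assumption that you can run it interactively, is to snapshot the view twice and diff the occurrence column:

-- Rough sketch: how many optimizations happened in a one-minute window.
SELECT counter, occurrence, value
INTO #optimizer_before
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('optimizations', 'elapsed time');

WAITFOR DELAY '00:01:00';

SELECT a.counter,
       a.occurrence - b.occurrence AS occurrence_delta
FROM sys.dm_exec_query_optimizer_info AS a
JOIN #optimizer_before AS b
    ON b.counter = a.counter;

DROP TABLE #optimizer_before;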

The following query finds the 10 query plans that have been recompiled the most times:

SELECT TOP 10
    plan_generation_num,
    execution_count,
    (SELECT SUBSTRING(text, statement_start_offset / 2 + 1,
            (CASE WHEN statement_end_offset = -1
                  THEN LEN(CONVERT(nvarchar(max), text)) * 2
                  ELSE statement_end_offset
             END - statement_start_offset) / 2)
     FROM sys.dm_exec_sql_text(sql_handle)) AS query_text
FROM sys.dm_exec_query_stats
WHERE plan_generation_num > 1
ORDER BY plan_generation_num DESC;

II. Memory bottleneck of the SQL Server database

When memory is under pressure, a query plan may have to be removed from memory. If that plan is submitted for execution again, it must be optimized again, and because query optimization is CPU-intensive, this puts pressure on the CPU. Similarly, under memory pressure database pages may have to be evicted from the buffer pool; if those pages are needed again soon, more physical I/O is generated.

Generally, memory refers to the available physical memory (RAM) on the server. Another kind of memory is virtual address space (VAS), or virtual memory. On 32-bit Windows systems, every application gets a 4 GB process address space regardless of how much physical memory is installed: 2 GB is available to the process in user mode, and the remaining 2 GB is reserved for kernel mode. You can change this split with the /3GB switch in the boot.ini file.

A common operating system mechanism is paging, which uses a page (swap) file to hold parts of a process's memory that have not been used recently. When that memory is referenced again, it is read back (paged in) from the swap file into physical memory.

You can monitor the following counters in Performance Monitor:

1. Memory: Available Bytes

2. SQL Server: Buffer Manager: Buffer Cache Hit Ratio is the proportion of pages found directly in the buffer pool without reading from disk. For most workloads this value should be high (the bigger, the better).

3. SQL Server: Buffer Manager: Page Life Expectancy is the number of seconds an unreferenced page stays in the buffer pool. A low value means the buffer pool is short of memory.

4. SQL Server: Buffer Manager: Checkpoint Pages/sec is the number of pages flushed by checkpoints, or by other operations that require all dirty pages to be flushed. It shows how much buffer pool activity the workload adds.

5. SQL Server: Buffer Manager: Lazy Writes/sec is the number of buffers written per second by the buffer manager's lazy writer; interpret it together with Checkpoint Pages/sec.
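
The same Buffer Manager counters are exposed inside SQL Server through sys.dm_os_performance_counters, so they can be collected without Performance Monitor. A minimal sketch (note that the */sec counters are cumulative and must be sampled twice and diffed to obtain a per-second rate):

-- Rough sketch: read the Buffer Manager counters from the engine itself.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy',
                       'Checkpoint pages/sec',
                       'Lazy writes/sec');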

 

When you suspect that the memory is insufficient:

Method 1:

[Metrics]: Memory\Available MBytes, Memory\Pages/sec, Memory\Page Reads/sec, Memory\Page Faults/sec

[Reference values]:

If Page Reads/sec stays above 5, memory may be insufficient.

Pages/sec should stay in the range 0-20 (this value remains high if the server does not have enough memory to handle its workload; a value above 80 indicates a problem).

Method 2: Analyze performance bottlenecks based on the Physical Disk counters

[Metrics]: Memory\Available MBytes, Memory\Pages Read/sec, PhysicalDisk\% Disk Time, PhysicalDisk\Avg. Disk Queue Length

[Reference value]: the recommended threshold for % Disk Time is 90%.

When memory is insufficient, some pages are moved out to the hard disk, causing a sharp drop in performance. A system that is short of memory often also shows high CPU utilization, because it must constantly scan memory and move pages from memory to disk.

Suspected memory leak

[Metrics]: Memory\Available MBytes, Process\Private Bytes, Process\Working Set, PhysicalDisk\% Disk Time

[Note]:

In Windows resource monitoring, if the Process\Private Bytes and Process\Working Set counters keep increasing over a long period while Memory\Available Bytes keeps decreasing, a memory leak is likely. Memory leaks should be verified with long-running tests that study how the application responds as memory is gradually exhausted.
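
For SQL Server itself, the same kind of trend can be watched from inside the engine. A rough sketch (sys.dm_os_process_memory is available in SQL Server 2008 and later) that could be logged at regular intervals:

-- Rough sketch: the SQL Server process's own memory usage. Log it periodically;
-- steadily growing values together with falling Available MBytes on the host
-- point in the same direction as a leak.
SELECT physical_memory_in_use_kb,
       virtual_address_space_committed_kb,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;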

 

 

 

CPU bottleneck

1. If the value of System\% Total Processor Time continuously exceeds 90%, the system is spending almost all of its time on the processor and the whole system is facing a CPU bottleneck.

Note: in some multi-CPU systems, the overall figure may not look high while the load is extremely unbalanced across CPUs; this should also be treated as a processor bottleneck.

2. After ruling out memory, if the Processor\% Processor Time counter is high while the NIC and disk counters are low, you can conclude the CPU is the bottleneck. (When memory is insufficient, pages are swapped out to disk and performance drops sharply; a memory-starved system often shows high CPU utilization as well, because it must constantly scan memory and move pages out to disk.)
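
Inside SQL Server, the per-scheduler run queue gives a signal similar to the Processor Queue Length counter. A minimal sketch:

-- Rough sketch: tasks queued per CPU scheduler. Consistently non-zero
-- runnable_tasks_count values suggest the CPUs cannot keep up with the work.
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;   -- user schedulers only, as in the earlier DMV query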

Reasons for high CPU usage:

Programs that run very frequently or perform complex computations consume a lot of CPU.

Complex database queries with many WHERE clauses, ORDER BY and GROUP BY sorts, and the like easily create CPU bottlenecks.

Insufficient memory and disk I/O problems also increase CPU overhead.

 

CPU Analysis

[Metrics]:

System\% Total Processor Time and Processor\% Processor Time

Processor\% User Time and Processor\% Privileged Time

System\Processor Queue Length

System\Context Switches/sec and Processor\% Privileged Time

[Reference values]:

System\% Total Processor Time should not stay above 90%. If the server is dedicated to SQL Server, the maximum acceptable value is 80-85%, and a healthy range is 60-70%.

Processor\% Processor Time should be less than 75%.

System\Processor Queue Length should be less than the total number of CPUs + 1.

Disk I/O Analysis

[Metrics]: PhysicalDisk\% Disk Time, PhysicalDisk\% Idle Time, PhysicalDisk\Avg. Disk Queue Length, PhysicalDisk\Avg. Disk sec/Transfer

[Reference value]: the recommended threshold for % Disk Time is 90%.

In Windows resource monitoring, if % Disk Time and Avg. Disk Queue Length are very high while the Pages Read/sec rate is low, there may be a disk bottleneck.

If Processor\% Privileged Time stays very high and, among the Physical Disk counters, only % Disk Time is large while the other values are moderate, the disk may be the bottleneck; if several of the disk values are large at the same time, the disk is probably not the bottleneck, and a value that stays above 80% may instead indicate a memory leak. If the Physical Disk counters are very high and % Privileged Time also remains high, consider a faster or more efficient disk subsystem.

Avg. Disk sec/Transfer: in general, a value below 15 ms is excellent, 15-30 ms is good, 30-60 ms is acceptable, and above 60 ms you should consider replacing the disks or changing the RAID configuration.
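
To see which database files are actually suffering from high latency, the I/O statistics DMV can be queried directly. A rough sketch (the numbers are cumulative since startup, so compare snapshots over time rather than reading them once):

-- Rough sketch: average read/write latency per database file since startup.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;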

Average Transaction Response Time: if the system processes transactions more and more slowly as the test runs, the application's overall performance will likewise degrade over time in production.

Transactions per Second (TPS): as the load increases, if the hit-rate/TPS curve rises slowly or flattens out, the server is probably starting to hit a bottleneck.

Hits per Second: can be used to judge whether the system is stable. A falling hit rate usually means the server is responding more slowly, and further analysis is needed to locate the bottleneck.

Throughput: the server throughput can be used to evaluate the amount of load generated by the virtual users and to check whether the server's capacity to handle traffic is becoming a bottleneck.

Connections: when the number of connections plateaus while transaction response time rises sharply, adding connections can significantly improve performance (transaction response time will drop).

Time to First Buffer Breakdown (over time): can be used to determine when a server or network problem occurs during a scenario or session step.

Performance problems encountered:
  • 1. Failures under high concurrency (for example, the database connection pool is too small, the number of server connections exceeds the limit, or database lock control is insufficient)
  • 2. Memory leaks (for example, memory that is never released eventually brings the system down after it has run for a long time)
  • 3. Abnormal CPU usage (for example, high concurrency drives CPU usage too high)
  • 4. Excessive logging that fills up the server's disk space
How to locate these performance problems:

1. View the system logs. Logs are the best way to locate problems; if logging is thorough, problems can easily be found through the logs.

For example, if the system goes down and the log records an out-of-memory error while a certain method was executing, you can quickly locate the cause of the memory overflow.

2. Use performance monitoring tools. For B/S projects developed in Java, for example, you can monitor server performance with the JDK's built-in jconsole or with JProfiler; jconsole can remotely monitor the server's CPU, memory, and thread state and plot the curves over time.

Spotlight can be used to monitor database usage.

We need to pay attention to the following performance points: CPU load, memory usage, network I/O, etc.

3. Tools and logs are only a means to an end; you also need to design reasonable performance test scenarios.

Specific scenarios include performance testing, load testing, stress testing, stability testing, and surge testing.

A good test scenario allows you to quickly discover and locate bottlenecks.

4. Understand the system's parameter configuration so that performance tuning can be done later.

In addition, I also want to talk about the use of performance testing tools.

At the beginning, when LoadRunner and JMeter were used for high-concurrency testing, neither tool managed to put the server under real pressure.

If you run into this, you can drive the load from several remote client machines to take the pressure off the testing tool's own client.

The point of mentioning this is that during performance testing, you must make sure the bottleneck is not in your own test scripts or testing tools.
