Repost: Performance Testing of codenon

Source: Internet
Author: User
Tags: jprofiler
1. How to understand TPS?

TPS (transactions per second) is an important performance indicator: the number of transactions completed per unit of time, generally calculated as transactions divided by elapsed time. In a test script, a TPS transaction corresponds to a test transaction defined in the script. In performance testing tools, throughput is also reported as TPS. Throughput directly reflects the carrying capacity of the system: the number of client requests processed per unit time. The unit of measurement varies with the requirement, e.g. requests/s, pages/s, or service calls/hour (for example, in our collection project, throughput could be measured in cards resolved per second). For interactive applications, the user's direct experience is response time, and system capacity can be planned from "concurrent users" and "response time". For non-interactive applications, "throughput" is usually a more reasonable way to describe the expected performance. Throughput is a key indicator in performance testing, and it is related to the number of concurrent users: while there is no performance bottleneck, throughput grows with the number of virtual users, roughly as throughput = (VU count * requests sent by each VU) / unit time. Once a bottleneck is hit, this relationship between throughput and VUs no longer holds.

2. How to understand threads?

A thread is a single sequential flow of control within a process, also called a lightweight process. Benefits of threads:
1) Creating a new thread takes less time than creating a process.
2) Switching between two threads of the same process takes less time when neither is blocked. (In Java, a thread passes through several states, including new, runnable, running, waiting, blocked, sleeping, and dead.)
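The thread states listed above can be observed directly with `Thread.getState()`; note that the actual `java.lang.Thread.State` enum values are NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED ("running" and "sleep" are informal refinements of runnable and timed-waiting). A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: observing Java thread states via Thread.getState().
public class ThreadStates {
    static List<String> observeStates() throws InterruptedException {
        List<String> seen = new ArrayList<>();
        Thread t = new Thread(() -> {
            // Simulated work; a thread inside sleep() reports TIMED_WAITING.
            try { Thread.sleep(100); } catch (InterruptedException e) { }
        });
        seen.add(t.getState().name());   // NEW: created but not started
        t.start();
        Thread.sleep(50);                // give it time to reach sleep()
        seen.add(t.getState().name());   // typically TIMED_WAITING here
        t.join();                        // wait for it to finish
        seen.add(t.getState().name());   // TERMINATED after join()
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(observeStates());
    }
}
```

The same state information is what jstack and jprofiler report for every thread in a running JVM.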
When a thread blocks, a context switch occurs between threads.
3) Threads in the same process share memory and files, so inter-thread communication does not have to go through the kernel.
4) Threads can execute independently, so the processor can work in parallel with peripheral devices.
By using threads, long-running tasks in a program can be moved to the background for processing.
PS: in Java, the stacks of running threads can be dumped with jstack or jprofiler.

3. How to understand response time?

Response time reflects the time required to complete a business operation. In a performance test, it is measured with the tool's transaction functions, which record the time difference between a start event and an end event; this is the "transaction response time". Response time mainly consists of network time, server processing time, and network latency. For interactive applications, the user's direct experience is response time, and system capacity can be planned from "concurrent users" and "response time". For interactive applications, response time typically reaches an inflection point when a bottleneck occurs.

4. How to understand performance modeling?

I don't have a model of my own; I found a document on this earlier and share it here: http://www.docin.com/p-452373613.html

5. How to understand the relationship between response time, the TPS curve, and users?

As the number of users increases, response time remains stable until a bottleneck appears, and TPS grows roughly linearly with the number of concurrent users. After the bottleneck, response time grows longer while TPS stays flat or starts to fall.

6. In LoadRunner, why do we need to set think time and pacing?

1) Think time simulates the time a real user waits between operations.
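A hypothetical virtual-user loop illustrating think time (the names here are illustrative, not LoadRunner API; `sendRequest()` stands in for the real transaction):

```java
// Sketch of a virtual-user loop with think time between steps.
public class ThinkTimeDemo {

    static void sendRequest() {
        // stand-in for the actual request/transaction under test
    }

    // Runs one virtual user: each iteration sends a request, then pauses
    // for the think time, as a real user would. Returns requests sent.
    static int runIterations(int iterations, long thinkMillis)
            throws InterruptedException {
        int sent = 0;
        for (int i = 0; i < iterations; i++) {
            sendRequest();
            sent++;
            Thread.sleep(thinkMillis);   // think time between steps
        }
        return sent;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("requests sent: " + runIterations(3, 10));
    }
}
```

Without the pause, every virtual user hammers the server back-to-back, which overstates the load a given number of real users would generate.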
By definition, think time is the interval between steps within one iteration of an action.
2) Pacing is the interval between two iterations; it adjusts the pace (or rhythm) between actions.
3) Both pacing and think time simulate real-world pauses. Pacing is required for complex scenarios, but choosing the most appropriate pacing requires studying user behavior.

Operating System

1. How to determine CPU, memory, and disk bottlenecks?

CPU bottleneck:
1) Check CPU utilization. Recommended targets are:
   a) user time: 65%~70%
   b) system time: 30%~35%
   c) idle: 0%~5%
If us and sy are consistently above these levels, the CPU is a bottleneck. Use top to check.
2) Check the run queue. Each CPU maintains a run queue; ideally the scheduler keeps it short, with processes either sleeping or runnable. If the CPU is overloaded, the scheduler cannot keep up and runnable processes fill the queue; the larger the queue, the longer programs take to run. "Load" reflects the run queue: with top or uptime you can see the 1-, 5-, and 15-minute load averages. The larger the value, the heavier the system load; a value above 1.00 per CPU means the run queue is backed up.
3) Check context switches. Each CPU (or each core of a multi-core CPU) can execute only one thread at a time, and Linux uses preemptive scheduling: each thread is allocated a time slice, and when the slice expires, the thread blocks on I/O, or a higher-priority thread becomes runnable, Linux switches to another thread, saving the current thread's execution state and restoring the state of the thread about to run.
This process is called a context switch. For Java applications, file I/O, network I/O, lock waits, and thread sleeps all move the current thread into a blocked or sleeping state and trigger context switches; too many context switches drive up kernel CPU usage and slow down the application. Use vmstat and watch the cs column.

Conclusions: check the system run queue and keep it under three runnable threads per processor; keep the user/system ratio of CPU utilization around 70/30. When the CPU spends more time in system mode, it is overloaded and rescheduling priorities should be attempted. When I/O processing increases, CPU-bound application work suffers.
PS: for Java applications, CPU bottlenecks can be monitored and analyzed with jprofiler.

Memory bottleneck:
1) Check usage with free: used = memory in use, free = available memory, shared = memory shared by multiple processes, buffers/cached = disk cache size.
2) Check page and swap activity (vmstat: si, so) and disk I/O:
   si: amount swapped in from disk to memory per second.
   so: amount swapped out from memory to disk per second.
   page-in: reading a page from disk into memory; page-out: writing a page from memory to disk. Page exchange also generates disk I/O, so also watch:
   bi: blocks read per second from block devices (files or swap) into memory.
   bo: blocks written per second from memory to block devices (files or swap).
3)
Check page faults (pidstat -r, sar -B):
   minflt/s: minor page faults per second, i.e. faults resolved by mapping a virtual address to a physical page already in memory.
   majflt/s: major page faults per second, i.e. faults where the virtual address maps to a page in swap and the page must be read back from disk; these are usually generated when memory is insufficient.
   In sar -B, fault/s is the sum of minflt and majflt per second.

Conclusions for monitoring virtual memory:
1) The fewer page faults in the system, the better the response time, because memory is much faster than disk.
2) A small amount of idle memory is a good thing; it means the cache is being used efficiently, unless the swap device and disk are being written to constantly.
3) If the system continuously reports that the swap device is busy, memory is insufficient and should be upgraded.
zee: if the physical memory used for buffers (buff) and cache keeps increasing while free physical memory (free) keeps decreasing, the running processes are steadily consuming physical memory. If used swap (swpd) keeps growing and there is heavy page swapping (si and so), physical memory can no longer satisfy the system and pages must be swapped out to disk; memory has become the system's performance bottleneck.
PS: for Java programs, memory bottlenecks can be analyzed with MAT after a heap dump.

Disk bottleneck:
Use iostat to view I/O statistics.
If %util is close to 100%, too many I/O requests are being generated and the I/O system is saturated; the disk may be a bottleneck. Also watch iowait: a high iowait means the disk is slow or overloaded. Do not trust the svctm field. Monitor the swap and system partitions to make sure virtual memory is not the bottleneck of file-system I/O.
PS: pidstat -d can be used to locate disk bottlenecks per process.

2. How to understand the relationship between CPU, memory, and disk?

These subsystems are interconnected and mutually dependent:
1) A process's data lives in memory, the process runs on the CPU, and the process reads and writes data on disk.
2) When memory is insufficient, pages and swap must be exchanged with disk, generating disk I/O: po and so release physical memory, pi and si increase physical memory usage, and the page exchange itself consumes CPU time (high memory usage).
3) When disk I/O load is too high, monitor the swap and system partitions to make sure virtual memory is not the bottleneck of file-system I/O. Disk is very slow; when iowait rises, the CPU spends a lot of time waiting for disk I/O, and CPU-bound application work is affected.

3. How to understand paging in / paging out?

Linux memory management moves memory to and from disk mainly through paging and swapping. The paging algorithm moves infrequently used pages out to disk and keeps active pages in memory for processes to use. Swapping moves an entire process, not just some of its pages, out to disk. Writing a page to disk is called page-out; bringing a page back from disk into memory is called page-in.
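On Linux, the cumulative paging and swapping counters behind these terms are exposed as "name value" lines in /proc/vmstat (pgpgin/pgpgout for paging, pswpin/pswpout for swapping). A small parser for that format, with sample text inlined so the sketch is self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: parsing /proc/vmstat-style "name value" lines to read
// cumulative paging (pgpgin/pgpgout) and swap (pswpin/pswpout) counters.
public class VmstatCounters {
    static Map<String, Long> parse(String text) {
        Map<String, Long> counters = new HashMap<>();
        for (String line : text.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2) {
                counters.put(parts[0], Long.parseLong(parts[1]));
            }
        }
        return counters;
    }

    public static void main(String[] args) {
        // Sample lines in the /proc/vmstat format; on a real system, read
        // the file itself, e.g. Files.readString(Path.of("/proc/vmstat")).
        String sample = "pgpgin 123456\npgpgout 654321\npswpin 42\npswpout 7\n";
        Map<String, Long> c = parse(sample);
        System.out.println("pages in:    " + c.get("pgpgin"));
        System.out.println("swapped out: " + c.get("pswpout"));
    }
}
```

vmstat -s presents the same counters in human-readable form.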
When the kernel needs a page and finds it is not in physical memory (because it has been paged out), a page fault occurs. When the kernel sees that available memory is low, it frees some physical memory through page-out. Occasional page-out is normal, but if page-out happens so often that the kernel spends more time managing pages than running programs, system performance drops sharply; the system crawls or appears frozen. This state is called thrashing. You can use vmstat -s to view the cumulative pages paged in and out.

4. How to monitor operating system resources? (using one operating system as an example)

(This is largely what I put on my resume, so I'll be lazy and reuse it.)
CPU monitoring: top (utilization), uptime (run queue/load), vmstat (context switches), jprofiler (percentage of CPU time per method).
Memory monitoring: top, free (utilization), vmstat (paging and swapping), pidstat -r and sar -B (page faults), jmap -heap (heap dump), MAT and jprofiler (object analysis).
Disk monitoring: iostat (%util), top (iowait%), pidstat -d.
Network monitoring: netstat (connection counts), nethogs (traffic), Wireshark and tcpdump (packet capture).
JVM monitoring: jstat (GC), jmap (heap dump), jstack (thread dump), jprofiler and VisualVM (analysis tools).
nmon (long-running global data collection).

5. How to understand memory management and thread scheduling? (using one operating system as an example)

6. How to understand context switching? (using one operating system as an example)

Each CPU (or each core of a multi-core CPU) can execute only one thread at a time, and Linux uses preemptive scheduling: each thread is allocated a time slice.
When the slice expires, the thread blocks on I/O, or a higher-priority thread becomes runnable, Linux switches to another thread, saving the current thread's execution state and restoring the state of the next thread. This process is called a context switch. For Java applications, file I/O, network I/O, lock waits, and thread sleeps all move the current thread into a blocked or sleeping state and trigger context switches; too many of them drive up kernel CPU usage and slow down the application. In vmstat this is the cs column.

7. How to understand disk I/O? (using one operating system as an example)

Disk I/O is very slow. In Linux, page swapping occurs when memory is insufficient, generating disk I/O. For CPU-bound applications, when disk I/O is too heavy and iowait rises, performance is affected.
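To close, the no-bottleneck throughput formula from question 1 of the performance-testing section, throughput = (VU count * requests sent by each VU) / unit time, in code (the numbers below are made-up illustrative values):

```java
// Sketch: the no-bottleneck throughput formula from section 1.
public class Throughput {
    // throughput = (VU count * requests sent by each VU) / unit time
    static double throughput(int vuCount, int requestsPerVu, double seconds) {
        return (vuCount * requestsPerVu) / seconds;
    }

    public static void main(String[] args) {
        // e.g. 50 virtual users, each sending 20 requests, over 10 seconds
        System.out.println(throughput(50, 20, 10.0) + " requests/s"); // 100.0 requests/s
    }
}
```

Once a bottleneck appears, measured TPS falls below what this formula predicts; the gap between the two is the sign to start looking at the CPU, memory, and disk indicators above.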
