Software testing, as a part of software engineering, emerged with the industrialization of software production as a dynamic monitoring process. It observes the whole software development process, detecting and reporting problems as they arise, re-assessing new risks, setting new monitoring benchmarks, and continuing. Software testing is a software quality control process and an activity that evaluates the potential risks in a software system. Its purpose is to monitor and eliminate defects, ensuring that software products meet quality requirements in terms of availability, functionality, and operability.
At present, software testing has evolved from passive monitoring and error detection, through software quality control (SQC), to software quality assurance (SQA). This shift lets testing cover the entire software development process rather than pure defect detection and discovery, avoiding the huge risks caused by defects in software requirements and design. A typical testing process can be divided into test requirement analysis, test design, test execution, and defect and configuration management. Software testing technique has been refined into unit testing, integration testing, system testing, and user acceptance testing. As the demands on software product quality grow, performance testing technology has become particularly important.
"Overall View" of Software Performance Testing
The purpose of software performance testing is to check whether a system or system component meets the performance indicators specified in the requirement specification and satisfies performance-related constraints and restrictions. The specification must state the performance expected of the system or its components (for example, speed, accuracy, and frequency).
Performance tests are usually executed in the system test phase, often combined with strength tests, and generally require test tools. Multiple evaluations can be applied to the performance and behavior of a test object. These evaluations focus on obtaining behavior-related data, such as response time, timing profiles, execution flows, operational reliability, and limits, and are mainly carried out during evaluation and testing activities. Performance evaluation can also be used to assess test progress and status while test activities are being executed.
Performance needs to be tested under various conditions, including:
● Different workloads and/or system conditions.
● Different use cases/functions.
● Different configurations.
● Performance requirements are described in the Performance description section in the supplemental specifications or requirement specifications.
When performing a test under the preceding conditions, pay special attention to the following information and generate at least one test requirement for each statement that reflects the information:
● Time statements, such as response time or timing.
● Statements of the number of events or use cases that must occur within a specified time.
● Comparison of one performance behavior with another.
● Comparison of application behavior under one configuration with its behavior under another.
● Operational reliability over a period of time (mean time to failure, MTTF).
● Configurations or constraints.
Software Performance Testing mainly includes the following aspects:
● Dynamic monitoring: obtaining and displaying, in real time, the status of the test scripts being executed during test execution.
● Response time/throughput: evaluate the response time or throughput of the test object for specific subjects and/or use cases.
● Percentile Report: percentile evaluation/calculation of collected data values.
● Comparison report: the difference or trend between two or more datasets representing different test execution conditions.
● Tracking report: message/session details between the driver (test script) and the test object.
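A percentile report of the kind listed above can be computed directly from collected data values. The following is a minimal sketch; the response-time samples and percentile cut points are illustrative:

```python
def percentile(samples, p):
    """Return the p-th percentile of samples using linear interpolation."""
    xs = sorted(samples)
    if not xs:
        raise ValueError("no samples collected")
    k = (len(xs) - 1) * (p / 100.0)
    lo, hi = int(k), min(int(k) + 1, len(xs) - 1)
    frac = k - lo
    return xs[lo] + (xs[hi] - xs[lo]) * frac

# Illustrative response times in milliseconds collected during one test run.
response_times = [120, 95, 240, 130, 110, 480, 105, 150, 98, 300]

# The usual report shows the median and the tail of the distribution.
report = {p: percentile(response_times, p) for p in (50, 90, 95)}
```

The tail percentiles (90th, 95th) matter more than the mean here, because a few slow outliers are exactly what a performance report must not hide.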
"Method View" for Software Performance Testing
The software performance testing methods can be selected based on different testing purposes, as shown in the following table:
According to the test content, the performance test mainly includes the following aspects:
1. Response Time Test
● Response time testing usually measures the client response time under normal single-user operation, as well as the client response time when combined with strength testing, load testing, and stress testing.
● Execution time of functions, methods, objects, and child routines.
● Frequency and nesting of functions and methods.
● The time it takes to run a specific module, execute or process specific data in a specific path.
● Processing accuracy.
● If two run times of the same operation differ by more than a factor of three, there may be a problem.
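The timing measurements above can be sketched with a simple harness. This is an illustrative sketch, not a production profiler; the threefold-difference rule from the last bullet is encoded as a flag:

```python
import time

def measure(func, repeats=5):
    """Run func repeatedly and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        func()
        best = min(best, time.perf_counter() - start)
    return best

def ratio_flag(t_a, t_b, threshold=3.0):
    """Flag a potential problem if the two run times differ by more than threshold x."""
    slow, fast = max(t_a, t_b), min(t_a, t_b)
    return slow / fast > threshold

# Illustrative: compare two implementations of the same computation.
t1 = measure(lambda: sum(range(10_000)))
t2 = measure(lambda: sum(i for i in range(10_000)))
suspicious = ratio_flag(t1, t2)
```

Taking the best of several repeats reduces noise from the scheduler and caches, which is why a single measurement is rarely trusted.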
2. Strength Testing (Stress/Load Testing)
Strength testing runs the system with abnormal quantities, frequencies, or resource usage to probe the actual maximum capacity of the system; it forces the software to run at the limits of its designed capabilities.
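A strength test driver can be sketched as a pool of workers hammering an operation and recording per-call latencies and errors. The operation under test here is a stand-in for a real system call:

```python
import concurrent.futures
import time

def stress_test(operation, concurrency, total_requests):
    """Drive `operation` from `concurrency` workers; collect latencies and errors."""
    latencies, errors = [], 0

    def one_call(_):
        start = time.perf_counter()
        try:
            operation()
            return time.perf_counter() - start, None
        except Exception as exc:
            return time.perf_counter() - start, exc

    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        for elapsed, exc in pool.map(one_call, range(total_requests)):
            latencies.append(elapsed)
            if exc is not None:
                errors += 1
    return latencies, errors

# Illustrative system under test: a function that does a little CPU work.
lats, errs = stress_test(lambda: sum(range(1000)), concurrency=8, total_requests=40)
```

Raising `concurrency` until latency or error counts degrade is how the "maximum actual capacity" mentioned above is located in practice.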
3. Software Reliability Test
Errors frequently found in such tests include out-of-bounds pointers, memory leaks, stack overflows, and incorrect interactions between two or more features. This kind of test is also known as long-sequence testing, duration testing, or endurance testing: it runs for a long time, with the objective of discovering faults that other tests miss.
Software with poor reliability, such as frequent and repetitive failures during execution, cannot work stably.
The purpose of software reliability testing is to provide a quantitative estimate of reliability.
"Metric View" of Software Performance Evaluation
Generally, the following methods can be used to measure the software performance test:
1. Software Reliability (R) Indicators
One quantitative description of software reliability is the probability that the software correctly performs its specified functions, under specified conditions and within a specified time, at a given point in its operational profile.
Another quantitative description is the expected working time of the software product in a given configuration state within its specified operational profile, together with the software failure intensity.
2. Performance Capability Indicators
The ability of a computer system or subsystem to implement its functions, generally measured by how well those functions perform on the system, for example response time, throughput, transaction count, and resource usage.
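The reliability figures above can be estimated from observed failure data. The sketch below assumes the standard exponential failure model (a common modeling choice, not something the source prescribes); the operating hours and failure count are illustrative:

```python
import math

def mttf(operating_hours, failure_count):
    """Mean time to failure: total observed operating time divided by failures."""
    if failure_count == 0:
        raise ValueError("no failures observed; MTTF cannot be estimated this way")
    return operating_hours / failure_count

def reliability(t, mttf_value):
    """R(t) under the exponential model: probability of running fault-free to time t."""
    return math.exp(-t / mttf_value)

# Illustrative data: 5 failures observed over 1000 hours of operation.
m = mttf(1000.0, 5)            # estimated MTTF in hours
r_100 = reliability(100.0, m)  # survival probability over a 100-hour mission
```

This connects the two descriptions above: the failure intensity is 1/MTTF, and the probability description R(t) follows from it.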
"Instance View" for Software Performance Testing
To give readers a deeper understanding of performance testing, the following uses software running on mobile phones as an example to describe how performance testing methods are applied in an actual software development process:
Mobile phone performance testing methods can be divided into manual testing and automatic testing.
Manual tests are performed by testers, with the help of some monitoring instruments and tools, to verify the performance of the phone. However, because phones have many functions, many performance tests must be repeated over and over; the workload is heavy, the testing time is long, and omissions are possible, so the accuracy and efficiency of performance testing cannot be guaranteed.
On the PC platform, many powerful, general-purpose automated testing tools are available, such as WinRunner, Rational Robot, and LoadRunner, but they require secondary development to make them compatible with embedded systems such as mobile phones.
The automated performance test of mobile phones is generally carried out in the following steps:
1. System Analysis
Convert system performance indicators into performance testing targets. This step usually requires analyzing the structure of the system under test and working out a concrete performance-testing implementation plan based on the performance indicators. Testers must fully understand the structure and implementation of the system under test.
2. Create a virtual user script
Convert a business flow into a test script, usually called a virtual user script or virtual user. Virtual users simulate real users by driving a real client program. The tester must identify and record each business process under test from beginning to end, analyze the details and timing of each step, and convert them accurately into a script. It is similar to building a robot process that mimics human behaviors and actions.
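The step above can be sketched as a small virtual-user harness: a recorded flow becomes a list of named steps, and the virtual user replays them while timing each one. The step names and placeholder actions here are illustrative, not a real client protocol:

```python
import time

def make_virtual_user(steps, think_time=0.0):
    """Return a callable that replays recorded (name, action) steps, timing each."""
    def run():
        timings = {}
        for name, action in steps:
            start = time.perf_counter()
            action()
            timings[name] = time.perf_counter() - start
            time.sleep(think_time)  # simulated user "think time" between steps
        return timings
    return run

# Illustrative recorded business flow: log in, query, log out.
flow = [
    ("login",  lambda: sum(range(100))),
    ("query",  lambda: sorted(range(1000), reverse=True)),
    ("logout", lambda: None),
]
vuser = make_virtual_user(flow)
result = vuser()  # per-step timings for one virtual-user run
```

Per-step timings, rather than a single end-to-end number, are what make it possible later to locate which step in the flow is the bottleneck.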
3. Create test scenarios based on user performance indicators
The generated test scripts are replicated and controlled according to actual business scenarios; rules and constraints are set on script execution, turning the scripts into a test-case set that satisfies the performance test indicators. This step requires detailed analysis of the user's performance indicators.
4. Run the test scenario to synchronously monitor application performance
During performance testing, real-time monitoring lets testers see how the application is performing at any moment. Every part of the system must be monitored: the protocol stack, the MMI application, memory usage, and driver status. Real-time monitoring can expose performance bottlenecks early in test execution.
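One common shape for such monitoring is a background sampler that records a metric while the workload runs. As an illustrative sketch (memory is the only metric sampled here, via the standard-library tracemalloc, standing in for the protocol-stack and driver probes the text mentions):

```python
import threading
import time
import tracemalloc

def monitor_memory(stop_event, samples, interval=0.01):
    """Sample currently traced memory usage until stop_event is set."""
    while not stop_event.is_set():
        current, _peak = tracemalloc.get_traced_memory()
        samples.append(current)
        time.sleep(interval)

tracemalloc.start()
stop = threading.Event()
samples = []
watcher = threading.Thread(target=monitor_memory, args=(stop, samples))
watcher.start()

# The workload being observed (illustrative): allocate some data, then idle.
data = [list(range(1000)) for _ in range(200)]
time.sleep(0.05)

stop.set()
watcher.join()
tracemalloc.stop()
```

The sample series, not any single reading, is what reveals a leak or a spike; a flat series under sustained load is the healthy case.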
5. Performance Test Result Analysis and Performance Evaluation
Based on the test results, the performance of the system is analyzed and its performance bottlenecks are accurately located. Mathematical methods can be used to calculate statistics over large volumes of data, making the results more objective. Note that a performance test that runs to completion is not necessarily a successful one; success hinges on whether it accurately simulates the real world.
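The statistical analysis described above can be sketched as reducing raw samples to a few aggregates and then comparing runs, echoing the comparison report mentioned earlier. The latency values for the two runs are illustrative:

```python
import statistics

def summarize(latencies):
    """Reduce raw latency samples to the aggregates used for evaluation."""
    return {
        "mean": statistics.mean(latencies),
        "median": statistics.median(latencies),
        "stdev": statistics.stdev(latencies) if len(latencies) > 1 else 0.0,
        "max": max(latencies),
    }

def compare(baseline, candidate):
    """Relative change of each aggregate vs. the baseline (positive = slower)."""
    b, c = summarize(baseline), summarize(candidate)
    return {k: (c[k] - b[k]) / b[k] if b[k] else 0.0 for k in b}

# Illustrative response times (ms) from two runs under different configurations.
run_a = [110, 120, 115, 130, 125]
run_b = [150, 160, 155, 170, 165]
delta = compare(run_a, run_b)  # e.g. delta["median"] > 0 means run_b was slower
```

Comparing aggregates rather than eyeballing raw numbers is what makes a verdict like "configuration B is about a third slower" defensible.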
Throughout the performance testing process, the choice of automated testing tool affects only how complex test execution is; it is the analysis and thinking done by people that directly determines the success or failure of performance testing.
In short, there are many methods for testing software performance, and different methods use different evaluation indicators. Choose flexibly according to your needs, development model, and development process.