Introduction
Not long ago, industry-standard testing practices (developed for quality assurance of the C/S architecture) still focused on front-end functional testing of the client and back-end scalability and performance testing of the server. This "separation of work" arose mainly because the traditional C/S (Client/Server) architecture is relatively simple compared with today's multi-layer architectures and distributed environments. In a standard C/S architecture, a problem occurs either on the client or on the server.
Today, a typical computing environment is a complex, heterogeneous hybrid whose components and code come from legacy systems, in-house development, third parties, or standard products (Figure 1). With the growth of the Web, architectural complexity has increased further. There is usually a content layer between one or more back-end databases and the user-facing presentation layer. This content layer can supply content from multiple services (which are aggregated in the presentation layer), and it may also contain business logic that, in the traditional C/S architecture, lived on the front end.
This increase in complexity, intertwined with legacy-system integration and cutting-edge technology, means that describing, analyzing, and localizing software and system problems (including functional, scalability, and performance issues) has become a major challenge in developing and releasing software systems. In addition, as SOAP/XML (Simple Object Access Protocol/Extensible Markup Language) becomes a standard data transmission format, XML data content is increasingly important on both the .NET and J2EE platforms. Simply put, the complexity of current architectures and computing environments has made the original C/S-oriented testing model obsolete.
Figure 1: A typical modern multi-layer architecture
Overall Quality Strategy
Obviously, a new and effective quality strategy is necessary for successful software development and deployment. The most effective strategy combines the testing of individual components with testing of the environment as a whole. Under this strategy, both component-level and system-level tests must include functional tests to ensure data integrity, as well as scalability and performance tests to ensure acceptable response times under various system loads.
In terms of performance and scalability evaluation, these parallel modes of analysis help uncover the strengths and weaknesses of the system architecture and determine which components must be examined when addressing performance and scalability problems. A similar functional testing strategy, full data integrity verification, is becoming increasingly critical, because data may come from widely scattered sources. By evaluating data integrity within and across components (including any functional data transformations along the way), testers can characterize each potential error, making system integration and defect isolation part of the standard development process. End-to-end architecture testing refers to the concept of testing all access points in the computing environment, with functional and performance tests integrated at both the component and system levels (see Figure 2).
In a sense, end-to-end architecture testing is essentially "gray-box" testing, an approach that combines the strengths of white-box and black-box testing. In white-box testing, the tester has access to, and substantial knowledge of, the underlying system components. Although white-box testing can provide very detailed and valuable results, it does little to detect integration and system performance problems. Black-box testing, in contrast, requires little or no knowledge of the system's internal workings; it focuses on the end user, ensuring that users get correct results in a timely manner. Black-box testing usually cannot pinpoint the cause of a problem, nor can it ensure that a given piece of code has been executed, runs efficiently, and contains no memory leaks or similar defects. By grafting white-box techniques onto black-box testing, end-to-end architecture testing realizes the advantages of both.
Figure 2: End-to-end architecture testing includes functional and performance testing at all access points
For scalability and performance testing, access points include hardware, operating systems, applications, databases, and networks. For functional testing, access points include the front-end client, the middle tier, content sources, and the back-end database. In this light, the term "architecture" defines how the components in the environment interact with one another and with users. The strengths and weaknesses of components depend on the specific architecture that organizes them. It is precisely how an architecture responds to the varying demands placed upon it that end-to-end architecture testing must determine.
To implement end-to-end architecture testing effectively, RTTS has developed a successful risk-based automated testing methodology. Its Test Automation Process (TAP) is based on years of successful testing practice and uses best-of-breed automated testing tools. It is an iterative method comprising five phases:
- Project evaluation
- Test Plan creation and improvement
- Test Case writing
- Test automation, execution, and tracking
- Test result evaluation
The individual functional and performance tests required by end-to-end architecture testing are carried out in the "test automation, execution, and tracking" phase. As shown in Figure 3, this phase is iterated repeatedly, with the corresponding tests refined in each iteration.
Figure 3: The RTTS Test Automation Process (TAP) for end-to-end testing
Component-Level Testing
Obviously, individual components must be developed before they can be "assembled" into a system. Because components are available for testing early, end-to-end testing in TAP begins with component testing. In component testing, as the environment is built up, appropriate tests are run against the various individual components. Both functional and performance tests are valuable in the component testing phase, helping to diagnose defects before and during the build-out of the full environment.
Functional Testing in Component Testing
Component-level functional testing verifies the transactions executed by each component. This includes validation of any data transformations and of the business logic of transactions processed by the component or system. As application functionality is developed, infrastructure testing verifies and quantifies the flow of data through the environment, exercising both function and performance along the way. Data integrity must be verified whenever data passes between components. For example, XML testing verifies XML data content transaction by transaction and, as needed, validates the formal XML structure (the metadata structure). For component testing, automated, scalable testing tools such as IBM Rational Robot can greatly reduce the time and effort spent on GUI testing and on functional testing of non-GUI components. The Rational Robot scripting language supports calls to external COM DLLs and is an ideal tool for testing non-GUI objects. In addition, the new Web and Java testing features included with Rational Suite TestStudio and Rational TeamTest provide added capabilities for testing the J2EE architecture and for writing test scripts in the Java language.
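The transaction-by-transaction XML content check described above can be sketched in a few lines. This is a minimal illustration, not a Rational tool script; the element names and expected values are hypothetical:

```python
# Illustrative sketch: component-level XML content verification.
# The sample response and expected values are hypothetical.
import xml.etree.ElementTree as ET

def verify_order_response(xml_text, expected):
    """Check that an XML transaction response carries the expected data
    and that the required elements are present (a minimal structure check)."""
    root = ET.fromstring(xml_text)  # raises ParseError on malformed XML
    errors = []
    for tag, want in expected.items():
        node = root.find(tag)
        if node is None:
            errors.append(f"missing element <{tag}>")
        elif node.text != want:
            errors.append(f"<{tag}>: expected {want!r}, got {node.text!r}")
    return errors

# A captured response for one transaction (hypothetical):
response = "<order><id>1001</id><status>OK</status><total>59.90</total></order>"
print(verify_order_response(response, {"id": "1001", "status": "OK", "total": "59.90"}))
# → [] (no errors for this transaction)
```

In practice such a check would run once per captured transaction, with the expected values drawn from the test data that produced the request.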
Component-Level Scalability and Performance Testing
In parallel with these functional tests, component-level scalability testing examines each component in the environment to determine its transaction (or capacity) limits. Once enough application functionality exists to create business-relevant transactions, transaction characterization testing is used to determine a quantitative description of a business transaction, including its consumption of bandwidth and of back-end CPU and memory. Resource testing extends this concept to multi-user tests, to determine the total resource consumption of applications, subsystems, or modules. Finally, configuration testing can determine which hardware, operating system, software, network, database, or other configuration changes will optimize performance. As with functional testing, effective automated tools such as those provided by Rational Suite TestStudio and Rational TeamTest can greatly simplify scalability and performance testing. Here, the ability to create, schedule, and drive multi-user tests and to monitor resource utilization is the basis for completing resource testing, transaction characterization testing, and configuration testing effectively.
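The core of transaction characterization, repeatedly driving one business transaction and quantifying what it costs, can be sketched as follows. This only measures wall-clock latency; a real tool would also sample bandwidth, CPU, and memory. The `place_order` function is a hypothetical stand-in for a real transaction:

```python
# Illustrative sketch of transaction characterization: time a business
# transaction repeatedly and report its average and worst latency.
import statistics
import time

def place_order():
    time.sleep(0.001)  # hypothetical stand-in for real transaction work

def characterize(transaction, runs=50):
    """Run the transaction `runs` times and summarize the latency samples."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        samples.append(time.perf_counter() - start)
    return {"avg_s": statistics.mean(samples), "max_s": max(samples)}

profile = characterize(place_order)
print(f"avg {profile['avg_s'] * 1000:.2f} ms, worst {profile['max_s'] * 1000:.2f} ms")
```

Resource testing then repeats the same measurement with many concurrent virtual users, so the per-transaction profile can be scaled up to total resource consumption.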
System-Level Testing
After the system is "assembled", testing of the overall environment can begin. Again, end-to-end architecture testing needs to verify both the functionality and the performance/scalability of the entire environment.
System-Level Functional Testing
Integration is a top concern here. Integration testing checks whether the overall system is correctly integrated from a data point of view. That is, do the hardware and software components that need to interact with one another communicate properly? If so, is the data passed between them correct? Wherever possible, data should be accessed and verified at intermediate stages as it moves between system components. For example, data should be verified when it is written to a staging database, or while it sits in a message queue before being processed by the target component. Access to data at these component boundaries adds an important extra dimension to data integrity verification and to the characterization of data problems: if a data error is detected between two transfer points, the defective component must lie between those two points.
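The boundary check described above amounts to diffing the records captured on each side of a hand-off. A minimal sketch, with hypothetical data standing in for a source database and a downstream queue:

```python
# Illustrative sketch: verify data integrity at a component boundary by
# comparing records captured before and after a hand-off (for example,
# source database vs. message queue). All data here is hypothetical.
def diff_records(source, target, key="id"):
    """Return (ids missing from the target, ids whose fields differ)."""
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    missing = sorted(set(src) - set(tgt))
    changed = sorted(k for k in src.keys() & tgt.keys() if src[k] != tgt[k])
    return missing, changed

before = [{"id": 1, "qty": 2}, {"id": 2, "qty": 5}, {"id": 3, "qty": 1}]
after = [{"id": 1, "qty": 2}, {"id": 2, "qty": 4}]  # id 3 lost, id 2 altered
print(diff_records(before, after))  # → ([3], [2])
```

Any non-empty result localizes the defect to whichever component sits between the two capture points.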
System-Level Scalability and Performance Testing
You can create tests to answer the following questions about the scalability and capacity of the environment:
- How many users can access the environment while the system still maintains an acceptable response time?
- Does my high-availability architecture work as designed?
- What happens when a new application is added, or an application already in use is updated?
- How should the system be configured to support the expected number of users at initial rollout? Six months later? A year later?
- If only partial functionality is available, is the design still sound?
Answers to these questions can be obtained through a range of testing techniques, including scalability/load testing, performance testing, configuration testing, concurrency testing, stress and volume testing, reliability testing, and failover testing.
In terms of system capacity, overall environment testing usually begins with scalability/load testing. This approach gradually increases the load on the target environment until a performance requirement, such as maximum response time, is exceeded or a specific resource is exhausted. These tests determine the upper limits of transaction processing and user capacity, and they are often combined with other testing techniques to optimize system performance.
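The ramp-up loop at the heart of such a test can be sketched as follows. Here `simulate_response_ms` is a hypothetical toy model; a real load tool would measure actual response times while driving virtual users:

```python
# Illustrative sketch of a scalability/load test: increase the simulated
# user count step by step until the measured response time exceeds the
# agreed limit, then report the last load level that passed.
def simulate_response_ms(users):
    # Hypothetical toy model: latency (ms) grows linearly with load.
    # In a real test this would be a measurement, not a formula.
    return 200 + 10 * users

def find_capacity(max_response_ms=1000, step=10, ceiling=1000):
    users = step
    while users <= ceiling:
        if simulate_response_ms(users) > max_response_ms:
            return users - step  # last load level that met the requirement
        users += step
    return ceiling

print(find_capacity())  # → 80 with this toy model
```

The same loop structure also supports stopping on resource exhaustion (CPU, memory, connections) instead of response time, which is the other termination condition mentioned above.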
Performance testing is related to scalability/load testing: it exercises specific business scenarios to determine whether the environment meets its requirements under the configured load and transaction mix (Figure 4).
Performed in parallel with component-level configuration testing, system-level configuration testing provides trade-off data for specific hardware and software settings, as well as the metrics and other information required for effective resource allocation.
Figure 4: Performance testing: does the system perform as required under a specific user load?
Concurrency testing identifies and measures the levels of locking and deadlock in the system, as well as the use of single-threaded code and semaphores. From a technical point of view, concurrency testing can be considered functional testing. However, it is often performed alongside scalability/load testing, because it requires multiple users or virtual users to drive the system.
Figure 5: Concurrency testing can identify deadlocks and other concurrent-access problems
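One simple mechanism behind such checks is acquiring locks with a timeout, so that an inconsistent lock ordering surfaces as a reported problem rather than a hung test. A minimal sketch with hypothetical lock and worker names:

```python
# Illustrative sketch of a concurrency check: take two locks with a
# timeout so potential deadlock is reported instead of hanging the test.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
problems = []

def worker(first, second, name):
    with first:
        # acquire() with a timeout lets the test detect lock contention
        if second.acquire(timeout=0.5):
            second.release()
        else:
            problems.append(f"{name}: possible deadlock")

# Both workers take the locks in the SAME order, so no deadlock occurs;
# reversing the order for one worker is the classic deadlock recipe.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "t1"))
t2 = threading.Thread(target=worker, args=(lock_a, lock_b, "t2"))
t1.start(); t2.start(); t1.join(); t2.join()
print(problems)  # → []
```

Real concurrency tests drive many virtual users this way while instrumenting the system for lock waits and single-threaded bottlenecks.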
Stress testing (Figure 6) exercises the system to determine whether its behavior changes when it reaches saturation (of CPU usage or memory consumption, for example) and whether saturation adversely affects the system, the application, or the data. Volume testing is related to stress and scalability testing; it determines the transaction volume that the overall system can process. Together, stress and volume tests show the system's resilience when handling sudden surges in traffic or sustained high-volume activity, which may expose failures caused by memory leaks or queue overflows.
Figure 6: Stress testing can determine the effects of high-volume use
Once the application environment is up and its performance has been tuned, a long-term reliability test can be run at 75% to 90% of environment utilization to uncover any problems related to extended running time. In environments that employ redundancy and load balancing, failover testing (Figure 7) analyzes the theoretical failure process, then exercises and measures the overall failover process and its impact on end users. In essence, the failover test answers the question: "If a specific component fails, can users continue to work with minimal interruption?"
Figure 7: Failover testing: what happens if component X fails?
Finally, if the environment uses third-party software, or components supplied by a hosting vendor or other external sources, SLA (Service Level Agreement) testing can be used to verify the end-user response times and inbound/outbound data volumes specified in the contract between the two parties. A typical agreement specifies the activity volume within a given time window and a specific maximum response time.
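Checking such an agreement reduces to comparing measured figures against the contracted thresholds. A minimal sketch, with hypothetical thresholds and measurements:

```python
# Illustrative sketch of an SLA check: given the response times measured
# in one time window, verify the contracted maximum response time and
# minimum transaction volume. The thresholds here are hypothetical.
def check_sla(response_times_s, max_response_s=2.0, min_volume=100):
    violations = []
    worst = max(response_times_s) if response_times_s else 0.0
    if worst > max_response_s:
        violations.append(f"response time {worst:.2f}s exceeds {max_response_s}s")
    if len(response_times_s) < min_volume:
        violations.append(f"volume {len(response_times_s)} below {min_volume}")
    return violations

window = [0.4] * 150 + [1.8]  # 151 transactions measured in one window
print(check_sla(window))  # → [] (this window meets the hypothetical SLA)
```

Run on every monitoring window, such a check supports the continuous monitoring of external sources recommended below.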
Once external data or software is in place, it is wise to monitor these sources continuously, so that remedial action can be taken quickly when a problem occurs, minimizing the impact on end users.
As with component-level scalability testing, Rational Suite TestStudio, Rational TeamTest, and similar tools provide advanced multi-user testing capabilities that can be used to perform most or all of the scalability and performance tests described above efficiently.
A practical example
Perhaps an example is the best way to explain. Consider the following scenario:
An eRetailer builds a public Web bookstore whose content layer uses four types of Web services. The first service provides the catalog, including titles, descriptions, and authors. The second provides current inventory information for all products. The third is the pricing server, which provides product pricing, supplies shipping and tax information based on the purchaser's location, and completes the transaction. The last service stores user profiles and purchase histories.
The presentation layer converts requests entered through the GUI into XML and sends them to the appropriate content server; the XML responses are then converted back into HTML by the presentation layer and served to the user session. The services in the content layer update one another as needed (see Figure 8). For example, when a user's purchase history changes, the pricing server must update the corresponding user-profile service.
Figure 8: Access points of a typical eRetailer application
For this system, the starting point of an end-to-end testing strategy is to test both the application function and the scalability/load of each service in the content layer. XML requests are submitted to each type of content service, and the corresponding XML response documents are captured to evaluate either their data content or their response times. As the content services are integrated into the system one by one, functional and scalability/load testing can also be performed against the integrated system by submitting transactions through the Web server. Transactions can then be verified throughout the site, both functionally (using SQL queries) and for scalability/load.
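The SQL verification step, confirming that a transaction submitted at the front end actually landed in the back-end store with the right data, can be sketched as follows. SQLite stands in for the real database, and the table, column names, and order data are hypothetical:

```python
# Illustrative sketch: verify a submitted transaction end to end with a
# SQL query against the back-end store. SQLite stands in for the real
# database; the schema and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, title TEXT, qty INTEGER)")

# Simulate the order that the front end submitted through the Web server.
db.execute("INSERT INTO orders VALUES (?, ?, ?)", (1001, "Testing Web Apps", 2))
db.commit()

# Functional verification: did the transaction land with the right data?
row = db.execute("SELECT title, qty FROM orders WHERE id = ?", (1001,)).fetchone()
assert row == ("Testing Web Apps", 2), f"data mismatch: {row}"
print("order 1001 verified:", row)
```

The same query-and-compare step can be applied at each intermediate store the transaction passes through, mirroring the boundary checks described under integration testing.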
During system development, individual tests applied at all access points can be used to bring the various services into line so that they run properly across the whole system, in terms of both data content (function) and performance (scalability). When a problem is discovered at the front end (for example, through a browser), the test cases and data originally used to test the individual components help locate the fault quickly.
Advantages of Network Modeling
As part of the design process, modeling different network architectures can extend the advantages of end-to-end testing, whether before hardware acquisition or during the initial test phase, because it helps produce more effective network designs with lower error rates. Modeling the network infrastructure before deployment helps identify performance bottlenecks and errors in routing tables and configurations. In addition, application transaction traces captured during testing can be fed into the model to identify and isolate potential problems both in the application's "chattiness" and in the infrastructure.
Conclusion
End-to-end testing tests and analyzes the computing environment from an overall quality perspective. The scalability and functionality of each component are examined in both individual and integrated tests during development, providing quality assessment from an early stage. This yields diagnostic information that makes development more effective and gives a high level of quality assurance for the system release. End-to-end testing thus provides a comprehensive, reliable solution for managing the complexity of today's architectures and distributed computing environments.
Of course, because it requires extensive testing and analysis, end-to-end testing demands considerable expertise and experience to organize, manage, and carry out. From a business perspective, however, organizations that perform end-to-end testing of their applications gain a high degree of assurance about application software, system performance, and reliability. In the end, these organizations benefit from the quality improvements: better customer relationships, lower operating costs, and increased revenue.
Over the past six years, as an IBM Rational partner, RTTS has developed and refined its own end-to-end testing methodology, working with hundreds of customers to ensure the functionality, reliability, scalability, and network performance of their applications. You are welcome to visit the RTTS website at www.rttsweb.com.
References
- For more information, see the original article on the developerWorks global site.
About the authors:
Jeffrey Bocarsly, RTTS, Manager, Automated Functional Testing
Jonathan Harris, RTTS, Manager, Scalability Testing
Bill Hayduk, RTTS, Head of Professional Services