Simplifying J2EE-based projects using Rational tools, Part 9: System construction and testing


This article is part of a series demonstrating how Rational tools can be used in distributed, J2EE-based projects. The series comprises the following parts:

    • Part 1: Project introduction and high-level planning
    • Part 2: Risk management and requirements management
    • Part 3: Model creation and access control; requirements analysis
    • Part 4: Refining use cases, generating reports, and selecting tools and technologies
    • Part 5: Architecture and design
    • Part 6: Detailed design; early development; round-trip engineering; early unit testing
    • Part 7: Continued development; early builds; demonstrations
    • Part 8: Unit testing strategy; functional testing; GUI test scripts
    • Part 9: System construction and testing; defect tracking; product delivery
    • Part 10: Project completion; conclusions; future work

In this series, we play the role of a fictitious software company, Lookoff Technologies Incorporated. Our customer, Audiophile Speaker Design, Inc. (ASDI), has hired us to fulfill their initial IT needs; for background, see Part 1.

So far, we are nearing the end of the first phase of the ASDI project. ASDI has seen the series of system demos we provided and is very satisfied with the product. (In fact, this gives us one concern: the first-phase system is already so capable that we worry ASDI may postpone or cancel the next phase of the project.) The final contributor to customer satisfaction is that our system tests and acceptance tests showed the requirements were fully met.

Part 9 snapshot

The tools and technologies demonstrated in this part are as follows:

  • Rational ClearQuest: tracks and manages system test problems and defects during the integration and test cycles
  • Rational SiteLoad: load-tests our Web application by simulating a set number of concurrent users accessing the system
  • Rational Robot: records and plays back VU scripts that load-test the B2B interface

Products created or updated:

  • ClearQuest defect database: created to track defects; shared by all team members and stored on accessible network storage
  • SiteLoad and Robot test scripts: created for automated test execution

Winding down development

At this point, our coding work has tapered off significantly. Our team is mainly focused on small changes as the product is refined. When the integration and test (I&T) team discovers software defects, they file and prioritize them in the Rational ClearQuest database, which is based on its defect-tracking schema. These defect reports are reviewed by the engineering team. The team leads and the project engineer typically decide defect priorities and maintain a plan describing who will fix what for a given build.

Build frequency

The frequency of executing a complete system build (building the system from scratch in a clean environment) has increased significantly as we draw closer to the end of the Phase 1 project. We initially planned to build once a month, but these builds are now sometimes performed once a week. For large projects, or for teams lacking highly skilled developers, the daily overhead of creating a clean environment, building the system from the source code in the repository, and then testing it is not feasible. However, thanks to the tight integration of the tools we use, our well-documented build process, and the Rational testing tools that let us complete test execution quickly, we were able to increase the build frequency to once a week.

Automated test scripts in particular make such frequent builds practical. At a minimum, we run the scripts that exercise the parts of the system affected by the defects we have fixed. Usually we can go further and run scripts covering most or all of the system. We are fortunate that our system's modules can be tested through automated scripts; the Rational testing tools suit the technologies we use very well, whereas some other combinations of technologies can make test scripts challenging or even impossible to use.
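
The idea of scoping a regression run to the modules affected by a fix can be sketched in code. The module and script names below are invented for illustration; a real project would derive this mapping from its own traceability data rather than hard-coding it.

```java
import java.util.*;

// Hypothetical sketch: given the modules touched by a defect fix, pick the
// automated test scripts to replay. Names are illustrative only.
public class RegressionScope {
    private static final Map<String, List<String>> SCRIPTS_BY_MODULE = Map.of(
        "search",   List.of("partsearch_smoke", "partsearch_boundary"),
        "userinfo", List.of("userinfo_crud"),
        "b2b",      List.of("b2b_command_gateway"));

    // Union of scripts for every affected module, in a stable sorted order.
    public static List<String> scriptsFor(Collection<String> affectedModules) {
        Set<String> selected = new TreeSet<>();
        for (String module : affectedModules) {
            selected.addAll(SCRIPTS_BY_MODULE.getOrDefault(module, List.of()));
        }
        return new ArrayList<>(selected);
    }

    public static void main(String[] args) {
        // A fix touching search and the B2B gateway selects both script groups.
        System.out.println(scriptsFor(List.of("search", "b2b")));
    }
}
```

Running the full suite remains the fallback when the affected modules are unclear.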

Integration and test (I&T) team involvement

By the time we reached the system build stage, the I&T team was fully engaged and leading the process. We brought the I&T team onto the project full time during the development cycle (in Part 6), so they were ready. On a previous project, we brought the I&T team in late, almost at the end of the development cycle, and that caused problems. We eventually realized that the I&T team needs ramp-up time, just like the technical members of the other project teams. Although the I&T team was slower than the engineering team at assembling the system for our earlier builds, this was part of the learning process we expected of them, and letting them build freely meant the engineering team could continue its development work.

The I&T team defines the expected goals of each build based on their understanding of the engineering team's progress. They discuss the build with the project engineer to ensure it meets their expectations. For example, if some components or subsystems are not ready, the build is not a candidate for functional testing; assembling it, checking that all the code compiles, and verifying that interfaces are consistent with third-party tools (the JDK, libraries, and purchased tools) and with the other subsystem teams' work is then simply a useful exercise. For builds of a mature system, the team performs load tests or functional tests targeting specific aspects of the system.

Toward the end of the development cycle, the I&T lead is a very important role on the project team, arguably even more important than the team leads or the project manager. The I&T lead creates the test execution plan, understands the weak and strong areas of the system, and constantly monitors defect analysis data. He also manages full traceability among requirements, components, threads, test scripts, and defects, which helps him plan and prioritize his team's actions.

Fixing defects

Fixing defects is not a trivial task. Each defect raises questions such as the following:

    • Is this really a defect?
    • What type of defect is it? It can fall into any of these categories: cosmetic, undesirable, data loss, documentation, operations, installation, loss of feature, performance, stability, unexpected behavior, or unfriendly behavior.
    • Do we need to fix it now or postpone it?
    • Are the related requirements incorrect?
    • How can we fix this defect?
    • How fast should we fix it?
    • If it is fixed, what other parts of the software will be affected?
    • Who should fix it?
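
The triage fields above can be made concrete with a minimal sketch of a defect record. This class is purely illustrative, not ClearQuest's actual schema; the category names follow the article's list, while the state names and fields are assumptions.

```java
import java.time.Instant;

// Illustrative sketch of a defect record mirroring the triage questions above.
public class Defect {
    public enum Category {
        COSMETIC, UNDESIRABLE, DATA_LOSS, DOCUMENTATION, OPERATIONS,
        INSTALLATION, LOSS_OF_FEATURE, PERFORMANCE, STABILITY,
        UNEXPECTED_BEHAVIOR, UNFRIENDLY_BEHAVIOR
    }

    // Hypothetical lifecycle states; a real tracker defines its own workflow.
    public enum State { SUBMITTED, ASSIGNED, FIXED, CLOSED, POSTPONED }

    public final String id;
    public final Category category;
    public final int priority;                     // 1 = fix now, larger = can wait
    public final Instant submitted = Instant.now();
    public State state = State.SUBMITTED;
    public String owner;                           // who should fix it

    public Defect(String id, Category category, int priority) {
        this.id = id;
        this.category = category;
        this.priority = priority;
    }

    public void assignTo(String engineer) {
        this.owner = engineer;
        this.state = State.ASSIGNED;
    }

    public static void main(String[] args) {
        Defect d = new Defect("DEF-101", Category.PERFORMANCE, 1);
        d.assignTo("search-team-lead");
        System.out.println(d.id + " -> " + d.state + " (" + d.owner + ")");
    }
}
```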

Whenever the I&T team finds a defect in the system, they submit it to the ClearQuest database, which captures many of the details listed above. They connect to the database through ClearQuest and fill in the defect submission form, as shown in Figure 1.

Figure 1: Submitting a defect

The defect database is shared by all engineering team members and can be accessed over the network at any time. For example, the owner of the defect submitted in Figure 1 works on the product's search capability and is in the same group as the member responsible for user information on the search team. The I&T team uses a ClearQuest query to keep track of any assigned, open test problems. Figure 2 shows the query results for the search team; the defect submitted in Figure 1 appears in the result set. The results, which always reflect the latest state of the database as updated by the I&T team, are filtered to include only the search team's defects and sorted by priority and submission time.

Figure 2: Filtered defect query results

Other teams query ClearQuest for their respective areas in a similar way and receive appropriately filtered results. Only the I&T team, the project engineer, and the team leads can see the complete list of defects across the project.

The Actions button in Figure 2 provides options to modify, assign, close, duplicate, postpone, or delete the currently selected query result. Different operations are available depending on a team member's role:

    • Project team members can only modify or assign defects. For example, the search team lead can assign a defect to a specified member of his team.
    • The I&T team can perform all of the actions: submitting defects to the database and modifying, assigning, closing, duplicating, postponing, or deleting them.
    • The project engineer also has permission to perform all of the actions, although closing is always done by the I&T team.

System testing

System testing uses the recorded test scripts to exercise the entire system. This testing is important not only for the customer's acceptance test, but also for testing the system thoroughly and providing insight into defects in the test scripts themselves. No matter how closely we track changes, there are often small changes beyond our expectations that conflict with, or affect, other code in unexpected ways.

We usually build the system in a clean environment established by the I&T team, so the build documentation gets thoroughly tested as well. If any errors are found in the build documentation, a problem report (in the "documentation" category) is submitted to the defect database along with any test defects identified.

From the unit testing phase (discussed earlier in this series) to the present, we have tested both the functional and the non-functional requirements of the system; now we are investing more energy in the non-functional requirements than we did in the earlier test phases. The main non-functional areas we test are usability and performance (load testing), which we discuss below.

Usability recommendations

We called on our sole usability expert. Although she was involved in the early user interface planning and mock-up work to help with the human-machine interface, she had not previously worked with us as part of the I&T team. Her current job is to work with the system as a user, identify usability issues, and submit them to the defect database (usually in the "unfriendly behavior" category).

Some usability issues must be postponed because they are beyond the scope of the first phase or are very costly to address. However, many small usability problems were discovered that are easy to fix and would otherwise confuse customers. This is among our most cost-effective testing activities, because from the customer's perspective the product improves greatly with a few small code changes. The usability recommendations include new error messages, layout improvements, adjustments to button titles and menus, documentation rearrangement, and screen workflow modifications.

Load Testing

Our system has no demanding performance requirements, but we want it to remain usable under maximum load. We did some load testing early on, but this type of testing peaks near the end of the development cycle. We want to ensure that the system exceeds ASDI's expectations: we hope to win follow-on work in the form of a second project phase, and we do not want performance problems to stand in the way. We load-tested two parts of the system: the Web application interface, and the B2B interface that runs through the SSL/XML-based command gateway (introduced in Part 5).

Web Load Testing

For Web load testing, we use Rational SiteLoad. This tool lets us record scripts consisting of a series of Web transactions that we perform, and then replay those steps as multiple virtual users. We checked the expected load patterns with ASDI to determine the number of users who would access the Web application simultaneously. We decided to test with a load of 20 users.

Using SiteLoad, we can easily simulate 20 concurrent users of the system and accurately gather the related performance statistics. When we start SiteLoad, it launches our browser and prompts us to create a new test or run an existing one (see Figure 3).

Figure 3: SiteLoad home screen
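
To make the idea of concurrent virtual users concrete, here is a rough, self-contained sketch of what a load tool automates: N simulated users repeatedly "load a page" in parallel while each response time is recorded. This is not SiteLoad's implementation; the page load is simulated with a sleep, and a real test would issue HTTP requests instead.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of a load driver: `users` concurrent sessions each perform
// `iterations` simulated page loads; every latency (ms) is collected.
public class VirtualUserLoad {
    static long loadPageOnce() throws InterruptedException {
        long start = System.nanoTime();
        // Stand-in for an HTTP round trip: sleep 10-50 ms of "server work".
        Thread.sleep(ThreadLocalRandom.current().nextInt(10, 50));
        return (System.nanoTime() - start) / 1_000_000;   // elapsed ms
    }

    public static List<Long> run(int users, int iterations) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<List<Long>>> sessions = new ArrayList<>();
        for (int u = 0; u < users; u++) {
            sessions.add(pool.submit(() -> {
                List<Long> latencies = new ArrayList<>();
                for (int i = 0; i < iterations; i++) latencies.add(loadPageOnce());
                return latencies;
            }));
        }
        List<Long> all = new ArrayList<>();
        for (Future<List<Long>> f : sessions) all.addAll(f.get());
        pool.shutdown();
        return all;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = run(20, 5);   // 20 virtual users, 5 pages each
        System.out.println("samples collected: " + latencies.size());
    }
}
```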

When we choose to record a new script, SiteLoad opens a Java-based browser and records all the actions we take in it. For example, when we browse to our partsearch.jsp page in this browser, SiteLoad loads the page from the server (as shown in Figure 4) and records our actions, including any input data values and button clicks. We designed this particular test to perform many database queries with varying parameters. It is clearly a gentle test, because the database can cache queries; other tests are more rigorous and challenging.

Figure 4: SiteLoad recording browser actions

For each script we record, we can also set common performance requirements for the test. We decided that when the script exercising the partsearch.jsp page is played back, we want at least 90% of the pages to load in four seconds or less (see Figure 5). Although this is not a demanding performance requirement, it is sufficient for the usability and overall quality goals of our system.

Figure 5: Setting SiteLoad performance requirements
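
The pass/fail rule behind a requirement like "90% of pages in four seconds or less" is simple to state in code. This is a sketch of the rule only, assuming latencies have already been collected, not how SiteLoad evaluates it internally.

```java
import java.util.List;

// Checks whether a quota (e.g. 0.90) of the latency samples fall at or
// under a page-load budget in milliseconds (e.g. 4000 ms).
public class PercentileCheck {
    public static boolean meetsBudget(List<Long> samplesMs, double quota, long budgetMs) {
        long within = samplesMs.stream().filter(ms -> ms <= budgetMs).count();
        return within >= Math.ceil(quota * samplesMs.size());
    }

    public static void main(String[] args) {
        // 9 of these 10 sampled load times are within the 4000 ms budget,
        // so the 90% requirement is met.
        List<Long> samples = List.of(900L, 1200L, 2500L, 3900L, 3100L,
                                     1700L, 800L, 2200L, 4100L, 3600L);
        System.out.println(meetsBudget(samples, 0.90, 4000));
    }
}
```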

Figure 6 shows how we configure SiteLoad to simulate 20 concurrent users repeatedly executing the recorded partsearch.jsp actions. SiteLoad can perform more complex performance modeling to help us find our system's "performance wall," but we chose to run a maximum test of 20 concurrent users on our first attempt. If we had encountered problems, we would have reduced the initial number of users to 5 and added a user every minute or two. We could also have set criteria for terminating the test, but we did not, because we visually monitor the test run to understand how our system behaves.

Figure 6: Setting SiteLoad user characteristics
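
The ramp-up fallback described above (start at 5 virtual users, add one per interval until the 20-user target) reduces to a one-line schedule function. The interval length (one or two minutes in our plan) is abstracted here as "elapsed intervals"; this is an illustration, not a SiteLoad API.

```java
// Computes how many virtual users are active after a number of ramp-up
// intervals, starting from `start` users and capped at `target`.
public class RampUp {
    public static int usersAt(int elapsedIntervals, int start, int target) {
        return Math.min(target, start + elapsedIntervals);
    }

    public static void main(String[] args) {
        for (int t = 0; t <= 20; t += 5)
            System.out.println("interval " + t + ": " + usersAt(t, 5, 20) + " users");
    }
}
```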

While the test is running, SiteLoad displays and constantly updates the statistics shown in Figure 7. In this example, a bar chart shows the performance results for our test script; almost all pages loaded and executed well within our 4-second performance limit. Using the options in the menu bar near the top of the screen, we can view more detailed test reports, such as page hits, CPU load, and average load time.

Figure 7: SiteLoad test results

B2B Load Testing

For SSL/XML B2B load testing, we use Rational Robot to record virtual user (VU) scripts. We enter the commands we want while Robot monitors the session and generates a complete script. This script is very different from the GUI scripts Robot generates (discussed in Part 8). Unlike GUI scripts, VU scripts record low-level information about the data sent and received. By running the VU script from multiple machines, we can simulate concurrent B2B client sessions against the system. ASDI anticipates no more than two concurrent sessions, so we tested somewhat beyond this requirement to ensure good system performance.
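
Because a VU script replays protocol-level traffic rather than driving a GUI, the payload it sends is just structured text. The sketch below builds the kind of XML command such a session might transmit; the element and attribute names are invented for illustration and are not ASDI's actual protocol.

```java
// Hypothetical illustration of an XML command for a B2B gateway session.
// A VU script replays messages like this over SSL instead of clicking a GUI.
public class B2bCommand {
    public static String orderStatusCommand(String partnerId, String orderId) {
        return "<command version=\"1.0\">"
             + "<partner id=\"" + partnerId + "\"/>"
             + "<orderStatus orderId=\"" + orderId + "\"/>"
             + "</command>";
    }

    public static void main(String[] args) {
        System.out.println(orderStatusCommand("ASDI-42", "ORD-1001"));
    }
}
```

In a real script, values such as the order ID would be parameterized so each virtual user sends distinct data.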

Test readiness review

During the last two weeks of our schedule, the engineering team completed system testing with all requirements passing, and the I&T team closed out all problems, aside from some minor issues that we discussed with the customer and agreed to postpone. Our next milestone was the test readiness review (TRR). We performed two TRRs: one internal and one with the customer. The internal TRR checked items such as the following:

    • Have all the documents been completed?
    • Has all the code been checked in and tested?
    • Are all code review and unit test checklists archived for future verification?
    • Are there any open test problems that have not been postponed?
    • Are all change requests closed or merged into the requirements?
    • Are all Rational Rose models documented and suitable for delivery?
    • Have all aspects of the system been demonstrated to the customer?

In addition to checking the items above, we also rehearsed and refined a system demonstration. That demonstration serves as the final presentation of the product at the external TRR with the customer. We are proud of the product we built and want to ensure that all key features of the system are shown, as we know ASDI's senior management will attend the external TRR.

During the external TRR, we followed the same agenda as in the internal TRR. We reviewed the checklist with ASDI to show that everything was finished, and closed with the demo. Not surprisingly, the final demonstration prompted more ideas, and we noted them for future consideration.

Acceptance testing

For ASDI to agree that the first phase of the project has been completed successfully, the system must pass some final acceptance tests. We had reason to believe the system would pass, because we had built a set of scripts that exercised the system end to end and verified that all requirements were covered. These scripts were created with Rational Robot and thoroughly reviewed by ASDI. The only thing we could imagine preventing a successful acceptance test was a last-minute change by the engineering team affecting other code in unexpected ways. However, before we started the acceptance test we received a surprise: at the external TRR, ASDI told us they wanted us to perform the acceptance test manually. We believed our intent to use the test scripts was clear in the acceptance plan, but we now realized that the plan's wording was vague.

When we stated at the external TRR that the CAT (customer acceptance test) would be quite short, since we could execute the scripts and check their results very quickly, ASDI indicated they wanted to see every test step executed one by one, so they would know exactly what was going on. Although we would have preferred otherwise, this seemed fair and feasible to us. We had documented all the test procedures and plans behind our test scripts, and had even maintained the test plan as the scripts evolved, so manual testing was not a problem for us.

What we were not prepared for was how long manual acceptance testing would take. Performing the tests by hand from our documented test procedures made us realize just how much time automated testing had been saving us.

We found that some required details had been lost in our test documentation; we did not always have enough information to make a test clear and repeatable. We also realized that we had sometimes updated a test script without modifying the test plan. After making these small changes to the test plan, we delivered it to the customer for a quick review; we agreed that the changes were minor and did not require another TRR.

The acceptance test took place in our development environment. It began with a clean build following the build documentation, and then the test procedures were executed. The tests took about one and a half to two days. Three members of our team took part (the I&T lead, the project engineer, and a team lead), along with three members from ASDI (QA, the project manager, and the technical lead).

We are proud that no software defects emerged during the acceptance test. There were just some minor issues, mostly in the "documentation" and "unfriendly behavior" categories. All the requirements were met, and the customer was very happy at the end of the test.

Summary

This may be the first project on which our team did not have to put in long hours at the end of development and testing. Contributing factors included better tools, familiarity with the technology, and an engineering team that had worked together since early in the project.

The testing process in particular was a great success. Perhaps the most impressive aspect of the Rational tools is their testing functionality. This was the first time we had introduced automated testing, and we were surprised at how well it worked. The biggest pain of automated testing is modifying the scripts as requirements change; however, that responsibility fell to the integration and test team, so it did not affect the engineering team's work.

Planning for the future

We have now installed the software in the customer's environment and given them some time to evaluate the system. Although there is no formal warranty phase, ASDI has been submitting issues to the ClearQuest database. Finally, an agreement must be reached on whether to proceed with the second phase of the project.

Major Risks

At this point, we feel there are no major risks. We are confident that all major problems have been solved and that we are well prepared for any issues that may arise in the remainder of the project.

About the author

Steven Franklin has a broad background in software design, architecture, and engineering processes. These experiences are often used in large distributed information management and control systems. He has been using Rational tools since 1997 and is primarily interested in XML, J2EE, wireless, and software engineering technologies. You can contact Steven via steve@sfranklin.net.
