Overall web application testing

With the increasing popularity of the Internet, more and more large applications are being built on the B/S (browser/server) structure, and testing them has become an urgent need. Many testers have written to me asking how to perform B/S testing; with work being busy, their questions were a headache for me, because there was no overall overview of the Web testing process to point them to. I hope this article helps you understand how large web applications are tested.

Functional testing of a B/S application is relatively simple; the key is doing a good job of performance testing. At present, many testers think it is enough to run a few testing tools and prove that the product reaches its performance targets. Testing that only proves success has little value; the point is to discover performance defects, locate the problems, and get them solved. That is what testing is for.

First, let's analyze how to perform Web testing from two aspects. In terms of technical implementation, the typical B/S structure, whether .NET or J2EE, is a multi-layer architecture consisting of an interface layer, a business logic layer, and a data layer. In terms of process, testing first discovers, analyzes, and locates problems, and then the developers solve them. So how should we test a B/S structure?

Finding the problem is the first topic. Before conducting a Web test, you need some documents, such as the product functional specification and the performance requirement specification. They may not be complete, but they must exist; knowing the test objectives is basic. Yet I often see testing begin while nobody can say what performance targets the system must reach. Here is a brief introduction to the performance metrics used in testing:

1. General metrics (required for both Web application servers and database servers):

* Processor Time: CPU usage of the server. As a rule of thumb, when the average reaches 70%, the service is close to saturation;

* Memory Available MBytes: the amount of available memory. If it keeps dropping during the test, watch out for a serious memory leak;

* PhysicalDisk % Disk Time: the percentage of time the physical disk spends servicing read/write requests;

2. Web Server metrics:
  
* Avg RPS: the average number of requests per second = total requests / total elapsed seconds;

* Avg Time To Last Byte Per Iteration (msec): the average time per iteration until the server returns the last byte; some people confuse this metric with Avg RPS;

* Successful Rounds: the number of successful requests;

* Failed Rounds: the number of failed requests;

* Successful Hits: the number of successful hits;

* Failed Hits: the number of failed hits;

* Hits Per Second: the number of hits per second;

* Successful Hits Per Second: the number of successful hits per second;

* Failed Hits Per Second: the number of failed hits per second;

* Attempted Connections: the number of attempted connections;

3. Database Server metrics:

* User Connections: the number of user connections to the database;

* Number of Deadlocks: the number of database deadlocks;

* Buffer Cache Hit Ratio: the hit ratio of the database buffer cache;

The metrics above are only some common ones and play a guiding role; you must adjust them for each application. For example, if the program is built on .NET technology, you should add some .NET-specific counters. For details on these metrics, see System Monitor in Windows and the documentation of LoadRunner and ACT. Choosing the metrics well is very important for identifying problems and will help you spot qualitative errors. I will not analyze the stress-testing tools themselves in much depth; there are many of them. Popular ones include LoadRunner, ACT, WAS, and WebLoad, each with its own scope of use. Among them I consider LoadRunner the most comprehensive: it supports many protocols and can handle complex stress tests. WAS and ACT have better support for Microsoft technology: WAS supports distributed cluster testing, while ACT integrates better with .NET and supports testing ViewState (the .NET control state cache), which other test tools did not support at the time, though they presumably do now.
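To make metrics such as Hits Per Second and Successful/Failed Hits concrete, here is a minimal Java sketch of a toy load driver. It is in no way a substitute for LoadRunner or ACT, and the URL, thread count, and duration are placeholder assumptions:

```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Minimal load driver: hammers one URL with N threads for a fixed time,
// then reports hits/sec plus successful and failed hit counts.
public class MiniLoadDriver {
    public static void main(String[] args) throws Exception {
        final String target = "http://localhost:8080/app/index.jsp"; // placeholder URL
        final int threads = 20;            // simulated concurrent users
        final long durationMs = 30_000;    // test length: 30 seconds

        AtomicLong ok = new AtomicLong(), failed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.currentTimeMillis();

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                while (System.currentTimeMillis() - start < durationMs) {
                    try {
                        HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
                        con.setConnectTimeout(5000);
                        con.setReadTimeout(5000);
                        // a 2xx status counts as a successful hit, anything else as failed
                        if (con.getResponseCode() / 100 == 2) ok.incrementAndGet();
                        else failed.incrementAndGet();
                        con.disconnect();
                    } catch (Exception e) {
                        failed.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(durationMs + 10_000, TimeUnit.MILLISECONDS);

        long elapsedSec = Math.max(1, (System.currentTimeMillis() - start) / 1000);
        long total = ok.get() + failed.get();
        System.out.printf("hits/sec: %d, successful hits: %d, failed hits: %d%n",
                total / elapsedSec, ok.get(), failed.get());
    }
}
```

Real tools add ramp-up, think time, and per-transaction reporting on top of exactly this kind of loop; the sketch only shows where the raw counts come from.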

At this stage of testing, you need to keep adjusting the test targets and test data. At the beginning, because the system is too large, divide it into several subsystems and define a clear performance objective for each. The main task is to set a threshold for the concurrency metrics, and at the same time configure some system-level counters for the application server and the database server. Then analyze in depth the subsystems that fail to reach their thresholds or that show problems in the common counters. For example, if throughput does not grow as you require, the subsystem has a performance defect; if the number of database user connections keeps climbing, the program is not releasing its connections.

In that case, you need to test the subsystem in detail. Because image requests have a large impact on performance in the B/S structure, split the subsystem test into two parts: 1. the non-program part, i.e. images and other static content; 2. the application itself. By separating transactions or functions you can test the two parts independently; for the specific steps, refer to each tool's manual, which I will not repeat here. A rough sketch of the idea follows.
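Here is a minimal Java sketch of that separation: a static image and a dynamic page of the same subsystem are timed independently, so static content cannot mask or inflate the application's own numbers. Both URLs are invented for illustration:

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Time one request and drain the body, returning milliseconds to last byte.
public class SplitTimer {
    static long timeRequest(String target) throws Exception {
        long t0 = System.nanoTime();
        HttpURLConnection con = (HttpURLConnection) new URL(target).openConnection();
        try (InputStream in = con.getInputStream()) {
            byte[] buf = new byte[8192];
            while (in.read(buf) != -1) { /* drain to the last byte */ }
        }
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URLs: one static image, one dynamic page of the same subsystem.
        long staticMs  = timeRequest("http://localhost:8080/app/images/banner.gif");
        long dynamicMs = timeRequest("http://localhost:8080/app/report.jsp");
        System.out.printf("static: %d ms, dynamic: %d ms%n", staticMs, dynamicMs);
    }
}
```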

The counters collected for a subsystem are more demanding: they must help you locate problems precisely, such as exceptions, deadlocks, and network traffic. At the same time, note that collecting counters itself costs performance, so in general there should be no more than about ten of them, and the overall metrics introduced above should not be expanded much; this keeps the measurement overhead low. At this stage, also note that the volume of data in the database greatly affects performance, so you should populate the database with a data volume matching the earlier performance requirement specification; only then are the results credible.
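As one way to simulate that data volume, here is a hedged JDBC sketch; the connection string, table, and columns are invented for illustration, and the row count would come from your own requirement specification:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Seed a table with enough rows to match the volume in the requirement spec.
public class DataSeeder {
    public static void main(String[] args) throws Exception {
        int targetRows = 1_000_000; // volume taken from the performance requirement
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=testdb", "sa", "secret"); // placeholder DSN
             PreparedStatement ps = con.prepareStatement(
                "INSERT INTO orders (customer_id, amount) VALUES (?, ?)")) {
            con.setAutoCommit(false);
            for (int i = 1; i <= targetRows; i++) {
                ps.setInt(1, i % 10_000);        // spread rows over 10k customers
                ps.setDouble(2, (i % 500) + 0.99);
                ps.addBatch();
                if (i % 5_000 == 0) {            // flush in batches to keep memory flat
                    ps.executeBatch();
                    con.commit();
                }
            }
            ps.executeBatch();
            con.commit();
        }
    }
}
```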

The above covers discovering the problem; what follows is analyzing its cause. This step is demanding and is generally done by testers and programmers together; of course, if you have considerable development experience yourself, this kind of testing comes much more easily. Next, let's talk about how to locate problems precisely. The possibilities are many, but they fall roughly into the following types:

1. The performance target is not reached;

2. The performance target is reached, but there are other problems, such as exceptions, deadlocks, low cache hit ratios, or high network traffic;

3. Server stability problems, such as memory leaks, and so on.

To find these problems you need a capable performance analysis and optimization tool. Microsoft's .NET ships with its own development tools, and Borland's Java development tools have similar facilities, but in my personal opinion the better tools are Purify and Quantify from Rational. They support .NET, Java, and C++, and their analysis results are particularly professional. Rational Purify automatically identifies memory-related errors in Visual C/C++ and Java code to help ensure the quality and reliability of the whole application: it finds traditional memory access errors in typical Visual C/C++ programs, and garbage-collection-related errors in Java and C# code. Rational Quantify is a powerful tool for function-level performance analysis: from a graphical interface you can read each function's execution time, percentage of total time, call count, and the time spent in subfunctions, which lets you locate performance bottlenecks quickly.

Let's first talk about performance optimization and exception handling. One principle of performance optimization is that optimizing whatever takes the largest share of time is the most effective. For example, if a function's execution time is 30 seconds and you speed it up a hundredfold, it now runs in 0.3 seconds, a saving of 29.7 seconds; if its execution time is 0.3 seconds, the same hundredfold optimization brings it to 0.003 seconds, an actual saving of only 0.297 seconds, which is hardly noticeable. And anyone who has written programs knows that the latter optimization usually costs more effort. In performance optimization, take the database first and the program second, because database optimization requires no program changes and the risk of the modification is very small. But how do you determine whether it is a database problem? That takes skill. When using Quantify, you can follow the call tree down, and in most cases you will eventually find that database query functions account for a large share of the time, for example SqlCommand.ExecuteNonQuery and other database execution functions. Then it is time to analyze the database.
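If Quantify is not at hand, a crude way to confirm the suspicion is to time the database execution calls directly. A minimal Java/JDBC sketch, assuming a hypothetical helper wrapped around the suspect code path:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Wrap a query so its share of wall-clock time can be logged and compared
// against the total request time: Quantify-style attribution, but by hand.
public class QueryTimer {
    static int timedCount(Connection con, String sql) throws Exception {
        long t0 = System.nanoTime();
        int rows = 0;
        try (PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) rows++;
        }
        System.out.printf("%-60s %d ms%n", sql, (System.nanoTime() - t0) / 1_000_000);
        return rows;
    }
}
```

Call `timedCount` from the suspect code path with a real connection; if the logged times dominate the total request time, the database is where to dig.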

The principle of database analysis is: indexes first, then stored procedures, and finally the table structures and views. Index optimization is the simplest and most effective method, and reasonable use can produce unexpectedly good results. Here I briefly introduce my favorite tools: SQL Profiler, SQL Query Analyzer, and Precise. SQL Profiler is a SQL statement tracer: it captures the SQL statements and stored procedures the program actually executes, and analyzing those statements in the Query Analyzer lets you make good decisions about index optimization. Indexes are not a cure-all, though: on tables with many inserts, updates, and deletes, too many indexes degrade the performance of those operations, so some experience is required to judge.
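To see whether a candidate index actually pays off, you can time the traced statement before and after creating it. A rough Java sketch, with table, column, index name, and connection string all invented for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Compare a traced query's elapsed time with and without a candidate index.
public class IndexCheck {
    static long run(Connection con, String sql) throws Exception {
        long t0 = System.nanoTime();
        try (Statement st = con.createStatement()) { st.execute(sql); }
        return (System.nanoTime() - t0) / 1_000_000;
    }

    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=testdb", "sa", "secret")) { // placeholder
            String query = "SELECT * FROM orders WHERE customer_id = 4711"; // traced statement
            long before = run(con, query);
            run(con, "CREATE INDEX ix_orders_customer ON orders (customer_id)");
            long after = run(con, query);
            System.out.printf("before: %d ms, after: %d ms%n", before, after);
        }
    }
}
```

In practice run each variant several times, since the first execution is skewed by cold caches; and remember the caveat above that the same index slows inserts, updates, and deletes.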

At the same time, optimizing the most frequently executed SQL statements is the most effective. This is where I need Precise, which can observe the execution of particular SQL statements over a long period. After the database's optimization potential is exhausted, if the performance requirements are still not met or problems remain, optimization has to move to the program. That is the programmers' job; the testers' job is to tell them which function calls cause the performance degradation, for example too many exceptions, too many loops, or too many DCOM calls. Persuading programmers is not easy, though: to do this stage well you need several years of programming experience yourself, and it takes real evidence to make programmers believe that your suggestion will improve performance.

Memory analysis is generally a long-term process and is not easy to do. First, be prepared for a prolonged campaign. Second, memory-leak analysis should be carried out in step with unit testing rather than left until a problem is discovered at the end; of course, a problem found late still has to be solved. Problems of this type generally surface only after the server has been running for a long time. Once one is found, you need to locate it. The analysis principle is to run the subsystems independently of one another to find the smallest set of components that still shows the problem, or to use a memory analysis tool to watch the memory objects and narrow the problem down, and then to use Purify for runtime analysis. In general, C++ has the most memory problems; Java and .NET have relatively few, generally caused by references that keep the garbage collector from reclaiming objects (a small example follows the list below). C++ memory errors are many; common ones include:

1. Array Bounds Read (ABR): reading beyond the bounds of an array

2. Array Bounds Write (ABW): writing beyond the bounds of an array

3. Beyond Stack Read (BSR): reading beyond the stack

4. Free Memory Read (FMR): reading memory that has already been freed

5. Invalid Pointer Read (IPR): reading through an invalid pointer

6. Null Pointer Read (NPR): reading through a null pointer

7. Uninitialized Memory Read (UMR): reading uninitialized memory

8. Memory Leak: memory leak

Note: For more information, see the Purify help documentation.
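As promised above, here is what the Java/.NET variety of leak usually looks like: not a lost pointer, but a reference that is never dropped, so the garbage collector can never reclaim the object. The class and sizes below are contrived for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// A "leak" the garbage collector cannot fix: the static cache keeps a strong
// reference to every session ever created, so none of them is ever collected.
public class SessionCache {
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static void onRequest(String sessionId) {
        // Entries are added on every request but never evicted or expired.
        CACHE.put(sessionId, new byte[64 * 1024]); // 64 KB of per-session state
    }

    public static void main(String[] args) {
        for (long i = 0; ; i++) {        // the server runs "for a long time"...
            onRequest("session-" + i);   // ...and available memory keeps falling
        }
    }
}
```

The fix is an eviction policy or weak references; the point is that the leak only shows after long runs, which is why Memory Available MBytes matters in endurance tests.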

By the way, let me explain why it is better to do this during unit testing. Because a unit test targets a single function, memory analysis based on unit test cases helps you locate problems much faster, and because problems are found early, later-stage risk is reduced. Of course, if you combine this with the code coverage tool PureCoverage, so much the better.
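If Purify is not available, a very rough heuristic inside a unit test is to run the operation many times and check that used heap does not keep climbing. This is only a smoke test, since GC timing makes single samples noisy; the function under test here is a placeholder:

```java
// Rough leak smoke test: exercise one function many times and compare
// used heap before and after. Noisy by nature; treat growth only as a hint.
public class LeakSmokeTest {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        System.gc(); // request a collection so the sample is less noisy
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long before = usedHeap();
        for (int i = 0; i < 100_000; i++) {
            new StringBuilder("request-").append(i).toString(); // function under test (placeholder)
        }
        long after = usedHeap();
        System.out.printf("used heap delta: %d KB%n", (after - before) / 1024);
    }
}
```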

Note: This article only outlines the testing process for a B/S application and gives a rough introduction to the tools used at each stage; you can use tools you are familiar with to achieve the same goals.
