Transferred from: http://blog.csdn.net/mengyao/archive/2008/05/22/2468117.aspx
With the growing popularity of the Internet, large applications built on the B/S (browser/server) structure are becoming more and more common, and testing them is increasingly urgent. Many testers have written to me asking how to perform B/S testing. Being busy, I found the question hard to answer well, because there was no overall overview of the Web testing process. I hope this article helps you understand how large web applications are tested.
Functional testing under the B/S structure is relatively simple; the key is doing performance testing well. At present, many testers think it is enough to run a few testing tools and prove that the product meets its performance targets. Testing that only proves a number is of little value; the real work is discovering performance defects, locating the problems, and getting them solved. That is what testing is for.
First, let's look at Web testing from two angles. In terms of technical implementation, a typical B/S system, whether built on .NET or J2EE, is a multi-layer architecture with an interface layer, a business logic layer, and a data layer. In terms of process, the work is first to discover, analyze, and locate problems, and then for developers to fix them. So how do we test a B/S system?
Finding the problem is the first step. Before testing a web application you need some material, such as the product function manual and the performance requirement specification. They may not be perfect, but they must exist: knowing the test objectives is basic. Yet I often see testing begin before anyone knows what performance targets the system is supposed to reach. Here is a brief list of commonly tested performance indicators:
1. General metrics (required for both Web application servers and database servers):
* % Processor Time: the server's CPU usage. As a rule of thumb, when the average reaches 70% the server is close to saturation;
* Available MBytes: the amount of available memory. If this value keeps falling during the test, watch out for a memory leak;
* % Disk Time: the percentage of time the physical disk spends servicing reads and writes;
2. Web Server metrics:
* Avg RPS: average requests per second = total requests / elapsed seconds;
* Avg time to last byte per iteration (msec): the average time to receive the last byte of a response in each iteration. Some people confuse this with Avg RPS;
* Successful Rounds: number of successfully completed rounds (iterations);
* Failed Rounds: number of failed rounds;
* Successful hits: Number of successful clicks;
* Failed hits: the number of failed clicks;
* Hits per second: the number of clicks per second;
* Successful hits per second: the number of successful clicks per second;
* Failed hits per second: the number of failed clicks per second;
* Attempted connections: number of attempted connections;
3. Database Server indicators:
* User Connections: the number of user connections to the database;
* Number of Deadlocks: the number of deadlocks in the database;
* Buffer Cache Hit Ratio: the database buffer cache hit rate;
The indicators above are only the common ones and serve as a starting point; you must adjust them for each application. For example, if the program is built on .NET, you should add some .NET-specific counters. For details about these metrics, see System Monitor in Windows and the documentation for LoadRunner and ACT. Choosing the right indicators matters a great deal for identifying problems and will help you spot qualitative errors. I will not analyze qualitative stress testing in depth here; there are many tools for it. The popular ones include LoadRunner, ACT, WAS, and WebLoad, each with its own scope of use. In my view:
LoadRunner is the most comprehensive: it supports many protocols and can handle complex stress tests;
WAS and ACT have better support for Microsoft technologies, and WAS supports distributed cluster testing;
ACT integrates well with .NET and supports testing ViewState (the control state cache in .NET).
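The rule-of-thumb thresholds above (for example, an average CPU of 70% meaning near-saturation) can be turned into a simple automated pass/fail check on collected samples. A minimal Python sketch; the counter names and limits come from this text, not from any tool's API:

```python
# Rule-of-thumb thresholds from the text -- adjust for each application.
THRESHOLDS = {
    "% Processor Time": 70.0,    # average above this: server near saturation
    "Number of Deadlocks": 0.0,  # any deadlock at all is worth investigating
}

def check_counters(samples):
    """samples maps counter name -> list of sampled values; returns warnings."""
    warnings = []
    for name, limit in THRESHOLDS.items():
        values = samples.get(name)
        if not values:
            continue
        avg = sum(values) / len(values)
        if avg > limit:
            warnings.append(f"{name}: avg {avg:.1f} exceeds {limit}")
    return warnings

run = {"% Processor Time": [55, 80, 90], "Number of Deadlocks": [0, 0]}
print(check_counters(run))  # ['% Processor Time: avg 75.0 exceeds 70.0']
```

In practice the sample lists would come from exported counter logs; the point is only that thresholds should be written down and checked mechanically, not eyeballed.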
At this stage you need to keep adjusting the test targets against the data. Because the system is large, start by dividing it into several subsystems and defining a clear performance objective for each, mainly by setting a threshold on the concurrency indicators. At the same time, set some system-level counters on the application server and database server, and analyze in depth any subsystem that misses its threshold or shows abnormal common counters. For example, if the database user connection count climbs too high because the program never releases connections, that alone proves the subsystem has a performance defect. Such subsystems then need detailed testing. Because image requests strongly affect performance under the B/S structure, test each subsystem in two parts:
1. The non-program part, i.e. images and other static content;
2. The application itself.
By separating transactions or functions, you can test the two parts independently; for the specifics, refer to each tool's manual, which I will not repeat here. The counters used for subsystem testing should be more demanding, to help you locate problems precisely: exceptions, deadlocks, network traffic, and other conditions that went unnoticed before. Note that collecting counters itself affects system performance, so as a rule use no more than 10 of them; the overall indicators introduced earlier should not grow much either, which keeps the distortion small. Also note at this stage that the volume of data in the database greatly affects performance, so populate the database with the data volumes called for in the performance requirement specification before testing; only then are the results credible.
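Splitting a subsystem's load into the non-program part and the application itself starts with classifying the recorded requests. A small sketch, where the set of "static" extensions is an illustrative assumption:

```python
# Extensions treated as "non-program" static content -- an illustrative assumption.
STATIC_EXTENSIONS = {".gif", ".jpg", ".png", ".css", ".js"}

def split_requests(paths):
    """Partition request paths into static content and application requests."""
    static, dynamic = [], []
    for p in paths:
        base = p.split("?", 1)[0]               # drop any query string
        dot = base.rfind(".")
        ext = base[dot:].lower() if dot != -1 else ""
        (static if ext in STATIC_EXTENSIONS else dynamic).append(p)
    return static, dynamic

log = ["/index.asp", "/img/logo.gif", "/list.asp?page=2", "/style.css"]
static, dynamic = split_requests(log)
print(static)   # ['/img/logo.gif', '/style.css']
print(dynamic)  # ['/index.asp', '/list.asp?page=2']
```

Each partition then gets its own test script, so image-serving overhead does not mask application-level bottlenecks.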
That covers discovering problems; next comes analyzing their causes. This step is demanding and is generally done by testers and programmers together; if you have considerable development experience yourself, this part of the work goes much more smoothly. Now, how do we locate problems precisely? The possibilities are many, but they fall roughly into these types:
1. The performance target is not reached;
2. The target is reached, but other problems appear, such as exceptions, deadlocks, low cache hit rates, or high network traffic;
3. Server stability problems, such as memory leaks.
To uncover these problems you need a capable performance analysis and optimization tool. Microsoft's .NET ships with its own profiling tools, and Borland's Java development tools offer similar features, but personally I think the better tools are Purify and Quantify from the Rational suite, mainly because they support .NET, Java, and C++ alike and their analysis output is particularly professional. Let's look at Rational Purify first.
Rational Purify automatically pinpoints memory-related errors in Visual C/C++ and Java code, helping ensure the quality and reliability of the whole application. It finds traditional memory-access errors in typical Visual C/C++ programs and garbage-collection-related errors in Java and C# code. Rational Quantify is a powerful tool for function-level performance analysis: from a graphical interface you can see each function's time, percentage, call count, and the time spent in its subfunctions, which lets you locate performance bottlenecks quickly.
Now for performance optimization and exception handling. There is one guiding principle for performance optimization: optimizing where the largest share of time is spent is the most effective. For example, if a function takes 30 seconds and you make it one hundred times faster, it now takes 0.3 seconds, saving 29.7 seconds. If a function takes 0.3 seconds, the same hundredfold optimization brings it to 0.003 seconds, saving only 0.297 seconds: the improvement is barely noticeable, and anyone who has written code knows the latter optimization usually costs more effort.
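The worked numbers above can be checked directly: a hundredfold speedup saves far more time on the 30-second function than on the 0.3-second one.

```python
def time_saved(original_s, speedup):
    """Seconds saved by making a function `speedup` times faster."""
    return original_s - original_s / speedup

print(round(time_saved(30.0, 100), 3))  # 29.7   -> well worth the effort
print(round(time_saved(0.3, 100), 3))   # 0.297  -> barely noticeable
```

This is why a profiler's "percentage of total time" column, not raw function speed, should pick the optimization target.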
In performance optimization, the usual order is database first, then the program, because database optimization needs no code changes and so carries very little risk. But how do you determine whether the database is the problem? This takes skill. When using Quantify, drill down through the analysis; most of the time you will eventually find that the bulk of the time is spent in database execution functions such as SqlCommand.ExecuteNonQuery. When that happens, it is time to analyze the database.
The principle of database analysis is: indexes first, then stored procedures, and finally the table structures and views. Index optimization is the simplest and most effective measure; used well, it brings unexpected gains. Here I will briefly introduce my favorites: SQL Profiler and SQL Query Analyzer.
SQL Profiler is an SQL statement tracer that can capture the SQL statements issued by the program and by stored procedures. Used together with Query Analyzer, it lets you analyze statements and make sound indexing decisions. Indexes, however, are not a cure-all: on tables with heavy insert, delete, and update traffic, too many indexes degrade those operations, so some experience is needed to judge. Also, optimizing the most frequently executed SQL statements pays off most; for this I use Precise, which can observe the execution of a given SQL statement over a long period.
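The point about optimizing the most frequently executed statements can be made concrete: rank traced statements by total time (call count × average duration), not by single-execution cost. A sketch, with hypothetical statements and timings:

```python
def rank_statements(trace):
    """trace: list of (sql, calls, avg_ms) tuples; sort by total time spent."""
    return sorted(trace, key=lambda t: t[1] * t[2], reverse=True)

trace = [
    ("SELECT * FROM orders WHERE id=?",       50_000,   2.0),  # 100 s total
    ("SELECT * FROM report_view",                 10, 900.0),  #   9 s total
    ("UPDATE stock SET qty=qty-1 WHERE id=?",  8_000,   5.0),  #  40 s total
]
for sql, calls, avg_ms in rank_statements(trace):
    print(f"{calls * avg_ms / 1000:8.1f} s  {sql}")
```

Note how the slowest single statement (900 ms) ends up last: its total cost is dwarfed by the cheap query that runs fifty thousand times.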
Once the database's optimization potential has been tapped, if performance still falls short or problems remain, optimization moves to the program, which is the programmers' job. What testers must do is tell them exactly which function calls cause the degradation: excessive exceptions, overly deep loops, too many DCOM calls, and so on. Convincing programmers is not easy, though; to do well at this stage you need several years of programming experience and the ability to show programmers that performance really will improve. That is no small task.
Memory analysis is generally a long-term process, and it is hard to do well. First, be prepared for a long campaign. Second, memory-leak analysis should be done alongside unit testing rather than postponed until problems surface at the end; of course, problems that do surface still have to be solved. Typically, such problems are exposed only after the server has been running for a long time. Once a leak is found, it must be localized. The principle is to run subsystems independently of one another to find the smallest set that exhibits the problem, or to use a memory analysis tool to watch memory objects and narrow it down, then use Purify for runtime analysis. C++ programs have the most memory problems; Java and .NET have comparatively few, usually caused by unreasonable GC behavior. Common C++ memory errors include:
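Since leaks usually reveal themselves only over long runs, a crude way to screen long-term memory samples is to flag a process whose memory grows in nearly every observation. A heuristic sketch (the 90% growth threshold is an arbitrary assumption, not a rule from any tool):

```python
def looks_like_leak(samples_mb, min_samples=6):
    """Heuristic: flag a leak when memory grows in nearly every long-run sample."""
    if len(samples_mb) < min_samples:
        return False                  # too few samples to judge
    growths = sum(1 for a, b in zip(samples_mb, samples_mb[1:]) if b > a)
    return growths / (len(samples_mb) - 1) >= 0.9

steady  = [410, 415, 408, 412, 411, 409, 414]   # fluctuates around a level
leaking = [400, 412, 425, 431, 450, 466, 480]   # grows on every sample
print(looks_like_leak(steady))   # False
print(looks_like_leak(leaking))  # True
```

A check like this only says "investigate here"; pinpointing the leaking allocation is still Purify's job.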
1. Array Bounds Read (ABR): reading past the bounds of an array
2. Array Bounds Write (ABW): writing past the bounds of an array
3. Beyond Stack Read (BSR): reading beyond the stack
4. Free Memory Read (FMR): reading memory that has already been freed
5. Invalid Pointer Read (IPR): reading through an invalid pointer
6. Null Pointer Read (NPR): reading through a null pointer
7. Uninitialized Memory Read (UMR): reading memory that was never initialized
8. Memory Leak: memory leak
Note: for more information, see Purify's help documentation.
By the way, why is memory analysis best done during unit testing? Because unit tests target a single function, memory analysis driven by unit test cases can locate a problem quickly; and because the problem is found early, later risk is reduced. Of course, it would be perfect to combine this with the code coverage tool PureCoverage.
Note: This article only describes the testing process of the B/S application. It only gives a rough introduction to the tools used in a certain stage, you can also use tools you are familiar with to achieve the same goal.
Performance management
Speaking of performance management in Windows, many people first think of the familiar Performance Monitor. As early as the Windows NT era, Performance Monitor was the main tool for obtaining performance information; Task Manager and Windows Management Instrumentation (WMI) are also common tools, providing not only performance data but other performance-related management information as well. This article describes techniques for getting the most out of these classic tools, introduces new tools in Windows XP, and discusses how to use them to evaluate system performance.
1. What is performance management?
For many administrators, Windows performance management means opening the "Performance" program under Administrative Tools in Control Panel, that is, Performance Monitor, and checking CPU utilization, disk activity, and memory pressure, and usually only when a performance problem has already occurred, for example when server response suddenly slows or users cannot reach the server at all. That approach is pure after-the-fact firefighting. Without detailed, clear assessment and planning done in advance, it is not a sound strategy. Effective performance management means understanding the system's performance before problems occur.
Only by adopting an effective performance management policy in advance can you fully understand the system's performance characteristics. On that basis you can estimate when performance problems are likely to occur and what form they will take. Pre-collected performance data can also be used to plan future capacity. For example, if an IIS web server runs at 60% CPU utilization with 200 concurrent users, you can infer when the system will reach its load limit and roughly how many concurrent users it can support at that limit, and decide when to add hardware based on the site's growth.
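The IIS example above can be extrapolated with simple arithmetic, assuming CPU utilization scales roughly linearly with concurrent users (a simplification that real systems only approximate):

```python
def estimated_user_limit(users, cpu_pct, saturation_pct=70.0):
    """Linear extrapolation: users supportable before CPU hits the saturation mark."""
    return int(users * saturation_pct / cpu_pct)

# From the example: 200 concurrent users drive the CPU to 60%.
print(estimated_user_limit(200, 60.0))         # 233 users at the 70% rule of thumb
print(estimated_user_limit(200, 60.0, 100.0))  # 333 users at full saturation
```

Real capacity curves bend upward near saturation, so treat such estimates as an upper bound and verify them with an actual load test.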
The overall performance of a system is determined by many factors: CPU utilization, CPU queue length (how many tasks are waiting for the CPU), disk time (the share of time the disk drive spends servicing requests), available physical memory, network interface utilization, and so on. Table 1 summarizes the most common performance counters.
Table 1: important performance counters
* Memory \ Available Bytes: the total amount of physical memory currently free. When this value shrinks, Windows begins paging to disk frequently; below about 5 MB, the system spends most of its time servicing the page file.
* Memory \ % Committed Bytes in Use: the ratio of Memory\Committed Bytes to Memory\Commit Limit. (Committed memory is in-use physical memory for which space has been reserved in the paging file; the commit limit is determined by the size of the paging file, so enlarging the paging file lowers the ratio.) This counter displays the current percentage only, not an average.
* Memory \ Page Faults/sec: the overall rate, in faults per second, at which the processor handles page faults. A page fault occurs when the processor requests code or data that is not in its working set (its space in physical memory). The counter includes both hard faults (which require disk access) and soft faults (where the faulted page is found elsewhere in physical memory). Most processors can sustain large numbers of soft faults, but hard faults cause significant delays. The value is the difference between two observations divided by the length of the interval.
* Network Interface \ Bytes Total/sec: the rate at which bytes are sent and received, including framing characters.
* Network Interface \ Packets/sec: the rate at which packets are sent and received.
* PhysicalDisk \ % Disk Time: the percentage of time the disk drive is busy servicing read or write requests.
* PhysicalDisk \ Avg. Disk Queue Length: the average number of read and write requests queued for the selected disk during the sample interval.
* PhysicalDisk \ Current Disk Queue Length: the number of outstanding requests on the disk at the moment the data is collected, including requests being serviced at the time of the snapshot. This is an instantaneous value, not an average over an interval. A multi-spindle disk device can accept several requests at once, while additional concurrent requests wait for service. The counter may show transient highs and lows, but under sustained disk load the value stays high. Request wait time is proportional to the queue length minus the number of spindles on the disk; for good performance this difference should average less than 2.
* Processor \ % Processor Time: the percentage of time the processor spends executing non-idle threads, designed as the primary indicator of processor activity. It is computed by measuring the time the idle thread runs in each sample interval and subtracting that from 100%. (Each processor has an idle thread, which consumes cycles when no other thread is runnable.) It can be read as the fraction of the interval spent doing useful work.
* Processor \ % User Time: the percentage of non-idle processor time spent in user mode. (User mode is a restricted processing mode designed for applications, environment subsystems, and the integral subsystem; the other mode, privileged mode, is designed for operating-system components and allows direct access to hardware and all memory. The operating system switches application threads into privileged mode to access operating-system services.) Displayed as an average over the sample interval.
* Server Work Queues \ Queue Length: the current length of the server work queue for this CPU. A queue length above 4 for an extended period may indicate processor congestion. This is an instantaneous count, not an average over time.
* System \ Processor Queue Length: the number of threads in the processor queue. There is a single queue for processor time, even on computers with multiple processors. Unlike the disk counters, this counter counts only ready threads, not running ones. A sustained queue of more than two threads generally indicates a processor bottleneck. The counter shows only the last observed value, not an average.
* TCP \ Segments Retransmitted/sec: the rate at which segments are retransmitted, that is, segments containing one or more previously transmitted bytes.
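As Table 1 notes, % Processor Time is derived by measuring the idle thread and subtracting its share of the interval from 100%; the calculation is just:

```python
def processor_time_pct(idle_ms, interval_ms):
    """% Processor Time: 100% minus the share of the interval spent in the idle thread."""
    return 100.0 - 100.0 * idle_ms / interval_ms

# In a 1000 ms sample interval, the idle thread ran for 250 ms.
print(processor_time_pct(250, 1000))  # 75.0
```

This is why the counter reads as "useful work per interval": anything the idle thread did not consume counts as busy time.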
2. Customizing Performance Monitor
In Windows 2000/XP, Performance Monitor is still the most commonly used performance management tool, though it has been reworked and gained many functions. In Windows 2000 it is implemented as a Microsoft Management Console (MMC) snap-in. Start Performance Monitor on Windows 2000/XP and you will see an interface similar to Figure 1.
Figure 1
In Windows XP, Performance Monitor loads three counters by default: Pages/sec, Avg. Disk Queue Length, and % Processor Time. These three counters cannot simply be deleted for good: they come back, and they slow down the monitor's startup. If you want the monitor to start without loading any counters, first clear the read-only attribute of perfmon.msc in the %SystemRoot%\System32 directory: open a command window, change to the System32 directory, and run attrib -R perfmon.msc. Then restart the monitor, select each counter in turn, and click the black "X" button on the toolbar to delete it. Choose "File" > "Save" to save the changed console to disk. To mark the console read-only again, run attrib +R perfmon.msc on the command line.
In NT 4.0, Performance Monitor combined a real-time chart with logging and alerting, but in Windows 2000 and XP these functions are split. The real-time performance chart is now "System Monitor", with a separate "Performance Logs and Alerts" tool beneath it. System Monitor is a purely real-time viewer: it can only display data, not save it. Click the "+" button on the toolbar to add a new performance counter. Performance Logs and Alerts provides the ability to work with historical data.
To create a console that contains only System Monitor, without Performance Logs and Alerts, do the following: run "mmc" to open a blank MMC window, choose "File" > "Add/Remove Snap-in", click "Add", select "ActiveX Control", click "Add" again, choose the System Monitor control in the wizard, and confirm.
3. Performance Logs and Alerts
System Monitor only shows real-time data; for long-term, persistent performance data you need Performance Logs and Alerts. The logging tool can record performance data from multiple local or remote systems centrally into one log file, which can then be viewed in System Monitor or processed by other tools. Expanding the "Performance Logs and Alerts" node in the console reveals three branches: Counter Logs, Trace Logs, and Alerts. The Alerts tool is simple: when a counter reaches a specified value, it performs an action, such as sending an email or sending a message with the net send command. Below we focus mainly on the log tools.
To illustrate counter logs, let's create a log session. Right-click the "Counter Logs" node, choose "New Log Settings", give the log settings a name, and click "OK". The dialog box shown in Figure 2 appears, where you set the counters to record in the log.
Figure 2
The default log file path is the C:\PerfLogs directory, which can be changed on the "Log Files" page of the dialog; however, you must first set the objects and counters to record before the "Log Files" page becomes available. Click "Add Objects" to log every counter of a monitored object, or "Add Counters" to add individual counters. Either way, the monitored target (object or counter) can be local or remote.
If you plan to collect performance data over a long period, it is wise to adjust the sampling interval, especially when many counters from several different machines are being monitored: too small an interval and the log files balloon quickly. Start with one sample every 15 minutes, run for a trial period, see how large the log file grows, and adjust accordingly.
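A rough sizing estimate helps pick the interval before the trial run. The bytes-per-recorded-value figure below is an assumed ballpark, not a documented constant:

```python
def log_size_mb(counters, interval_s, hours, bytes_per_value=100):
    """Rough counter-log size: samples per counter x assumed bytes per recorded value."""
    samples = hours * 3600 / interval_s
    return counters * samples * bytes_per_value / (1024 * 1024)

# 50 counters for one week: sampled every 15 minutes vs. every 5 seconds.
print(round(log_size_mb(50, 900, 24 * 7), 2))  # a few MB
print(round(log_size_mb(50, 5, 24 * 7), 1))    # hundreds of MB
```

The two results differ by more than two orders of magnitude, which is exactly why the interval deserves attention before a long collection run.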
To connect to a remote machine, enter the account used to log on to that machine in the "Run As" text box.
Collecting performance monitoring data requires certain permissions. The HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib registry subkey controls access to performance monitoring data: it governs which users the data flows to through tools such as System Monitor. Right-click the subkey, choose "Permissions" from the menu (Figure 3), and adjust the permission assignments there to control who may access performance data.
Figure 3
After setting the objects and counters to monitor, you can adjust the log file format on the "Log Files" page and set start and stop times on the "Schedule" page. If the log is set to manual start, start it by right-clicking the log session in the MMC window and choosing "Start".
On the "Log Files" page (Figure 4), performance counter logs can be saved not only in the default binary .blg format but also in other formats, such as comma-separated text files, and even into a SQL Server table, an option available only in Windows Server 2003 and XP. If you choose SQL Server, you must specify a SQL Server data source and the table that will hold the data. This is a bit of a hassle, but if you want to collect a large amount of performance data for analysis, SQL Server is a good choice.
Figure 4
After the log starts, a 65 KB log file is generated in the log directory. A new log file is generated each time the log is stopped and restarted, with the sequence number in the file name incremented. Windows XP and 2000 support a very practical feature: performance data continues to be written to the log even when the Performance Monitor tool itself is not running. On NT 4.0 you had to install the Datalog service from the NT 4.0 Resource Kit to get this unattended logging; Windows 2000 and XP have the built-in Performance Logs and Alerts service, which starts automatically when a log session starts and stops when the session ends.
Another improvement in Windows 2000/XP logging is session saving. In NT 4.0, to reuse a chosen set of performance objects and counters for a log session, you had to recreate a workspace file for every server you wanted to monitor; running one log configuration on several servers meant repeating the tedious setup each time. In Windows 2000/XP, all configurations (logs and alerts alike) are saved as HTML files, which are easy to modify and reuse. To save a log session configuration, right-click the session in the MMC window and choose "Save Settings As".
4. Command-line tools
XP provides a new tool called logman, which can not only start and stop log sessions from the command line but also create new log sessions.
For example, the first logman command below creates a session named iislogging that stores its log file in the default C:\perflogs\iislogging.blg. The log records the Process\% Processor Time counter of the inetinfo.exe process on the local server, sampling every 15 minutes. The second command starts the iislogging session. After running these commands, open the MMC performance tool and you will see the new session listed under Counter Logs.
logman create counter iislogging -c "\Process(inetinfo)\% Processor Time" -si 15:00
logman start iislogging
Logs on a remote machine can be started with logman's -s option, for example -s \\servername. With this option, the log started by logman runs on the remote server, not on the local machine where the logman command executes.
5. WMIC
WMIC is the Windows Management Instrumentation command-line tool. Run wmic at a command prompt to start the WMIC environment; the first time it runs, WMIC installs itself into the WMI namespace. Although Microsoft introduced WMIC to simplify the use of WMI, its command-line syntax is still complex. WMIC uses aliases for frequently accessed WMI classes; for example, the alias pagefile is equivalent to the WMI query SELECT * FROM Win32_PageFileUsage. Enter pagefile at the WMIC prompt and WMIC displays the current page file usage.
Unfortunately, WMIC provides no aliases for WMI-based performance monitoring data, so to query Performance Monitor data you must call the corresponding WMI class directly. For example, to view the server's current physical memory usage, run path win32_perfformatteddata_perfos_memory at the WMIC prompt. This command asks WMIC to return the data of the WMI class Win32_PerfFormattedData_PerfOS_Memory; the output contains every property of the specified memory object. To see only the AvailableBytes property, change the command to path win32_perfformatteddata_perfos_memory get availablebytes.
WMIC commands can also be executed directly from the Windows command line. For example, to view available bytes, run wmic path win32_perfformatteddata_perfos_memory get availablebytes. To return several property values, list them all separated by commas, for example get availablebytes,availablembytes,cachebytes.
So how do you find the WMI class names for the various Performance Monitor counters? The best way is WMI Tools, a free toolkit from Microsoft that can be downloaded from http://www.microsoft.com/downloads/details.aspx?displaylang=en&familyid=6430f853-1120-48db-8cc5-f2abdc3ed314 (see Figure 5). It clearly shows each class's name, properties, and methods.
Figure 5
In a multi-tier application environment, performance monitoring data is an invaluable basis for pinpointing where a bottleneck lies; by making full use of the performance tools that Windows 2000/XP provides, you can build a capable performance management system.
Using Microsoft Web Application Stress Tool for web stress testing
Web stress testing is a popular topic. It is an effective way to test a web server's behavior and response time under load, and a good measure of a server's endurance. Stress testing usually relies on tools such as Microsoft's Web Application Stress, Siege on Linux, and the comprehensive Web-CT; all are excellent web stress testing tools.
Although these tools make it easy to test how much a server can bear, their potential for harm is just as astonishing: even a testing tool can mount a catastrophic denial-of-service attack on a small web server. Below I demonstrate a web stress test with Microsoft's Web Application Stress; the purpose is to show just how much damage it can do.
I. Brief Introduction to tools
Microsoft Web Application Stress Tool was developed by Microsoft's website testers to perform realistic website stress testing. With this powerful tool you can use a small number of client machines to simulate the impact of a large number of users hitting the site, and test your site in a realistic environment before launch to uncover potential problems and further tune the system. These same features give it, in effect, a DoS bombardment capability.
Tip: a DoS (denial of service) attack stops your server from providing service by crashing it or overwhelming it. Put simply, it forces your machine to serve more than it can handle, pushing it to the verge of collapse or into an outright crash.
II. Simple tool settings
Open Web Application Stress Tool and a simple interface appears (Figure 1): the toolbar at the top, function options at the lower left, and detailed settings at the lower right. Before stress testing the target web server, make the necessary settings.
Figure 1
1. In the "Settings" panel (Figure 2), the first item is Stress level (threads), which specifies the number of threads the program uses in the background to issue requests, equivalent to the number of simulated client connections, or, more vividly, the number of bombardment threads. Generally enter 500 to 1000; the right thread count depends on the local machine's configuration, and if you have enough confidence in it, the higher the setting, the better the bombardment effect.
Figure 2
2. "Test run time" specifies the duration of a stress test, in days, hours, minutes, and seconds. Set it according to the actual situation!
3. The remaining options are less important and will not be covered here; try setting them yourself.
III. The stress test
With the tool introduced, the test setup is as follows. I arranged the test with a friend who runs a standalone machine online: CPU Athlon XP 2500+, 512 MB of RAM, an 80 GB hard disk, and so on, a decent configuration. He installed IIS on the machine and set up a public web server running the Dvbbs ("Dynamic Network") 7.0 forum software. I used the stress testing tool to test this server.
Step 1: right-click in the tool and choose Add to add a new test script. Under the server option, enter the IP address of the server to be tested, and below, select the web request to test: choose "GET" as the verb, and under Path enter the path of the page to test, here /index.asp, the forum's home page (Figure 3).
Figure 3
Step 2: in the "Settings" panel, set Stress level (threads) to 1000. Then click the grey triangle button on the toolbar to run the test (Figure 4). While it runs, wait for the friend's report from Task Manager and the connection view!
Figure 4
After the attack starts, Task Manager shows CPU usage at 100% and staying pegged at the maximum (Figure 5). Running netstat -an in a CMD window shows my IP address connected to port 80 on the friend's server (Figure 6). Moreover, his web site can no longer be opened, reporting too many user connections, achieving the same effect as a DoS attack.
Figure 5
Figure 6
Imagine using multiple compromised machines ("bots") to run a web stress test against one server: it would be a disaster for that server. So, friends, think carefully before using these tools.