Source: http://ziqi.512j.com/
In addition to quantitative performance indicators, our server comparison test also took into account configuration and price, as well as extensibility, availability, manageability, and other functional qualities, in order to assess each server's overall capability. We used a new methodology for performance testing that comprises three parts: file testing, database performance testing, and Web performance testing. The file and database tests used the Benchmark Factory load-testing and capacity-planning software from Quest Software (www.quest.com), and the Web performance test used the Caw WebAvalanche tester from Spirent.
Performance Testing
File Performance Test Method
The Benchmark Factory software can build custom transactions from the key parameters of file read/write operations and supports up to 1,000 virtual clients. The test environment consists of multiple identically configured client machines, used to simulate the virtual users, plus a single console. The clients and the console connect to a Gigabit switch through their network adapters, and the server under test connects to the same switch through a Gigabit fiber-optic NIC.
The server under test runs Windows 2000 Advanced Server with SP2, and RAID level 5 is used in all three performance tests.
In the test-plan settings, the software treats the key factors that determine file read/write behavior as read vs. write, random vs. sequential access, operation block size, and object (file) size. Because separate database and Web test projects were already planned, the goal of the file test was the server's basic I/O performance, which is determined mainly by the network interface, system bandwidth, and disk subsystem. Reading and writing large object files with large operation blocks, and small object files with small operation blocks, exposes this basic I/O performance: "large blocks on large files" stresses the system bandwidth and cache, while "small blocks on small files" stresses the disk subsystem and network interface. The four final transactions are as follows:
● Sequential read/write of large files (8 KB operation block; object files 80% 500 KB and 20% 1 MB)
● Random read/write of large files (8 KB operation block; object files 80% 500 KB and 20% 1 MB)
● Random read of small files (1 KB operation block; object files 80% 1 KB, 10% 10 KB, and 10% 50 KB)
● Sequential write of small files (1 KB operation block; object files 80% 1 KB, 10% 10 KB, and 10% 50 KB)
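The four transaction mixes above can be written down as plain data. The following Python sketch is purely illustrative; the names and structure are our own, not Benchmark Factory's actual configuration format:

```python
# Illustrative encoding of the four file-test transactions described above.
# Sizes are in kilobytes; each "object_mix" pair is (size_kb, share of files).

TRANSACTIONS = [
    {"name": "large-file sequential read/write", "access": "sequential",
     "block_kb": 8, "object_mix": [(500, 0.8), (1024, 0.2)]},
    {"name": "large-file random read/write", "access": "random",
     "block_kb": 8, "object_mix": [(500, 0.8), (1024, 0.2)]},
    {"name": "small-file random read", "access": "random",
     "block_kb": 1, "object_mix": [(1, 0.8), (10, 0.1), (50, 0.1)]},
    {"name": "small-file sequential write", "access": "sequential",
     "block_kb": 1, "object_mix": [(1, 0.8), (10, 0.1), (50, 0.1)]},
]

def mean_object_kb(mix):
    """Weighted average object-file size for a transaction's file mix."""
    return sum(size * share for size, share in mix)
```

Writing the mix down this way makes the intent of the two pairings visible: the large-block transactions average hundreds of kilobytes per object, while the small-block transactions average only a few kilobytes.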
For each transaction, the number of users grows in fixed steps, up to a maximum of 1,000 virtual users. For the "large-file sequential read/write" transaction, the user count increases from 1 to 400 (for the Xeon servers) or to 200 (for the Tualatin servers) in steps of 40; for the other transactions it increases from 1 to 1,000 in steps of 100. We examine the server's performance at each user level; the overall trend and the peak value together reflect the server's performance. Each transaction is run three times, with the server under test rebooted between runs, and the final result is the average of the three runs.
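The stepped user schedule and the three-run averaging can be sketched in a few lines of Python (a minimal sketch of the procedure described above, not Benchmark Factory code):

```python
def user_steps(maximum, step):
    """Virtual-user levels for one transaction: start at 1 user,
    then grow by `step` up to `maximum` (e.g. 1, 40, 80, ..., 400)."""
    return [1] + list(range(step, maximum + 1, step))

def average_runs(runs):
    """Average per-level results across runs; the server is rebooted
    between the three runs, and the mean of the three is reported."""
    return [sum(vals) / len(vals) for vals in zip(*runs)]
```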
Database Performance Test Method
SQL Server 2000 is installed on the server: the Chinese Standard Edition on the dual-processor Tualatin servers and the Enterprise Edition on the Xeon servers. First, a new database is created on the server under test, and the database spec project predefined by Benchmark Factory is used to create its tables and load data; a CPU-intensive stored procedure is also created on the server. Twenty-nine clients simulate the users, and the user count is increased to 400 in steps of 40 virtual users. The result, measured in transactions per second (TPS), reflects the server's database transaction-processing capability. The test is run three times, with the server under test rebooted between runs, and the average of the three runs is taken as the evaluation result.
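The TPS metric itself is simply completed transactions divided by elapsed wall time. A hedged Python sketch of the measurement loop (the callable and duration are our own illustration, not the Benchmark Factory implementation):

```python
import time

def measure_tps(run_transaction, duration_s=10.0):
    """Drive `run_transaction` (a callable that issues one database
    transaction) repeatedly for about `duration_s` seconds and return
    the achieved transactions per second."""
    completed = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration_s:
        run_transaction()
        completed += 1
    return completed / (time.perf_counter() - start)
```

In the real test, `run_transaction` would execute the predefined stored procedure against the server under test at each virtual-user level.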
Web Performance Testing
The Web performance testing tool is Caw WebAvalanche from Spirent. WebAvalanche simulates real users sending HTTP requests and produces detailed results from the responses. Its features include: it can simulate hundreds of thousands of clients sending requests to the server; it can model real network conditions, such as a site's traffic around peak hours, where new clients arrive while existing ones leave and the load is never constant; it can generate 20,000 connections/requests per second, which is more than sufficient for this test; and it reports a wide range of metrics, including successful and failed requests, URL and page response times, network traffic, and HTTP and TCP protocol statistics.
During the test, both the server and WebAvalanche (software version 3.1.1.1) are fitted with Gigabit fiber-optic NICs and connected directly by optical fiber. The monitoring machine runs Windows 2000 Server with SP2 and is connected directly to WebAvalanche by a crossover cable; WebAvalanche is configured from the monitoring machine through a Web browser. SQL Server 2000 is installed on the server under test, and Microsoft IIS is used to set up the Web server.
The test is divided into two parts, static performance and dynamic performance. In real Web applications, some sites serve mostly static content and therefore care most about a server's static performance, while other sites mainly provide interactive services and care more about its dynamic performance.
The page-size distribution and the ratio of static to dynamic pages were derived from a real website: across the site as a whole, static pages account for 70% and dynamic pages for 30%, with ASP as the dynamic page type. The file-size distribution of the requested page sample matches that of the site as a whole.
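Drawing requests according to such a page mix is a weighted random choice. A small Python sketch (the sampler and its seed are illustrative assumptions, not part of the WebAvalanche configuration):

```python
import random

# Page-type shares taken from the measured site profile described above:
# 70% static pages, 30% dynamic (ASP) pages.
PAGE_MIX = [("static", 0.7), ("asp", 0.3)]

def sample_pages(n, mix=PAGE_MIX, seed=0):
    """Draw n page-request types with the given shares (seeded for
    reproducibility in this sketch)."""
    rng = random.Random(seed)
    types, weights = zip(*mix)
    return rng.choices(types, weights=weights, k=n)
```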
The static performance test simulates purely static page requests; in the dynamic performance test, 20% of the requests are for dynamic pages and the remaining 80% are for static pages. We built a page-request model based on the day-to-day behavior of a real Web server. The model has four stages. The first is the push stage, in which the number of requests sent by WebAvalanche grows gradually from 0 to 200. The second is the gradual pressurization stage, in which the request volume climbs to its maximum of 8,200. The third is the dynamic maintenance stage, and the fourth is the descent stage, in which the number of requests falls rapidly from the maximum back to 0. The maximum request volume is set slightly above the transaction-processing capacity the server can actually sustain.
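The four-stage load model can be sketched as a generator of target request counts. Only the 0-to-200 push, the climb to the 8,200-request peak, the sustained plateau, and the drop back to 0 come from the description above; the step and plateau lengths here are arbitrary illustrative choices:

```python
def request_profile(ramp_end=200, peak=8200, steps=10, plateau=5):
    """Target request counts for the four stages of the load model:
    1) push: 0 -> ramp_end, 2) gradual pressurization: ramp_end -> peak,
    3) dynamic maintenance: hold at peak, 4) descent: drop back to 0."""
    profile = []
    for i in range(steps + 1):                      # stage 1: push
        profile.append(round(ramp_end * i / steps))
    for i in range(1, steps + 1):                   # stage 2: pressurize
        profile.append(round(ramp_end + (peak - ramp_end) * i / steps))
    profile.extend([peak] * plateau)                # stage 3: maintain
    profile.append(0)                               # stage 4: rapid descent
    return profile
```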
Both the static and dynamic tests are run three times per server, with the server under test and the tester rebooted between runs, and the average of the three runs is taken.
Function Testing
For function testing, our test engineers comprehensively evaluated the scalability, availability, and manageability of the servers under test. Scalability covers hard disks, PCI slots, and memory; availability covers support for hot swapping and for redundant devices (such as hard disks, power supplies, fans, and NICs); and manageability refers to the management software bundled with the server.