Distributed File System test methods and tools


Unstructured data, big data, and cloud storage have become major trends and hot spots in information technology, and distributed file systems, as their core foundation, have been pushed to the forefront and are widely pursued by both industry and academia. Modern distributed file systems are generally characterized by high performance, high scalability, high availability, high efficiency, ease of use, and ease of management. The complexity of their architecture makes system testing equally complex. From commercial products such as Isilon, IBRIX, SONAS, FileStore, NetApp GX, Panasas, StorNext, BWFS, and LoongStore to open-source systems such as Lustre, GlusterFS, and MooseFS, how can we test and evaluate these distributed file systems and select the one best suited to our data applications? This article briefly introduces testing methods for distributed file systems, covering both functional and non-functional testing, and describes the main testing tools, providing a basis for product selection and product R&D.

Distributed File System Test Method
(1) Functional testing (manual + automated)
File system functionality mainly consists of the POSIX APIs implemented by the system, covering file read/write, access control, metadata operations, lock operations, and other functions. File systems differ in how fully they implement POSIX semantics, so their APIs differ as well. Functional testing should cover all the APIs and functions the file system is designed to implement. Because the workload is large, automated testing should be the primary method, supplemented by ad-hoc manual testing. Suitable automated testing tools include LTP, fstest, and locktests.
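As a sketch of the kind of API-level check such suites automate, the following Python snippet exercises a handful of POSIX operations (create, write, read, chmod/stat, rename, unlink) against a mount point and asserts the expected results. The function name and file names are hypothetical; real suites like LTP run thousands of such cases.

```python
import os
import stat
import tempfile

# Minimal POSIX API smoke test: exercises create/write/read, metadata
# operations (chmod/stat), rename, and unlink on a target mount point.
def posix_smoke_test(mount_point):
    path = os.path.join(mount_point, "posix_smoke.tmp")
    renamed = path + ".renamed"

    # create + write
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    os.write(fd, b"hello")
    os.close(fd)

    # read back
    fd = os.open(path, os.O_RDONLY)
    assert os.read(fd, 16) == b"hello"
    os.close(fd)

    # metadata operation: chmod + stat
    os.chmod(path, 0o600)
    assert stat.S_IMODE(os.stat(path).st_mode) == 0o600

    # rename + unlink
    os.rename(path, renamed)
    assert os.path.exists(renamed) and not os.path.exists(path)
    os.unlink(renamed)
    assert not os.path.exists(renamed)
    return True
```

Run it against a directory on the file system under test; any assertion failure indicates a functional defect.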

(2) non-functional testing
(2.1) Data Consistency Test (manual + automated)

Data consistency means that the data stored in the file system is identical to the data written to it from outside, that is, written data and read data are always consistent. Data consistency indicates that the file system can guarantee data integrity without data loss or corruption; this is the most basic requirement of a file system. Scripts based on diff and md5sum can be used for automated testing, and LTP also provides a data consistency testing tool. In addition, manual ad-hoc tests can be performed, such as compiling a software source tree or the Linux kernel on the file system to verify data integrity.
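The md5sum approach can be sketched in a few lines of Python: hash the payload before writing, read it back, and compare digests. The function name is illustrative; a real harness would repeat this across many file sizes and clients.

```python
import hashlib
import os
import tempfile

# Write-then-read-back consistency check, the same idea as scripting
# diff/md5sum: the digest of the data read back must equal the digest
# of the data written.
def consistency_check(path, payload):
    written = hashlib.md5(payload).hexdigest()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # push the data through the page cache
    with open(path, "rb") as f:
        read_back = hashlib.md5(f.read()).hexdigest()
    os.unlink(path)
    return written == read_back
```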

(2.2) POSIX semantic compatibility test (Automated)
POSIX (Portable Operating System Interface) is a family of portable operating system interface standards developed by IEEE and standardized by ANSI and ISO. POSIX is designed to improve the portability of applications across operating systems: a POSIX-compliant application can run on any POSIX-compliant OS after recompilation. POSIX is essentially an interface specification. Linux complies with POSIX, and its VFS layer also follows POSIX, so a file system that correctly implements the VFS interface can be considered POSIX-compliant and has good portability, versatility, and interoperability. POSIX compatibility testing of a file system can be automated with LTP (Linux Test Project) and PCTS (POSIX Compliance Test Suite), which support conformance tests against standards such as POSIX.1-1990, POSIX.1-1996, and UNIX 98.
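One concrete POSIX semantic rule that suites like fstest and PCTS check is that a file unlinked while still open must remain readable through the existing descriptor until it is closed. A minimal sketch (function name is illustrative):

```python
import os
import tempfile

# POSIX semantics: unlinking a file removes its name, but data remains
# accessible through descriptors that were already open.
def unlink_while_open_semantics(directory):
    path = os.path.join(directory, "unlink_sem.tmp")
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    os.write(fd, b"payload")
    os.unlink(path)                 # the name is gone...
    os.lseek(fd, 0, os.SEEK_SET)
    data = os.read(fd, 16)          # ...but the data must still be there
    os.close(fd)
    return data == b"payload"
```

A distributed file system that deletes the backing data eagerly on unlink would fail this case.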

(2.3) deployment method test (manual)
Distributed file systems today generally support scale-out and can be used to build large-scale, high-performance file system clusters. Deployment methods differ significantly across applications and solutions. Deployment testing covers system deployment in different scenarios, including automated installation and configuration, cluster scale, hardware configuration (servers, storage, and network), automatic load balancing, and high availability (HA). This part of testing is difficult to automate: the solution and concrete deployment must be designed for the application scenario and then tested manually.

(2.4) availability test (manual)

High availability is an indispensable feature of distributed file systems and ensures business continuity for data applications. Availability in a distributed file system covers both the metadata service (MDS) and the data itself. Metadata service availability is usually achieved with a failover mechanism or an MDS cluster, while data availability mechanisms include replication, self-healing, network (cluster) RAID, and erasure coding. High availability is critical to many applications and must be strictly tested and verified; these tests are carried out manually.

(2.5) scalability test (manual)
NIST's authoritative definition of cloud computing lists five essential characteristics: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Cloud storage is a form of cloud computing, and distributed file systems are the foundation of cloud storage, so elastic scalability is especially important for file systems in the cloud computing era. Scalability testing mainly covers the system's elastic scaling (both expansion and contraction), the performance impact of scaling, and verification of whether the system scales linearly. This part of testing is also carried out manually.
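"Linear scalability" can be quantified as scaling efficiency: measured aggregate throughput divided by the throughput perfect linear scaling would predict from the smallest configuration. The helper below is a sketch, and the node counts and MB/s figures are hypothetical measurements, not results from any real system.

```python
# Linear-scalability check: given aggregate throughput measured at
# several cluster sizes, compute efficiency relative to perfect linear
# scaling from the smallest configuration (1.0 == perfectly linear).
def scaling_efficiency(samples):
    """samples: list of (node_count, aggregate_throughput) tuples."""
    samples = sorted(samples)
    base_nodes, base_tp = samples[0]
    per_node_ideal = base_tp / base_nodes
    return {nodes: tp / (per_node_ideal * nodes) for nodes, tp in samples}

# Hypothetical measurements: aggregate MB/s at 2, 4, and 8 nodes.
eff = scaling_efficiency([(2, 400), (4, 760), (8, 1400)])
# eff[4] ~ 0.95 and eff[8] ~ 0.875: throughput grows, but sub-linearly.
```

Efficiency well below 1.0 at larger scales points to a bottleneck (metadata service, network, or rebalancing overhead) worth investigating.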

(2.6) Stability Test (Automated)

Once a distributed file system goes into production, it usually runs continuously for a long time, so the importance of stability is self-evident. Stability testing verifies that the system still runs correctly, with all functions intact, after an extended period of operation (7/30/180/365 days × 24 hours). Stability testing is usually automated: tools such as LTP, IOzone, Postmark, and fio apply sustained load to the system, and functional testing methods verify correctness during and after the run.
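The load-plus-periodic-verification loop can be sketched as a small harness. Here `load_fn` and `verify_fn` are placeholders the tester supplies (for example, a wrapper that launches an IOzone or fio run, and a functional check); the harness itself only handles the schedule and the abort-on-failure policy.

```python
import time

# Skeleton of an automated stability run: keep applying load and
# periodically re-run a functional check; stop at the first failure.
def stability_run(load_fn, verify_fn, duration_s, check_every=10):
    deadline = time.monotonic() + duration_s
    iterations = 0
    while time.monotonic() < deadline:
        load_fn()                      # one round of load (e.g. a tool run)
        iterations += 1
        if iterations % check_every == 0 and not verify_fn():
            return False, iterations   # functional regression under load
    return True, iterations
```

For a real 7x24 run, `duration_s` would be days and each failure would be logged with system state rather than just aborting.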

(2.7) stress testing (Automated)
The load capacity of a distributed file system is always limited. When overloaded, the system may suffer performance degradation, functional anomalies, denied access, and other problems. Stress testing verifies whether the system still runs correctly under high pressure, including many data clients, high OPS pressure, and high IOPS/throughput pressure, and measures how much of the system's resources are consumed, providing a basis for production and operations. Stress testing is automated: LTP, IOzone, Postmark, and fio continuously increase the load on the system, functional testing methods verify functional correctness, and tools such as top, iostat, sar, and Ganglia monitor system resources.

(2.8) Performance Testing (Automated)
Performance is the most critical dimension in evaluating a distributed file system. Performance in different scenarios determines whether a file system suits a specific application and also provides a basis for performance optimization. File system performance is mainly captured by three metrics: IOPS, OPS, and throughput, which reflect the system's processing capability for small files, metadata, and large files respectively. Performance testing is automated and measures system performance under different loads, mainly OPS, IOPS, and throughput for workloads such as small files, large files, massive directories, mail servers, file servers, video servers, and web servers. Tools that generate I/O load include IOzone, Postmark, fio, and Filebench.
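To make the metrics concrete, here is a single-client sketch of how throughput (MB/s) and a write-op rate can be derived from one timed run. Real measurements come from tools like IOzone or fio across many clients; the function name, block size, and data volume below are illustrative only.

```python
import os
import tempfile
import time

# Time a sequential write of total_bytes in block_size chunks, then
# derive throughput (MB/s) and operations per second from the elapsed
# wall-clock time.
def measure_write(path, block_size, total_bytes):
    block = b"\0" * block_size
    fd = os.open(path, os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
    start = time.monotonic()
    written = 0
    while written < total_bytes:
        written += os.write(fd, block)
    os.fsync(fd)                        # include flush cost in the timing
    elapsed = time.monotonic() - start
    os.close(fd)
    os.unlink(path)
    ops = total_bytes // block_size
    return {
        "throughput_MBps": (written / (1 << 20)) / elapsed,
        "write_ops_per_s": ops / elapsed,
    }
```

With a small block size this approximates an IOPS-style measurement; with large blocks it approximates a throughput measurement.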

Introduction to file system testing tools
(1) LTP (http://ltp.sourceforge.net/)
LTP (Linux Test Project) is a project jointly launched by SGI and IBM. It provides a test suite to verify the reliability, robustness, and stability of a Linux system, and can also be used for POSIX compatibility testing and functional testing. LTP provides over 2,000 test tools, which can be customized as needed. LTP is also an excellent automated testing framework: based on it, you can design your own test cases and tools to further automate functional testing.

(2) Fstest (http://www.tuxera.com/community/posix-test-suite)
Fstest is a simplified POSIX file system compatibility test suite that runs on FreeBSD, Solaris, and Linux and can test file systems such as UFS, ZFS, ext3, XFS, and NTFS-3G. Fstest currently contains 3,601 regression test cases covering the system calls chmod, chown, link, mkdir, mkfifo, open, rename, rmdir, symlink, truncate, and unlink.

(3) locktests (http://nfsv4.bullopensource.org/tools/tests/locktest.php)
Locktests is used to stress-test the fcntl locking function. During a run, a master process first sets a byte-range record lock on a specified region of a file, and then multiple slave processes attempt read, write, and new lock operations on that region. The results of these operations are predictable (see the matrix below): if the actual results match expectations, the test passes; otherwise, it fails.

Slave type | Test operation   | Advisory locking (master holds read lock / write lock) | Mandatory locking (read lock / write lock)
-----------+------------------+--------------------------------------------------------+-------------------------------------------
Thread     | Set a read lock  | allowed / allowed                                      | allowed / allowed
Thread     | Set a write lock | allowed / allowed                                      | allowed / allowed
Thread     | Read             | allowed / allowed                                      | allowed / allowed
Thread     | Write            | allowed / allowed                                      | allowed / allowed
Process    | Set a read lock  | allowed / denied                                       | allowed / denied
Process    | Set a write lock | denied / denied                                        | denied / denied
Process    | Read             | allowed / allowed                                      | allowed / denied
Process    | Write            | allowed / allowed                                      | denied / denied
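One cell of this matrix can be reproduced directly with Python's `fcntl` module: with advisory locking, while the master process holds a write lock, a separate process's non-blocking attempt to set its own write lock must be denied. This is an illustrative sketch, not part of the locktests suite.

```python
import fcntl
import os
import tempfile

# Master takes an advisory write lock; a forked slave process tries a
# non-blocking write lock on its own descriptor and must be denied
# (fcntl record locks are per-process, so the slave conflicts with the
# master's lock).
def write_lock_conflict(path):
    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o644)
    os.write(fd, b"x" * 16)
    fcntl.lockf(fd, fcntl.LOCK_EX)          # master: advisory write lock

    pid = os.fork()
    if pid == 0:                            # slave process
        slave_fd = os.open(path, os.O_RDWR)
        try:
            fcntl.lockf(slave_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(0)                     # unexpectedly allowed
        except OSError:
            os._exit(1)                     # denied, as the matrix predicts
    _, status = os.waitpid(pid, 0)
    os.close(fd)
    return os.WEXITSTATUS(status) == 1
```

A thread in the master process, by contrast, would succeed, since record locks belong to the process as a whole.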

(4) PCTS (http://www.opengroup.org/testing/linux-test/lsb-vsx.html)
PCTS (POSIX Compliance Test Suite) is a POSIX conformance test suite that, based on the POSIX standards, uses rigorous, quantitative testing to verify, evaluate, and certify operating systems for POSIX compliance. Its design standard is IEEE Std 1003.1. Common PCTS implementations include VSX-PCTS, NIST-PCTS, and OPTS-PCTS; the link above is for VSX-PCTS.

(5) IOzone (http://www.iozone.org)
IOzone is a widely used standard file system benchmark that can generate and measure the performance of a variety of operations, including read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread, mmap, aio_read, and aio_write. IOzone has been ported to many machine architectures and operating systems and serves as a standard tool for testing, analyzing, and evaluating file system performance.

(6) Postmark (http://www.gtlib.cc.gatech.edu/pub/debian/pool/main/p/postmark)
Postmark was developed by NetApp, a well-known NAS vendor, to test the back-end storage performance of its products. Postmark measures file system performance for mail and e-commerce workloads, which are characterized by frequent access to massive numbers of small files. Postmark works by first creating a pool of test files; the minimum and maximum file sizes can be configured, and the total data volume is fixed. After the pool is created, Postmark runs a series of transactions against it. Based on statistics from real applications, each transaction consists of one create or delete operation and one read or append operation. Because the file system's cache policy may affect performance, Postmark can offset this effect by adjusting the ratio of create, delete, read, and append operations. When the transactions are complete, Postmark deletes the file pool, ends the test, and outputs the results. Postmark uses random numbers to choose which file each operation targets, bringing the test closer to real applications. Important output data include the total test time, the average number of transactions completed per second, the average number of files created and deleted per second during transaction processing, and the average read and write transfer rates.
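The transaction model described above can be illustrated with a toy re-implementation: build a pool of small files, run transactions that each pair a create-or-delete with a read-or-append, and report transactions per second. File counts and sizes are illustrative, and this sketch omits Postmark's configurable operation ratios.

```python
import os
import random
import tempfile
import time

# Toy version of Postmark's workload: create a file pool, run paired
# create/delete + read/append transactions, delete the pool, and
# report the transaction rate.
def postmark_like(directory, pool_size=50, transactions=200, seed=42):
    rng = random.Random(seed)
    pool = []
    for i in range(pool_size):
        path = os.path.join(directory, "f%05d" % i)
        with open(path, "wb") as f:
            f.write(os.urandom(rng.randint(512, 4096)))
        pool.append(path)

    start = time.monotonic()
    for t in range(transactions):
        # first half of the transaction: create or delete
        if rng.random() < 0.5 or not pool:
            path = os.path.join(directory, "t%05d" % t)
            with open(path, "wb") as f:
                f.write(os.urandom(rng.randint(512, 4096)))
            pool.append(path)
        else:
            os.unlink(pool.pop(rng.randrange(len(pool))))
        # second half: read or append a random pool file
        if pool:
            target = pool[rng.randrange(len(pool))]
            if rng.random() < 0.5:
                with open(target, "rb") as f:
                    f.read()
            else:
                with open(target, "ab") as f:
                    f.write(b"appended")
    elapsed = time.monotonic() - start

    for path in pool:                      # delete the pool, end the test
        os.unlink(path)
    return transactions / elapsed
```

Choosing targets with a seeded random generator mirrors Postmark's use of random file serial numbers while keeping runs reproducible.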

(7) FIO (http://freshmeat.net/projects/fio)
fio is an I/O benchmark and hardware stress/verification tool. It supports 13 different I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and so on), I/O priorities (on newer Linux kernels), rate-limited I/O, forked or threaded jobs, and more. fio can test both block devices and file systems and is widely used for benchmarking, QA, and verification. It supports operating systems such as Linux, FreeBSD, NetBSD, OS X, OpenSolaris, AIX, HP-UX, and Windows.

(8) Filebench (http://filebench.sourceforge.net/)
Filebench is an automated file system performance testing tool that quickly simulates the loads of real application servers to test file system performance. It can simulate not only file system micro-operations (such as copyfiles, createfiles, randomread, randomwrite) but also complex application workloads (such as varmail, fileserver, OLTP, DSS, webserver, webproxy). Filebench is well suited to testing file server performance, and as an automatic load generator it can also be used for general file system performance testing.
