For Maven projects, integration tests run by default as a phase of the build lifecycle. This is convenient for ordinary projects, but it suits Hadoop (or HBase) projects poorly: their applications run in a cluster environment, while the development environment may be Windows rather than Linux, so running the mvn command locally for integration testing is inconvenient. You could instead check out the code in the cluster test environment and run the integration tests there, but that means setting up a development environment on the test cluster (installing the build tool, the configuration management tool, and so on), and it also produces a lot of trivial check-ins during the development and testing phase.
In my opinion, a preferable approach is to package the test code into a jar, upload it to the target cluster, and start the tests from the command line (TestNG is recommended for integration testing), with a bat script tying these steps together. The whole process then completes with one click on the development machine, which is very convenient. In practice this pattern is widely applicable in a Linux-based Hadoop cluster environment, and not just for integration testing: project deployment and service startup can be handled the same way. It greatly improves development efficiency and works well.
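As a minimal sketch of such a one-click bat script: the host name, user name, jar names, and remote paths below are placeholders, and it assumes PuTTY's pscp/plink are installed on the Windows side and that the test jar is produced by maven-jar-plugin's test-jar goal.

    @echo off
    rem Build the application jar and the test jar without running tests locally.
    rem Assumes maven-jar-plugin's test-jar goal is configured in pom.xml.
    call mvn clean package -DskipTests

    rem Upload both jars to the test cluster (placeholder host and path).
    pscp target\myapp-1.0.jar target\myapp-1.0-tests.jar user@cluster-node:/home/user/it/

    rem Run the TestNG suite remotely from the command line; "hadoop classpath"
    rem expands to the cluster's Hadoop classpath on the remote side.
    plink user@cluster-node "cd /home/user/it && java -cp 'myapp-1.0.jar:myapp-1.0-tests.jar:$(hadoop classpath)' org.testng.TestNG testng.xml"

The same skeleton extends naturally to deployment: swap the last step's TestNG invocation for whatever command starts the service on the cluster.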