For ease of development and debugging, an Ubuntu 12.04 virtual machine was set up in VirtualBox to run Hive and Hadoop.
A problem surfaced during use: after running queries in Hive for a while, the virtual disk grew rapidly, reaching tens of gigabytes. The virtual disk is configured in dynamically allocated mode, but the physical disk space on the development machine is limited, so eventually the virtual disk could not grow any further and there was no longer enough space to keep running queries in Hive.
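As a quick check, the actual on-disk size of a dynamically allocated image can be inspected on the host, and the remaining space in the guest with df (the image name here is a placeholder):

    # On the host: show how large the dynamically allocated image has grown
    VBoxManage showhdinfo ubuntu1204.vdi
    # Inside the guest: check remaining space on the root file system
    df -h /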
One solution is to delete the directory where the HDFS data resides, use the zerofree tool to zero the free blocks of the virtual disk, and then compact the virtual disk with VirtualBox's VBoxManage tool to reclaim the space. See this article for the specific steps: http://dantwining.co.uk/2011/07/18/how-to-shrink-a-dynamically-expanding-guest-virtualbox-image/. Then reformat HDFS inside the virtual machine. This approach works, but it is time-consuming and somewhat cumbersome.
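A rough sketch of that cleanup, assuming the guest's root file system lives on /dev/sda1 and the image is named ubuntu1204.vdi (both placeholders); note that zerofree requires the file system to be mounted read-only, e.g. from recovery mode:

    # Inside the guest (recovery mode): remove the HDFS data, then zero free blocks
    rm -rf /tmp/hadoop-*              # default hadoop.tmp.dir location; adjust if changed
    mount -o remount,ro /dev/sda1
    zerofree /dev/sda1

    # On the host, after shutting the VM down: compact the image
    VBoxManage modifyhd ubuntu1204.vdi --compact

    # Back inside the guest: reformat HDFS
    hadoop namenode -format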
A better scheme is to create a new virtual disk dedicated to HDFS data, roughly as follows (the sketches below illustrate the steps):

1. Create a new virtual disk and attach it as the second hard disk of the virtual machine.
2. In the guest, use the GParted tool to partition the disk and format it as an ext4 file system.
3. Make a spare copy of this disk image for later use.
4. Edit /etc/fstab to mount the disk at a fixed directory, such as ~/hdfsdata.
5. Set the hadoop.tmp.dir property in Hadoop's core-site.xml to that directory.
6. Relax the permissions on the directory (chmod a+rwx ~/hdfsdata), otherwise formatting the NameNode will fail.
7. Format HDFS, then run Hive and Hadoop as usual.

After a while, if this disk grows too large, simply overwrite it with the spare copy and reformat HDFS to continue.
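A sketch of steps 1 through 4 and step 6, assuming the VM is named ubuntu-hive, the new image is hdfsdata.vdi, the disk shows up in the guest as /dev/sdb, and the Hadoop user is hduser (all placeholders):

    # On the host: create a 20 GB dynamically allocated disk and attach it as
    # the VM's second disk (the controller name "SATA" is an assumption; check
    # yours with "VBoxManage showvminfo ubuntu-hive")
    VBoxManage createhd --filename hdfsdata.vdi --size 20480
    VBoxManage storageattach ubuntu-hive --storagectl SATA --port 1 --device 0 \
        --type hdd --medium hdfsdata.vdi

    # After partitioning and formatting /dev/sdb1 as ext4 with GParted, power
    # the VM off and keep a byte-for-byte spare copy of the image; a plain cp
    # preserves the disk UUID, so the copy can be swapped back in later
    cp hdfsdata.vdi hdfsdata-backup.vdi

    # Inside the guest: mount the partition at boot and open up permissions
    # (~ is not expanded in /etc/fstab, so use the absolute path)
    echo '/dev/sdb1 /home/hduser/hdfsdata ext4 defaults 0 2' | sudo tee -a /etc/fstab
    mkdir -p ~/hdfsdata
    sudo mount -a
    chmod a+rwx ~/hdfsdata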
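For step 5, the property in core-site.xml would look roughly like this (the hduser home directory is the same assumption as above):

    <property>
      <name>hadoop.tmp.dir</name>
      <value>/home/hduser/hdfsdata</value>
      <description>Base directory for HDFS data, on the dedicated disk</description>
    </property>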
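Finally, a sketch of step 7 and the later restore, using the Hadoop 1.x commands that match an Ubuntu 12.04-era setup:

    # Inside the guest: format HDFS on the new disk and start the daemons
    hadoop namenode -format
    start-all.sh

    # Later, when the data disk has grown too large: power the VM off, restore
    # the spare copy on the host, then reformat HDFS inside the guest as above
    cp hdfsdata-backup.vdi hdfsdata.vdi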