I was just testing a Hadoop program and accidentally deleted some data. Fortunately it was only the test machine, but it was still painful, so I decided to set up a Hadoop Recycle Bin, just in case.
First of all:
Hadoop's Recycle Bin is the trash feature, and it is disabled by default.
If you are used to Windows, I suggest enabling it ahead of time; otherwise a careless delete will leave you crying without tears.
1. Modify conf/core-site.xml and add the following property
XML code
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
  <description>Number of minutes between trash checkpoints. If zero, the trash feature is disabled.</description>
</property>
The default is 0, which disables the trash. The value is in minutes; here I set it to 1 day (60*24 = 1440).
With trash enabled, data deleted with rm is not removed right away; it is moved into the Current folder of the .Trash directory.
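One thing worth knowing up front: once trash is enabled, every rm/rmr goes through it. If you really do want a permanent delete, later Hadoop releases offer a -skipTrash option for rm and rmr (whether your version has it is an assumption to verify):
Shell code
# permanently delete, bypassing the Recycle Bin (option availability depends on your Hadoop version)
hadoop fs -rmr -skipTrash input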
2. Test
1) Create the directory input
Shell code
hadoop/bin/hadoop fs -mkdir input
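Step 2 is not shown in the original walkthrough; presumably it uploaded the two test files (file01 and file02, which appear in the listing later). A plausible reconstruction, with the local file names assumed:
Shell code
# 2) upload the test files into input (local file names are an assumption)
hadoop/bin/hadoop fs -put file01 file02 input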
3) Delete directory input
Shell code
[root@master data]# hadoop fs -rmr input
Moved to Trash: hdfs://master:9000/user/root/input
4) List the current directory
Shell code
[root@master data]# hadoop fs -ls
Found 2 items
drwxr-xr-x   - root supergroup          0 2011-02-12 22:17 /user/root/.Trash
The input directory is gone, and an extra .Trash directory has appeared.
5) Restore the directory you just deleted
Shell code
[root@master data]# hadoop fs -mv /user/root/.Trash/Current/user/root/input /user/root/input
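Before restoring, it can help to browse the trash to confirm the path. The general restore pattern below assumes the default layout, where the most recent deletions sit under .Trash/Current (older checkpoints live in timestamp-named siblings); <user> and <path> are placeholders, not literal values:
Shell code
# browse the Recycle Bin first
hadoop fs -ls .Trash/Current
# general restore pattern for a path deleted from the user's home directory
hadoop fs -mv /user/<user>/.Trash/Current/user/<user>/<path> /user/<user>/<path>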
6) Check the recovered data
Shell code
[root@master data]# hadoop fs -ls input
Found 2 items
-rw-r--r--   3 root supergroup         22 2011-02-12 17:40 /user/root/input/file01
-rw-r--r--   3 root supergroup            2011-02-12 17:40 /user/root/input/file02
7) Delete the .Trash directory (garbage removal)
Shell code
[root@master data]# hadoop fs -rmr .Trash
Deleted hdfs://master:9000/user/root/.Trash
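Rather than deleting .Trash by hand, Hadoop also provides an expunge command that removes trash contents older than the configured interval (the exact checkpointing behavior varies a bit by version, so treat this as a sketch):
Shell code
# empty expired trash instead of removing .Trash manually
hadoop fs -expunge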
Space reclamation
File deletion and recovery: when a user or application deletes a file, the file is not immediately removed from HDFS. Instead, HDFS renames the file and moves it into the /trash directory. As long as the file remains in /trash, it can be recovered quickly. How long a file is kept in /trash is configurable; once that time expires, the NameNode deletes the file from the namespace, and the data blocks associated with the file are then freed as well. Note that this means there is a delay between the moment a user deletes a file and the moment HDFS free space actually increases.
While a deleted file remains in /trash, a user who wants it back can browse /trash and retrieve it. The /trash directory keeps only the most recent copy of each deleted file. /trash is no different from any other directory except in one respect: HDFS applies a special policy to it that automatically deletes its contents. The current default policy is to delete files that have been kept there for more than 6 hours; in the future this policy will be exposed as a configurable interface.
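On newer Hadoop releases (roughly 0.23/2.x and later; whether your cluster has this is an assumption, since the walkthrough above used an older release), the purge policy is in fact configurable alongside fs.trash.interval:
XML code
<!-- how often expired trash checkpoints are purged, in minutes; should not exceed fs.trash.interval -->
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value>
</property>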