Hadoop File Corruption Solution
Today I resized the cluster and reinstalled the operating system on two of the machines that had previously been part of it. When I started Hadoop again, it reported errors.
Cause: the replication factor (dfs.replication) configured in hdfs-site.xml was 1, and the data on those two machines had been wiped. Some blocks therefore had no surviving replica and could not be recovered, HDFS reported errors, and as a result the HBase web UI on port 60010 could not be reached.
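For reference, here is a minimal sketch of how to confirm the effective replication factor; the property name dfs.replication is standard, while the value shown is only illustrative:

    # Show the replication factor the cluster is actually using
    hdfs getconf -confKey dfs.replication

    # In hdfs-site.xml the setting looks like this; a value of 1 means no redundancy,
    # so wiping a single DataNode's disk loses data permanently:
    #   <property>
    #     <name>dfs.replication</name>
    #     <value>1</value>
    #   </property>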
Solution: run hadoop fsck / to list the corrupted files. Since the missing blocks have no surviving replica, the damaged files cannot be recovered and can only be deleted with hadoop fsck / -delete.
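A sketch of the full check-and-clean sequence, assuming the whole filesystem is checked from the root path / (on newer releases hdfs fsck is the preferred spelling of the same command); /path/to/file is a placeholder, not a path from this cluster:

    # 1. Report overall filesystem health and list files with missing or corrupt blocks
    hadoop fsck /
    hadoop fsck / -list-corruptfileblocks

    # 2. Inspect a specific file's blocks and replica locations before deciding
    #    (replace the placeholder path with an actual corrupted file)
    hadoop fsck /path/to/file -files -blocks -locations

    # 3. Delete the unrecoverable files, or use -move to move them to /lost+found instead
    hadoop fsck / -delete

Note that -delete removes the affected files outright, so it is worth reviewing the corrupt-file list first and using -move if the partial contents might still be useful.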