Sometimes HBase's data has to be forcibly deleted, for example when the raw data volume is very large and no longer needs to be kept, or when HBase refuses to start at all.
The deletion itself is simple: run hadoop fs -rm -r /hbase to remove the raw files HBase stores on HDFS. (You can, of course, delete just the directory of a specific table instead.)
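A minimal sketch of that step, written as a dry run so each command is echoed rather than executed (the /hbase root is the default hbase.rootdir and may differ in your install; the per-table path /hbase/data/default/mytable follows the HBase 0.96+ layout, and "mytable" is a hypothetical table name):

```shell
# Dry-run helper: print each command instead of running it.
# To actually execute, change the body to: run() { "$@"; }
run() { echo "+ $*"; }

run hadoop fs -rm -r /hbase                         # wipe everything HBase stores on HDFS
run hadoop fs -rm -r /hbase/data/default/mytable    # or delete only one table's files
```

Stopping HBase before removing its files avoids region servers writing into a directory that is being deleted.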
However, after the deletion, restarting HBase, and trying to create the data table again, I hit a "table already exists" error. Then I remembered: this error almost certainly means ZooKeeper still holds the table's metadata.
So I logged into ZooKeeper with zkCli.sh, ran the command rmr /hbase, and restarted HBase; sure enough, creating the table then went through smoothly.
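The cleanup step can be sketched the same way, again as a dry run (assumes zkCli.sh is on the PATH and ZooKeeper is reachable at localhost:2181; adjust to your quorum address):

```shell
# Dry-run helper: print each command instead of running it.
# To actually execute, change the body to: run() { "$@"; }
run() { echo "+ $*"; }

run zkCli.sh -server localhost:2181 rmr /hbase   # drop HBase's stale znode tree
run start-hbase.sh                               # restart; HBase recreates /hbase in ZooKeeper
```

Note that rmr is deprecated in newer ZooKeeper releases in favor of deleteall /hbase, which does the same recursive delete.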
Having worked with HBase over the past few days, I find it rather cumbersome that it is so tightly bound to ZooKeeper; then again, that coupling is exactly what makes recovery possible in situations like this.