HBase's Export/Import mechanism can be used to implement backup and restore, and it also supports incremental backup. The following Python script sets up such a scheme: an incremental backup runs every day, and a full backup runs once a month, on the 15th.
- import time
- import datetime
- from datetime import date
- import sys
- import os
-
- tablename = sys.argv[1]
- backupDst = sys.argv[2]  # destination directory, with a trailing slash
- today = date.today()
- if today.day == 15:  # on the 15th of each month, do a full backup
-     backupSubFolder = backupDst + today.isoformat() + "-full"
-     cmd = "hbase org.apache.hadoop.hbase.mapreduce.Export %s %s" % (tablename, backupSubFolder)
- else:
-     # incremental backup: export only the edits written since yesterday
-     yesterday = today - datetime.timedelta(days=1)
-     todayTimeStamp = time.mktime(today.timetuple())
-     yesTimeStamp = time.mktime(yesterday.timetuple())
-     backupSubFolder = backupDst + today.isoformat()
-     # Export arguments: <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]],
-     # with timestamps in milliseconds
-     cmd = "hbase org.apache.hadoop.hbase.mapreduce.Export %s %s 1 %d %d" % (
-         tablename, backupSubFolder, int(yesTimeStamp * 1000), int(todayTimeStamp * 1000))
-
- print(cmd)
- os.system(cmd)
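To run the backup automatically, the script can be scheduled with cron. A minimal sketch, assuming the script is saved as /opt/scripts/hbase_backup.py (the path, table name, and schedule below are illustrative):

```shell
# run the backup every day at 01:00; the arguments are the table name
# and the destination directory (with a trailing slash)
0 1 * * * python /opt/scripts/hbase_backup.py mytable /backup/mytable/
```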
The restore step is simpler:
- hbase org.apache.hadoop.hbase.mapreduce.Import tablename restorefolder
Note that Import does not create the target table: the table must already exist before the restore. Therefore, if the table itself is damaged, you need to create a new empty table (with the same column families) and run the Import again.
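When restoring from a full backup plus later incrementals, the Import jobs must be replayed in chronological order: the most recent full dump first, then each daily incremental on top of it. A minimal sketch of assembling the commands, assuming the folder-naming scheme of the backup script above (the helper name `build_restore_cmds` is illustrative, not part of HBase):

```python
def build_restore_cmds(tablename, backupDst, folders):
    """Return the Import commands in replay order: the latest full
    backup first, then every incremental folder dated after it."""
    ordered = sorted(folders)  # ISO date names sort chronologically
    # everything before the last full dump is superseded by it
    fulls = [i for i, f in enumerate(ordered) if f.endswith("-full")]
    start = fulls[-1] if fulls else 0
    return ["hbase org.apache.hadoop.hbase.mapreduce.Import %s %s%s"
            % (tablename, backupDst, f) for f in ordered[start:]]
```

The commands can then be handed to `os.system` one by one, in order; skipping an incremental folder would silently lose that day's edits.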
Also, the HBase and ZooKeeper JARs and the HBase configuration directory must be added to HADOOP_CLASSPATH in hadoop-env.sh, with a statement such as:
- export HADOOP_CLASSPATH="/usr/lib/hadoop-hbase/hbaseXXX.jar:/usr/lib/hadoop-hbase/lib/zookeeperXXX.jar:/etc/hadoop-hbase/conf"