Hadoop HDFS and HBase upgrade notes


Problem description: The cluster was previously running an older Hadoop 0.20.x release, which does not support append, so data was lost whenever HBase went down. Repopulating the lost data is laborious and thankless, so HDFS was upgraded, and HBase was upgraded along the way.

Note: Only the upgrade on one machine is demonstrated here; the other machines in the cluster follow the same steps, and the cluster can be used normally once the upgrade is complete.

1. Hadoop upgrade steps:

(1) Stop all MapReduce jobs on the cluster, as well as HBase (if HBase is running, stop it first, then ZooKeeper); a sketch of the shutdown commands follows step (2).

(2) Stop DFS (after HBase and ZooKeeper have been stopped, steps 1 and 2 can also be done in one go with the stop-all.sh script).
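As a rough sketch, assuming HBase, ZooKeeper, and Hadoop are installed under $HBASE_HOME, $ZOOKEEPER_HOME, and $HADOOP_HOME, and that ZooKeeper runs as a standalone ensemble rather than being managed by HBase (these paths and the layout are assumptions, not something stated above):

    # stop HBase first, then ZooKeeper (only if ZooKeeper runs outside HBase)
    $HBASE_HOME/bin/stop-hbase.sh
    $ZOOKEEPER_HOME/bin/zkServer.sh stop

    # stop MapReduce and HDFS separately ...
    $HADOOP_HOME/bin/stop-mapred.sh
    $HADOOP_HOME/bin/stop-dfs.sh
    # ... or in one go
    $HADOOP_HOME/bin/stop-all.sh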

(3) Delete the temporary data, i.e. the files under the directory given by hadoop.tmp.dir in core-site.xml.
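For example, assuming hadoop.tmp.dir points to /home/jxw/hadoop-tmp (a hypothetical value; read the real one from your core-site.xml first):

    # check the configured value
    grep -A 1 'hadoop.tmp.dir' $HADOOP_CONF_DIR/core-site.xml
    # then clear that directory
    rm -rf /home/jxw/hadoop-tmp/*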

(4) Back up the HDFS metadata, just in case; these are the files under the dfs.name.dir directory configured in hdfs-site.xml.
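A simple way to take that backup, assuming dfs.name.dir is /home/jxw/hadoop-name (hypothetical; use the value from your own hdfs-site.xml):

    # archive the NameNode metadata before touching anything else
    tar -czf /home/jxw/dfs-name-backup-$(date +%Y%m%d).tar.gz -C /home/jxw hadoop-name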

(5) Decompress the downloaded hadoop-1.0.3.tar.gz into the installation directory (here /home/jxw), rename it to hadoop, and configure the corresponding files under hadoop/conf.
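Roughly, assuming the tarball sits in /home/jxw:

    cd /home/jxw
    tar -xzf hadoop-1.0.3.tar.gz
    mv hadoop-1.0.3 hadoop
    # then edit hadoop/conf/core-site.xml, hdfs-site.xml and mapred-site.xml
    # with the same settings as the old installation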

(6) Configure the environment variables, such as HADOOP_HOME and HADOOP_CONF_DIR (necessary if the Hadoop installation directory used for the upgrade differs from the original one).
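For example, in ~/.bashrc (the paths are assumptions based on the layout above):

    export HADOOP_HOME=/home/jxw/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
    export PATH=$PATH:$HADOOP_HOME/bin
    # reload the shell configuration
    source ~/.bashrc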

(7) Upgrade with the start-dfs.sh -upgrade command under HADOOP_HOME/bin.
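Started on the NameNode, with an optional progress check, this might look like:

    $HADOOP_HOME/bin/start-dfs.sh -upgrade
    # poll until the upgrade reports as complete
    hadoop dfsadmin -upgradeProgress status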

(8) After the upgrade completes, use hadoop fsck / -blocks (from HADOOP_HOME/bin) to check whether HDFS is complete and healthy.
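For example, checking the whole filesystem from the root path:

    hadoop fsck / -blocks
    # a healthy result reports no missing or corrupt blocks and ends with
    # "The filesystem under path '/' is HEALTHY"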

(9) Once the cluster has been running normally for a period of time (if you are sure there is no data loss, you can also finalize immediately), use hadoop dfsadmin -finalizeUpgrade to finalize the upgrade (as long as you have not deleted the original Hadoop installation, you can still use its start-dfs.sh -rollback to return to the original version before finalizing).
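The two possible endings of the upgrade, as a quick reference:

    # keep the new version for good (the rollback image is discarded)
    hadoop dfsadmin -finalizeUpgrade

    # or, before finalizing, return to the pre-upgrade state
    # (run the scripts from the original Hadoop installation)
    stop-dfs.sh
    start-dfs.sh -rollback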

 

2. HBase upgrade steps:

(1) After the Hadoop upgrade succeeds, decompress hbase-0.94.1.tar.gz into the installation directory (here /home/jxw) and rename it to hbase.
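Roughly:

    cd /home/jxw
    tar -xzf hbase-0.94.1.tar.gz
    mv hbase-0.94.1 hbase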

(2) Configure the files under conf in the new HBase version (just as if you were installing HBase for the first time).
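A minimal conf/hbase-site.xml sketch for a distributed setup, written here as a shell heredoc; the host names (namenode, zk1, zk2, zk3) and the port are placeholders that must match your own cluster:

    cat > $HBASE_HOME/conf/hbase-site.xml <<'EOF'
    <configuration>
      <property>
        <name>hbase.rootdir</name>
        <value>hdfs://namenode:9000/hbase</value>
      </property>
      <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
      </property>
      <property>
        <name>hbase.zookeeper.quorum</name>
        <value>zk1,zk2,zk3</value>
      </property>
    </configuration>
    EOF
    # also fill in conf/regionservers and conf/hbase-env.sh; if the hadoop-core
    # jar bundled in hbase/lib differs from the cluster's Hadoop version,
    # replace it with the cluster's jar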

(3) Modify the environment variables, such as HBASE_HOME, as needed.
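For example:

    export HBASE_HOME=/home/jxw/hbase
    export PATH=$PATH:$HBASE_HOME/bin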

(4) Start ZooKeeper.
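If ZooKeeper runs as a standalone ensemble (rather than being managed by HBase via HBASE_MANAGES_ZK=true in hbase-env.sh), start it on each ZooKeeper node, for example:

    $ZOOKEEPER_HOME/bin/zkServer.sh start
    # verify it is up
    $ZOOKEEPER_HOME/bin/zkServer.sh status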

(5) Start the new HBase version.
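For example:

    $HBASE_HOME/bin/start-hbase.sh
    # confirm that the HMaster and HRegionServer processes are running
    jps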

(6) Use the web UI or the hbase shell to check whether the data in HBase is complete.
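A few quick checks from the hbase shell; 'mytable' is a placeholder for one of your own tables:

    hbase shell
    # then, at the shell prompt:
    status
    list
    count 'mytable'
    scan 'mytable', {LIMIT => 5}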

 

Now the upgrades of Hadoop and HBase are complete. You can check the Hadoop and HBase versions through the web UI or from the command line.
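For example, from the command line, or through the NameNode web UI on port 50070 and the HBase master web UI on port 60010 (the default ports for these versions):

    hadoop version
    hbase version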
