CDH Upgrade Record (5.1 -> 5.2)

Source: Internet
Author: User
Tags: safe mode, sqoop
CM Upgrade

Notes before starting: unify the root password across all hosts, and take care not to mistakenly delete the cluster backup files.

Log in to the host where CM Server is installed and check the database settings:

    cat /etc/cloudera-scm-server/db.properties

Log in to the PostgreSQL database (enter the password when prompted):

    psql -U scm -p 7432

Back up the CM data:

    pg_dump -h cdhmaster -p 7432 -U scm > /tmp/scm_server_db_backup.$(date +%Y%m%d)

Check that the backup file was generated under /tmp; the files under /tmp must not be deleted for the duration of the upgrade.

Stop the Impala, Hue, and Hive services.

Stop the CM server:

    sudo service cloudera-scm-server stop

Stop the database the CM server depends on:

    sudo service cloudera-scm-server-db stop

If an agent is also running on this CM server host, stop it too:

    sudo service cloudera-scm-agent stop

Edit the yum repo file (sudo vim /etc/yum.repos.d/cloudera-manager.repo) so that it reads:

    [cloudera-manager]
    # Packages for Cloudera Manager, Version 5, on RedHat or CentOS 6 x86_64
    name=Cloudera Manager
    baseurl=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/
    gpgkey=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
    gpgcheck=1

Install the upgrade, starting with a clean yum cache:

    sudo yum clean all
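Editing the repo file interactively with vim works, but the same update can be scripted for repeatability across hosts. A minimal sketch, writing to a temporary path rather than /etc/yum.repos.d so it can be tried safely (the target path and repo contents are the ones from this record):

```shell
#!/bin/sh
# Sketch: write the CM5 yum repo definition non-interactively.
# In production this would target /etc/yum.repos.d/cloudera-manager.repo;
# a temporary path is used here for illustration.
REPO_FILE="${TMPDIR:-/tmp}/cloudera-manager.repo"

cat > "$REPO_FILE" <<'EOF'
[cloudera-manager]
# Packages for Cloudera Manager, Version 5, on RedHat or CentOS 6 x86_64
name=Cloudera Manager
baseurl=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/5/
gpgkey=http://archive.cloudera.com/cm5/redhat/6/x86_64/cm/RPM-GPG-KEY-cloudera
gpgcheck=1
EOF

# Confirm the file was written with the expected section header.
grep -q '^\[cloudera-manager\]' "$REPO_FILE" && echo "repo file written: $REPO_FILE"
```

The same script can then be pushed to every node that needs the updated repo before running yum.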
    sudo yum upgrade 'cloudera-*'

Check the installed packages:

    rpm -qa 'cloudera-manager-*'

Start the CM server database:

    sudo service cloudera-scm-server-db start

Start the CM server:

    sudo service cloudera-scm-server start

Log in to http://172.20.0.83:7180/ and upgrade the agents through the installation wizard.

Note: if you upgrade the JDK, the hbase shell will be unavailable. You need to restart CDH after updating JAVA_HOME and upgrading CM.

CDH Upgrade

Stop all cluster services.

Back up the NameNode metadata: enter the NameNode data directory and run:

    tar -cvf /root/nn_backup_data.tar ./*

Download the parcels -> distribute -> activate the package -> close (do not restart).

Start the ZK service. Enter the HDFS service -> upgrade the HDFS metadata -> start the NameNode -> start the remaining HDFS roles. Wait for the NameNode to respond to RPC and for HDFS to exit safe mode.

Back up the Hive metastore database:

    mysqldump -h hostname -ucdhhive -p111111 cdhhive > /tmp/database-backup.sql

Enter the Hive service -> update the Hive metastore database schema.

Update the Oozie ShareLib: Oozie -> install Oozie ShareLib -> create the Oozie user ShareLib -> create the Oozie user dir.

Update Sqoop: go to the Sqoop service -> update Sqoop -> update the Sqoop2 server.

Update Spark (in brief: you can uninstall the original version first, then install the new version directly).

Start all cluster services in order: zk -> hdfs -> spark -> flume -> hbase -> hive -> impala -> oozie -> sqoop2 -> hue.

Distribute the client files: deploy client configuration -> deploy HDFS client configuration -> deploy HBase client configuration -> deploy YARN client configuration -> deploy Hive client configuration.

Delete the old version packages: edit /etc/profile (sudo vim /etc/profile, then source /etc/profile) so that JAVA_HOME points at jdk1.7.0_67-cloudera, then:

    sudo yum remove bigtop-utils bigtop-jsvc bigtop-tomcat hue-common sqoop2-client

Restart the agent:

    sudo service cloudera-scm-agent restart

Finalize the HDFS metadata upgrade: HDFS service -> Instances -> NameNode -> Actions -> Finalize Metadata Upgrade.

Upgrade preparation checklist: disable automatic agent restart; download the parcel packages in advance; unify the root password; sort out which databases need to be backed up and the backup command for each.
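To answer the closing question about which databases need backing up and with what commands: this record relies on three backups, which can be collected into one reviewable plan. A sketch that echoes the commands rather than running them, so the plan can be checked before execution (the hostnames cdhmaster and hostname, the cdhhive credentials, and the output paths are the ones used in this record; adjust them for your cluster):

```shell
#!/bin/sh
# Sketch: the three backups this upgrade relies on, collected in one place.
# Commands are echoed, not executed, so the plan can be reviewed first.
STAMP=$(date +%Y%m%d)

backup_plan() {
    # 1. CM server database (embedded PostgreSQL on port 7432)
    echo "pg_dump -h cdhmaster -p 7432 -U scm > /tmp/scm_server_db_backup.$STAMP"
    # 2. NameNode metadata (run from inside the NameNode data directory)
    echo "tar -cvf /root/nn_backup_data.tar ./*"
    # 3. Hive metastore (MySQL)
    echo "mysqldump -h hostname -ucdhhive -p111111 cdhhive > /tmp/database-backup.sql"
}

backup_plan
```

Replacing each echo with the real command (or piping the output to sh) turns the reviewed plan into the actual backup run.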
