Tarball Installation of CDH 5.2.1 (Part 1): Basic Services HDFS/MR2/YARN


Recently, our company's cloud hosts became available on request, so I grabbed a few machines and put together a small cluster to make it easier to debug the components we currently use. This series is just a personal memo, done in whatever way was most convenient, and not necessarily how a proper ops team would do it. Also, because my focus is limited (currently mainly Spark and Storm), I will not cover every component in CDH; I only record what I need as I go, as a reminder to myself. Although this may look like a plain installation walkthrough, it is still worth understanding the current CDH software stack. Everything here revolves around the CDH 5.2.1 release.
There are several reasons for choosing CDH 5.2.x: 1. It integrates MR2 and is backward compatible with MR1, so YARN can be used for scheduling. 2. It integrates Spark, which can take advantage of HDFS and YARN; this is the main area I want to explore. 3. It includes the Cloudera Search feature, which originated in Apache Solr and is worth watching. The components mentioned above can be downloaded here: http://www.cloudera.com/content/cloudera/en/documentation.html
Contents:
0. Installation preparation
I. Configuration
II. Single-machine start
III. Distribution of configuration
IV. Testing cluster functionality
0. Installation preparation
Before installing, do the routine checks and setup. This part is simple but fiddly, and information on each item is easy to find online, so I will not repeat it here:
1. Firewall
2. Modify /etc/hosts
3. Install and configure the JDK
4. ssh (passwordless login), etc.
A minimal sketch of these steps follows this list.
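Roughly, assuming CentOS-style hosts named master and slave1 (the IPs and hostnames below are placeholders, not from the original article):

    # 1. Stop the firewall (iptables on older CentOS releases)
    service iptables stop
    chkconfig iptables off

    # 2. Make every node resolvable by name
    cat >> /etc/hosts <<'EOF'
    192.168.0.1 master
    192.168.0.2 slave1
    EOF

    # 3. Point the environment at the installed JDK (path matches hadoop-env.sh below)
    export JAVA_HOME=/apps/svr/jdk6
    export PATH=$JAVA_HOME/bin:$PATH

    # 4. Passwordless ssh from master to every node
    ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
    ssh-copy-id root@slave1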
After the above is done, work on master first: unpack hadoop-2.5.0-cdh5.2.1.tar.gz and set up a soft link for the config path, /apps/conf/hadoop_conf -> /apps/svr/hadoop/etc, as sketched below.
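Concretely, assuming the tarball was downloaded to /apps/svr (these are the same commands repeated later on the datanode):

    cd /apps/svr
    tar xvzf hadoop-2.5.0-cdh5.2.1.tar.gz
    # version-independent soft link to the install
    ln -s hadoop-2.5.0-cdh5.2.1 hadoop
    # soft link so the config lives under /apps/conf
    ln -s /apps/svr/hadoop/etc /apps/conf/hadoop_conf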
I. Configuration
The configuration files we need to edit are here: /apps/conf/hadoop_conf/hadoop
Details are as follows:
  • conf/core-site.xml

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://master:9000</value>
    </property>

  • conf/hdfs-site.xml

    <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:///apps/dat/hard_disk/0/dfs/nn</value>
    </property>
    <property>
      <name>dfs.datanode.data.dir</name>
      <value>/apps/dat/hard_disk/0/dfs/dn</value>
    </property>

  • conf/yarn-site.xml

    <property>
      <name>yarn.resourcemanager.address</name>
      <value>master:8032</value>
    </property>
    <property>
      <name>yarn.web-proxy.address</name>
      <value>master:8042</value>
    </property>
    <property>
      <name>yarn.resourcemanager.scheduler.address</name>
      <value>master:8030</value>
    </property>
    <property>
      <name>yarn.resourcemanager.admin.address</name>
      <value>master:8141</value>
    </property>
    <property>
      <name>yarn.resourcemanager.webapp.address</name>
      <value>master:8088</value>
    </property>

  • conf/slaves

    master
    slave1

  • conf/hadoop-env.sh

    export JAVA_HOME=/apps/svr/jdk6
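Note that the snippets above show only the <property> elements. In the actual *-site.xml files, each set must sit inside the top-level <configuration> element, for example:

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
      </property>
    </configuration>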



II. Single-machine start
Before starting, format HDFS:
bin/hdfs namenode -format
After the format completes, you will see the following files under the configured NN path:
[root@tony-master hadoop]# ll /apps/dat/hard_disk/0/dfs/nn/
total 4
drwxr-xr-x 2 root root 4096 Dec 30 16:21 current
[root@tony-master hadoop]# ll /apps/dat/hard_disk/0/dfs/nn/current/
total 16
-rw-r--r-- 1 root root  351 Dec 30 16:21 fsimage_0000000000000000000
-rw-r--r-- 1 root root      Dec 30 16:21 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root    2 Dec 30 16:21 seen_txid
-rw-r--r-- 1 root root      Dec 30 16:21 VERSION
With the configuration complete, we can now do a single-machine test start on master.
Start DFS:
[root@tony-master sbin]# ./start-dfs.sh
Start YARN:
sbin/start-yarn.sh

III. Distribution of configuration
After verifying that the standalone start works, distribute the configuration to the other nodes. First check which files have changed, so we know what to distribute:
[root@tony-master hadoop]# ls -alt
total 152
drwxr-xr-x 2 1106 592 4096 Dec 30 17:26 .
-rw-r--r-- 1 1106 592      Dec 30 17:26 slaves
-rw-r--r-- 1 1106 592 3484 Dec 30 17:01 hadoop-env.sh
-rw-r--r-- 1 1106 592 4567 Dec 30 16:59 yarn-env.sh
-rw-r--r-- 1 1106 592 1197 Dec 30 16:18 yarn-site.xml
-rw-r--r-- 1 1106 592  997 Dec 30 16:15 hdfs-site.xml
-rw-r--r-- 1 1106 592  863 Dec 30 16:11 core-site.xml
Also check the new directories that were created on master (the datanode paths will be needed on the workers too):
[root@tony-master sbin]# history | grep "mkdir -p"
  556  mkdir -p /apps/dat/hard_disk/0/dfs/dn
  575  mkdir -p /apps/logs/hadoop
  689  mkdir -p /apps/dat/hard_disk/0/dfs/nn
Then, on the datanode: unpack the tarball, create the configuration path, and scp over the configuration files.
tar xvzf hadoop-2.5.0-cdh5.2.1.tar.gz && ln -s hadoop-2.5.0-cdh5.2.1 hadoop
ln -s /apps/svr/hadoop/etc /apps/conf/hadoop_conf
scp slaves hadoop-env.sh yarn-env.sh yarn-site.xml hdfs-site.xml core-site.xml root@slave1:/apps/conf/hadoop_conf/hadoop
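With more than one datanode, the same copy can be wrapped in a loop. A rough sketch, assuming root ssh access is set up and every worker hostname except master is listed in conf/slaves:

    cd /apps/conf/hadoop_conf/hadoop
    for h in $(grep -v '^master$' slaves); do
      scp slaves hadoop-env.sh yarn-env.sh yarn-site.xml \
          hdfs-site.xml core-site.xml "root@${h}:/apps/conf/hadoop_conf/hadoop"
    done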
When the distribution is complete, start everything from the master node:
sbin/start-all.sh
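A quick way to confirm that the daemons actually came up is jps on each node. The expected process lists below are an inference from the services configured above, not output from the original article:

    # on master (NameNode/ResourceManager; master is also in conf/slaves,
    # so a DataNode and NodeManager run here too)
    jps
    # expect: NameNode, SecondaryNameNode, ResourceManager, DataNode, NodeManager

    # on the worker
    ssh root@slave1 jps
    # expect: DataNode, NodeManager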
IV. Testing cluster functionality
Test HDFS and MR2:
bin/hdfs dfs -put etc/hadoop /tony
bin/hadoop jar share/hadoop/mapreduce2/hadoop-mapreduce-examples-2.5.0-cdh5.2.1.jar grep /tony/hadoop /tony/test-mr2 'dfs[a-z.]+'

[root@tony-master hadoop]# hadoop fs -cat /tony/test-mr2/part-r-00000
14/12/31 16:16:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
6 dfs.audit.logger
4 dfs.class
3 dfs.server.namenode.
2 dfs.period
2 dfs.audit.log.maxfilesize
2 dfs.audit.log.maxbackupindex
1 dfsmetrics.log
1 dfsadmin
1 dfs.servers
1 dfs.namenode.name.dir
1 dfs.file
1 dfs.datanode.data.dir

Web UIs:
YARN: http://master:8088 (the configured yarn.resourcemanager.webapp.address)
NN: http://master:50070
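If a browser is not handy, a quick reachability check from the shell (assuming curl is installed):

    curl -sI http://master:50070 | head -1   # expect an HTTP 200 from the NN UI
    curl -sI http://master:8088  | head -1   # and from the YARN RM UI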
At this point, the basic services of the Hadoop cluster are ready to use.

