Hadoop 2.3 Installation Process and Problem Solving


Three servers: yiprod01, yiprod02, and yiprod03. yiprod01 is the namenode, yiprod02 is the secondarynamenode, and all three run datanodes.

The configuration described here must be identical on all three servers.
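For this layout, the etc/hadoop/slaves file on yiprod01 would simply list the datanode hosts (an assumed illustration based on the layout above; the original article does not show this file):

yiprod01
yiprod02
yiprod03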

0. Installation prerequisites:

0.1 Ensure that Java is installed.

After installing Java, you must have the JAVA_HOME configuration in .bash_profile:

export JAVA_HOME=/home/yimr/local/jdk


0.2 Ensure that the three machines have established SSH trust relationships (passwordless SSH between them). For details, see another article.
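A minimal sketch of setting up that trust relationship with SSH keys, assuming the yimr user on all three hosts (the user name is taken from the paths used elsewhere in this article):

# Generate a key pair on yiprod01 (accept the defaults, empty passphrase)
ssh-keygen -t rsa

# Copy the public key to every host, including yiprod01 itself
ssh-copy-id yimr@yiprod01
ssh-copy-id yimr@yiprod02
ssh-copy-id yimr@yiprod03

# Verify that login no longer prompts for a password
ssh yiprod02 hostname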


1. core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/sdc/tmp/hadoop-${user.name}</value>
    </property>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://yiprod01:9000</value>
    </property>
</configuration>


2. hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>yiprod02:9001</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/yimr/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/yimr/dfs/data</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

3. hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.6.0_27

4. mapred-site.xml

<configuration>
    <!-- Use YARN as the framework for resource allocation and task management -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- JobHistory Server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>yiprod01:10020</value>
    </property>
    <!-- JobHistory web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>yiprod01:19888</value>
    </property>
    <!-- Maximum number of streams merged at once when sorting files -->
    <property>
        <name>mapreduce.task.io.sort.factor</name>
        <value>100</value>
    </property>
    <property>
        <name>mapreduce.reduce.shuffle.parallelcopies</name>
        <value>50</value>
    </property>
    <property>
        <name>mapred.system.dir</name>
        <value>file:/home/yimr/dfs/mr/system</value>
    </property>
    <property>
        <name>mapred.local.dir</name>
        <value>file:/home/sdc/dfs/mr/local</value>
    </property>
    <!-- Memory each Map task requests from the ResourceManager -->
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>1536</value>
    </property>
    <!-- JVM options for the container requested in the Map stage -->
    <property>
        <name>mapreduce.map.java.opts</name>
        <value>-Xmx1024M</value>
    </property>
    <!-- Memory each Reduce task requests from the ResourceManager -->
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <!-- JVM options for the container requested in the Reduce stage -->
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xmx1536M</value>
    </property>
    <!-- Memory limit for the sort buffer -->
    <property>
        <name>mapreduce.task.io.sort.mb</name>
        <value>512</value>
    </property>
</configuration>

5. yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>yiprod01:8080</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>yiprod01:8081</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>yiprod01:8082</value>
    </property>
    <!-- Total memory each NodeManager can allocate to containers -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>yiprod01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>yiprod01:8088</value>
    </property>
</configuration>

6. Format the namenode

If the namenode has not been formatted, HDFS fails to start with:

java.io.IOException: NameNode is not formatted.

Format it before the first start:

hadoop namenode -format
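After formatting, a minimal sketch of bringing the cluster up and checking the daemons (run from the Hadoop installation directory on yiprod01; these are the sbin scripts referred to in the log output of section 7.1 below):

# Start HDFS (namenode, secondarynamenode, datanodes) and YARN (resourcemanager, nodemanagers)
sbin/start-dfs.sh
sbin/start-yarn.sh

# Each host should list its expected daemons
jps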


7. Problem Solving

7.1 32-bit native library problem

Symptom:

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
14/08/01 11:59:17 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/yimr/local/hadoop-2.3.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
yiprod01]
sed: -e expression #1, char 6: unknown option to `s'
-c: Unknown cipher type 'cd'
The authenticity of host 'yiprod01 (192.168.1.131)' can't be established.
RSA key fingerprint is ac:9e:e0:db:d8:7a:29:5c:a1:d4:7f:4c:38:c0:72:30.
Are you sure you want to continue connecting (yes/no)? 64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
The reason is that the downloaded Hadoop distribution ships native libraries compiled for 32-bit, while the JVM here is 64-bit.

file libhadoop.so.1.0.0

libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
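To confirm the mismatch on your own machine, compare the OS and JVM word size against the 32-bit library (a quick check, not part of the original article):

# OS architecture: x86_64 means a 64-bit OS
uname -m

# JVM word size: look for "64-Bit Server VM" in the output
java -version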

Temporary solution:

Modify hadoop-env.sh under etc/hadoop.

Add the following two lines at the end

export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native

export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=$HADOOP_PREFIX/lib"

But the following warning still appears:

14/08/01 11:46:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


Hadoop can now be started normally. A separate article will describe how to solve this problem completely.



Hadoop Installation Problems

Hadoop 1.0 is available now.
To put it simply: is your Java JDK environment variable set?
Perform the following steps:
1. Install the JDK as the root user.
2. Create a hadoop user as the root user.
3. Install Hadoop under the hadoop user (steps 1 to 4 as you mentioned above).
4. Modify /home/hadoop/.bash_profile and set the JDK and Hadoop environment variables (see the sketch after this list).
5. Install ssh (ssh is also required for a pseudo-distributed cluster).
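A minimal sketch of those environment variables in /home/hadoop/.bash_profile (the install paths are assumptions borrowed from the setup earlier in this article; adjust them to your own):

# Assumed install locations; adjust to your own paths
export JAVA_HOME=/home/yimr/local/jdk
export HADOOP_PREFIX=/home/yimr/local/hadoop-2.3.0
export PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH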

Hadoop Installation Problems

The tutorial is old. In the new version of Hadoop, the startup scripts live under sbin. start-all.sh has been gradually deprecated; use the new startup scripts instead:
sbin/hadoop-daemon.sh --script hdfs start datanode
sbin/hadoop-daemon.sh --script hdfs start namenode
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start proxyserver
sbin/mr-jobhistory-daemon.sh start historyserver

[NOTE 1] Do not run the above commands blindly. You need to plan which nodes are namenodes, which are datanodes, and which node runs the resourcemanager, proxyserver, and historyserver.
[NOTE 2] sbin/hadoop-daemon.sh --script hdfs start datanode only starts the daemon on the current node;
sbin/hadoop-daemons.sh --script hdfs start datanode (note the plural "daemons") starts the datanodes listed in etc/hadoop/slaves.
[NOTE 3] The startup helper libexec/hadoop-config.sh in the latest version (hadoop 2.2.0) has a bug. If you want to use
sbin/hadoop-daemons.sh --hosts your_host_files --script hdfs start datanode
to start nodes, change line 98 of libexec/hadoop-config.sh to:
elif [ "--hostnames" = "$1" ]
Also be careful with the --hosts your_host_files option: the user-specified your_host_files must be placed under etc/hadoop, but only the file name (without any path) is given at startup. This is another defect of the startup script.
[NOTE 4] You can also use
sbin/hadoop-daemons.sh --hostnames your_host_name --script hdfs start datanode
to start a single node.
Good luck. Feel free to ask if anything is unclear.

