HDFS Standalone Installation


First, prepare the machine
10.211.55.8, using ports 9000 (HDFS NameNode RPC), 50070 (NameNode web UI), and 8088 (YARN ResourceManager web UI)
Second, installation
1. Installing the Java Environment

Add the following to /etc/profile:

export JAVA_HOME=/data/program/software/java8
export JRE_HOME=/data/program/software/java8/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

Run source /etc/profile to make the changes take effect.
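To confirm the variables took effect (assuming the JDK was unpacked to /data/program/software/java8 as above), a quick check after sourcing the profile might look like this:

java -version          # should report a 1.8.x JDK build
echo $JAVA_HOME        # should print /data/program/software/java8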

2. Modify the hostname

vi /etc/hosts
Add the line: 10.211.55.8 bigdata2
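A quick check that the mapping is in place (bigdata2 and 10.211.55.8 are the hostname and IP used throughout this guide):

grep bigdata2 /etc/hosts        # expect: 10.211.55.8 bigdata2
ping -c 1 bigdata2              # should resolve to 10.211.55.8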

3. Turn off the firewall

Stop the firewall: service iptables stop
Disable it permanently: chkconfig iptables off
Check the firewall status: service iptables status

4. Add the Hadoop user and group

Create the group: groupadd hadoop
Create the hadoop user and add it to the hadoop group: useradd -g hadoop hadoop
Set its password: passwd hadoop
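Optionally verify the new account (a quick check, not part of the original steps):

id hadoop              # expect a hadoop uid and the hadoop group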

5. Download and install Hadoop

cd /data/program/software
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.8.1/hadoop-2.8.1.tar.gz
Extract the archive: tar -zxf hadoop-2.8.1.tar.gz
Give the hadoop user ownership of hadoop-2.8.1: chown -R hadoop:hadoop hadoop-2.8.1

6. Create the data directories

mkdir -p /data/dfs/name
mkdir -p /data/dfs/data
mkdir -p /data/tmp
Give the hadoop user ownership of /data: chown -R hadoop:hadoop /data
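Optionally confirm the ownership change took effect:

ls -ld /data /data/dfs/name /data/dfs/data /data/tmp    # each entry should be owned by hadoop:hadoop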

7. Configure etc/hadoop/core-site.xml

cd /data/program/software/hadoop-2.8.1
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://bigdata2:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/data/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.hadoop.groups</name>
        <value>*</value>
    </property>
</configuration>
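As a quick sanity check (not part of the original steps), hdfs getconf can print the value Hadoop actually resolves for fs.defaultFS once the file is saved:

bin/hdfs getconf -confKey fs.defaultFS    # should print hdfs://bigdata2:9000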

8. Configure etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/data/dfs/name</value>
        <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/data/dfs/data</value>
        <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

9. Configure etc/hadoop/mapred-site.xml
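The Hadoop 2.8 binary distribution ships only a template for this file; if etc/hadoop/mapred-site.xml does not exist yet, copy the template first (paths as used throughout this guide):

cd /data/program/software/hadoop-2.8.1
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml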

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

10. Configure etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

11. Configure etc/hadoop/slaves

bigdata2

12. Set the Hadoop environment variables

vi /etc/profile
HADOOP_HOME=/data/program/software/hadoop-2.8.1
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
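After running source /etc/profile again, the hadoop command should be on the PATH; a quick check:

hadoop version         # should report Hadoop 2.8.1
echo $HADOOP_HOME      # should print /data/program/software/hadoop-2.8.1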

13. Configure passwordless SSH

Switch to the hadoop user: su hadoop
Typing cd with no arguments switches to the /home/hadoop home directory: cd
Create the .ssh directory: mkdir .ssh
Generate a key pair (press Enter at every prompt): ssh-keygen -t rsa
Enter the .ssh directory: cd .ssh
Copy the public key into authorized_keys: cp id_rsa.pub authorized_keys
Go back to the home directory: cd
Give .ssh 700 permissions: chmod 700 .ssh
Give the files under .ssh 600 permissions: chmod 600 .ssh/*
Test the login: ssh bigdata2

14. Run Hadoop

First format the NameNode: bin/hadoop namenode -format
For simplicity, start all of the services at once: sbin/start-all.sh
Check the running daemons: jps
Open the HDFS management interface: http://10.211.55.8:50070
See the running YARN nodes and tasks: http://10.211.55.8:8088/cluster/nodes
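jps prints each daemon's process ID followed by its name; on a healthy single-node setup started with start-all.sh, the daemons you should typically see are:

NameNode
DataNode
SecondaryNameNode
ResourceManager
NodeManager

If any of these are missing, check the corresponding log file under $HADOOP_HOME/logs.
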
15. Testing

Create a directory: bin/hadoop fs -mkdir /test
Upload a txt file into /test: bin/hadoop fs -put /home/hadoop/first.txt /test
List the files in the directory: bin/hadoop fs -ls /test
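To read the uploaded file back (first.txt is the sample file from the previous command) and optionally clean it up:

bin/hadoop fs -cat /test/first.txt     # print the file's contents from HDFS
bin/hadoop fs -rm /test/first.txt      # remove the test file if desired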

If the following error occurs during startup, change JAVA_HOME in /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh to the absolute path of the JDK.

[hadoop@bigdata2 hadoop-2.8.1]$ sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/07/25 13:52:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>
17/07/25 13:52:49 WARN conf.Configuration: bad conf file: element not <property>
Starting namenodes on [bigdata2]
bigdata2: Error: JAVA_HOME is not set and could not be found.
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 24:e2:40:a1:fd:ac:68:46:fb:6b:6b:ac:94:ac:05:e3.
Are you sure you want to continue connecting (yes/no)?
bigdata2: Error: JAVA_HOME is not set and could not be found.
^Clocalhost: Host key verification failed.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Error: JAVA_HOME is not set and could not be found.
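A minimal fix, assuming the JDK path used earlier in this guide: open etc/hadoop/hadoop-env.sh and replace the JAVA_HOME line (which typically reads export JAVA_HOME=${JAVA_HOME}) with the absolute path, then start the services again.

# /data/program/software/hadoop-2.8.1/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/data/program/software/java8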